[jira] [Commented] (SOLR-11379) Config API to switch on/off lucene's logging infoStream

2018-02-28 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16381649#comment-16381649
 ] 

Amrit Sarkar commented on SOLR-11379:
-

This patch works when infoStream is at least mentioned in solrconfig.xml, 
whether disabled or enabled. Looking into how we can add it when it is absent; we 
probably need {{update-}} / {{create-}} plugins.

> Config API to switch on/off lucene's logging infoStream 
> 
>
> Key: SOLR-11379
> URL: https://issues.apache.org/jira/browse/SOLR-11379
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Reporter: Amrit Sarkar
>Priority: Minor
> Attachments: SOLR-11379.patch
>
>
> To enable infoStream logging in Solr, you currently need to edit solrconfig.xml and 
> reload the core.
> We intend to introduce a Config API command to enable/disable infoStream logging in 
> the near future.
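
For reference, the manual approach the description refers to is an {{indexConfig}} change plus a core reload; a rough sketch (the core name below is a placeholder):

{code}
<!-- solrconfig.xml: turn on Lucene's IndexWriter infoStream logging -->
<indexConfig>
  <infoStream>true</infoStream>
</indexConfig>
{code}

followed by a reload so the change takes effect:

{code}
curl "http://localhost:8983/solr/admin/cores?action=RELOAD&core=mycore"
{code}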






[jira] [Updated] (SOLR-11379) Config API to switch on/off lucene's logging infoStream

2018-02-28 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11379:

Attachment: SOLR-11379.patch

> Config API to switch on/off lucene's logging infoStream 
> 
>
> Key: SOLR-11379
> URL: https://issues.apache.org/jira/browse/SOLR-11379
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Reporter: Amrit Sarkar
>Priority: Minor
> Attachments: SOLR-11379.patch
>
>
> To enable infoStream logging in Solr, you currently need to edit solrconfig.xml and 
> reload the core.
> We intend to introduce a Config API command to enable/disable infoStream logging in 
> the near future.






[jira] [Created] (SOLR-12047) Solr 7.x restart can fail to load some cores

2018-02-28 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12047:


 Summary: Solr 7.x restart can fail to load some cores
 Key: SOLR-12047
 URL: https://issues.apache.org/jira/browse/SOLR-12047
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 7.0
Reporter: Varun Thacker


I've seen this with 2 users running Solr 7.2.1 in the last 2 days: a 
restart fails to load some cores on a node.

Here's the stack trace:
{noformat}
date time ERROR 
(coreLoadExecutor-6-thread-2-processing-n:solr-number:8983_solr) [c:name 
s:shard r:core_node130 x:collection_shard_replica] o.a.s.c.ZkController 
org.apache.solr.common.SolrException: coreNodeName core_node130 does not exist 
in shard shard4: 
DocCollection(collection_name//collections/collection_name/state.json/2385)={
..collection state.json ...
}
at org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1687)
at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1590)
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1030)
...
at java.lang.Thread.run(Thread.java:748)
date time ERROR 
(coreContainerWorkExecutor-2-thread-1-processing-n:solr-number:8983_solr) [ ] 
o.a.s.c.CoreContainer Error waiting for SolrCore to be created
java.util.concurrent.ExecutionException: org.apache.solr.common.SolrException: 
Unable to create core [collection_shardX_replica_n129]
...
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.solr.common.SolrException: Unable to create core 
[collection_shardX_replica_n129]
...
... 5 more
Caused by: org.apache.solr.common.SolrException: 
at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1619)
at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1030)
... 7 more{noformat}
I created the Jira saying Solr 7.x since it's tied to legacyCloud being set to 
false by default starting with Solr 7.0.

In ZkController#checkStateInZk, the block that only runs with 
legacyCloud=false (L1645) does a waitForState (L1667) and waits only 3 
seconds. If we don't see the desired state within that window, the core will fail to load.

With big enough clusters this 3-second timeout is too low, and we should 
increase it to a large value so that we don't cause core initialization 
failures.

Line references are from Solr 7.2.1.
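
A minimal sketch of the kind of change suggested here, assuming a configurable wait (the property name, default value, and helper interface below are hypothetical, not Solr's actual API):

{code}
// Hypothetical sketch only: make the wait in ZkController#checkStateInZk configurable
// instead of hard-coding 3 seconds. The property name and StateCheck interface are
// assumptions made for illustration.
import java.util.concurrent.TimeUnit;

public class CheckStateTimeoutSketch {

  // hypothetical system property; the real change may pick a different name/default
  private static final int WAIT_SECONDS =
      Integer.getInteger("solr.checkStateInZk.timeoutSeconds", 30);

  interface StateCheck {
    boolean coreNodeNameExistsInShard();   // stands in for the cluster-state lookup
  }

  static boolean waitForCoreNodeName(StateCheck check) throws InterruptedException {
    long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(WAIT_SECONDS);
    while (System.nanoTime() < deadline) {
      if (check.coreNodeNameExistsInShard()) {
        return true;                       // desired state reached; the core can register
      }
      Thread.sleep(250);                   // poll until the deadline passes
    }
    return false;                          // caller raises SolrException and the core fails to load
  }
}
{code}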






[jira] [Commented] (SOLR-7821) example films data doesn't work consistently with data-driven schema (schemaless)

2018-02-28 Thread Swapnil M Mane (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16381574#comment-16381574
 ] 

Swapnil M Mane commented on SOLR-7821:
--

Nice, thanks [~ctargett] :)

> example films data doesn't work consistently with data-driven schema 
> (schemaless)
> -
>
> Key: SOLR-7821
> URL: https://issues.apache.org/jira/browse/SOLR-7821
> Project: Solr
>  Issue Type: Bug
>Reporter: Timothy Potter
>Priority: Major
> Attachments: tutorial-add-field.png
>
>
> On 5.2.1, tried to index the films data into a collection
> {code}
> [~/dev/lw/tools/solr-5.2.1]$ bin/solr -cloud
> Waiting to see Solr listening on port 8983 [/]  
> Started Solr server on port 8983 (pid=98797). Happy searching!
> [~/dev/lw/tools/solr-5.2.1]$ bin/solr create -c gettingstarted -shards 2
> Connecting to ZooKeeper at localhost:9983
> Uploading 
> /Users/timpotter/dev/lw/tools/solr-5.2.1/server/solr/configsets/data_driven_schema_configs/conf
>  for config gettingstarted to ZooKeeper at localhost:9983
> Creating new collection 'gettingstarted' using command:
> http://192.168.1.2:8983/solr/admin/collections?action=CREATE&name=gettingstarted&numShards=2&replicationFactor=1&maxShardsPerNode=2&collection.configName=gettingstarted
> {
>   "responseHeader":{
> "status":0,
> "QTime":2575},
>   "success":{"":{
>   "responseHeader":{
> "status":0,
> "QTime":2367},
>   "core":"gettingstarted_shard2_replica1"}}}
> [~/dev/lw/tools/solr-5.2.1]$ bin/post -c gettingstarted 
> example/films/films.json
> /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/java 
> -classpath /Users/timpotter/dev/lw/tools/solr-5.2.1/dist/solr-core-5.2.1.jar 
> -Dauto=yes -Dc=gettingstarted -Ddata=files 
> org.apache.solr.util.SimplePostTool example/films/films.json
> SimplePostTool version 5.0.0
> Posting files to [base] url 
> http://localhost:8983/solr/gettingstarted/update...
> Entering auto mode. File endings considered are 
> xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
> POSTing file films.json (application/json) to [base]
> SimplePostTool: WARNING: Solr returned an error #400 (Bad Request) for url: 
> http://localhost:8983/solr/gettingstarted/update
> SimplePostTool: WARNING: Response: 
> {"responseHeader":{"status":400,"QTime":285},"error":{"msg":"ERROR: 
> [doc=/en/quien_es_el_senor_lopez] Error adding field 'name'='¿Quién es el 
> señor López?' msg=For input string: \"¿Quién es el señor 
> López?\"","code":400}}
> SimplePostTool: WARNING: IOException while reading response: 
> java.io.IOException: Server returned HTTP response code: 400 for URL: 
> http://localhost:8983/solr/gettingstarted/update
> 1 files indexed.
> COMMITting Solr index changes to 
> http://localhost:8983/solr/gettingstarted/update...
> Time spent: 0:00:00.370
> {code}
> In the solr.log, I see:
> {code}
> ERROR - 2015-07-22 21:54:36.395; [gettingstarted shard2 core_node1 
> gettingstarted_shard2_replica1] org.apache.solr.common.SolrException; 
> org.apache.solr.common.SolrException: ERROR: 
> [doc=/en/quien_es_el_senor_lopez] Error adding field 'name'='¿Quién es el 
> señor López?' msg=For input string: "¿Quién es el señor López?"
> at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:176)
> at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
> at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
> at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
> at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:328)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
> at 
> 

[jira] [Commented] (SOLR-12011) Consistence problem when in-sync replicas are DOWN

2018-02-28 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16381510#comment-16381510
 ] 

Cao Manh Dat commented on SOLR-12011:
-

Found a bug in the previous patch: DUP should also skip replicas with state == 
DOWN.

> Consistence problem when in-sync replicas are DOWN
> --
>
> Key: SOLR-12011
> URL: https://issues.apache.org/jira/browse/SOLR-12011
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12011.patch, SOLR-12011.patch, SOLR-12011.patch
>
>
> Currently, we can run into a consistency problem when in-sync replicas are DOWN. 
> For example:
>  1. A collection with 1 shard, with 1 leader and 2 replicas
>  2. The nodes containing the 2 replicas go down
>  3. The leader receives an update A successfully
>  4. The node containing the leader goes down
>  5. The 2 replicas come back
>  6. One of them becomes leader --> but they shouldn't become leader, since they 
> missed the update A
> A solution to this issue:
>  * The idea here is that the term value of each replica (SOLR-11702) is 
> enough to tell whether a replica received the latest updates or not. Therefore 
> only replicas with the highest term can become the leader.
>  * There are a couple of things that need to be done on this issue:
>  ** When the leader receives its first update, its term should be changed from 0 
> -> 1, so further replicas added to the same shard won't be able to become 
> leader (their term = 0) until they finish recovery
>  ** For DOWN replicas, the leader also needs to check (in DUP.finish()) 
> that those replicas have a term less than the leader's before returning results to users
>  ** Just looking at the term value of a replica is not enough to tell us 
> whether that replica is in sync with the leader, because that replica might not have 
> finished the recovery process. We need to introduce another flag (stored on the 
> shard term node in ZK) to tell us whether the replica finished recovery. It 
> will look like this:
>  *** {"core_node1" : 1, "core_node2" : 0} (when core_node2 starts recovery) 
> --->
>  *** {"core_node1" : 1, "core_node2" : 1, "core_node2_recovering" : 1} 
> (when core_node2 finishes recovery) --->
>  *** {"core_node1" : 1, "core_node2" : 1}
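
A minimal sketch of the eligibility check this implies, under assumptions (plain Java, not the actual Solr patch; the map mirrors the shard term node shown above):

{code}
import java.util.Map;

public class ShardTermsSketch {

  /** terms as stored on the shard's term node in ZK, e.g. {"core_node1":1, "core_node2":0} */
  static boolean canBecomeLeader(Map<String, Long> terms, String coreNodeName) {
    long highest = terms.values().stream().mapToLong(Long::longValue).max().orElse(0L);
    long own = terms.getOrDefault(coreNodeName, 0L);
    // a replica that missed updates carries a lower term and must not be
    // elected until it finishes recovery and its term catches up
    return own == highest;
  }
}
{code}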






[jira] [Updated] (SOLR-12011) Consistence problem when in-sync replicas are DOWN

2018-02-28 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12011:

Attachment: SOLR-12011.patch

> Consistence problem when in-sync replicas are DOWN
> --
>
> Key: SOLR-12011
> URL: https://issues.apache.org/jira/browse/SOLR-12011
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12011.patch, SOLR-12011.patch, SOLR-12011.patch
>
>
> Currently, we can run into a consistency problem when in-sync replicas are DOWN. 
> For example:
>  1. A collection with 1 shard, with 1 leader and 2 replicas
>  2. The nodes containing the 2 replicas go down
>  3. The leader receives an update A successfully
>  4. The node containing the leader goes down
>  5. The 2 replicas come back
>  6. One of them becomes leader --> but they shouldn't become leader, since they 
> missed the update A
> A solution to this issue:
>  * The idea here is that the term value of each replica (SOLR-11702) is 
> enough to tell whether a replica received the latest updates or not. Therefore 
> only replicas with the highest term can become the leader.
>  * There are a couple of things that need to be done on this issue:
>  ** When the leader receives its first update, its term should be changed from 0 
> -> 1, so further replicas added to the same shard won't be able to become 
> leader (their term = 0) until they finish recovery
>  ** For DOWN replicas, the leader also needs to check (in DUP.finish()) 
> that those replicas have a term less than the leader's before returning results to users
>  ** Just looking at the term value of a replica is not enough to tell us 
> whether that replica is in sync with the leader, because that replica might not have 
> finished the recovery process. We need to introduce another flag (stored on the 
> shard term node in ZK) to tell us whether the replica finished recovery. It 
> will look like this:
>  *** {"core_node1" : 1, "core_node2" : 0} (when core_node2 starts recovery) 
> --->
>  *** {"core_node1" : 1, "core_node2" : 1, "core_node2_recovering" : 1} 
> (when core_node2 finishes recovery) --->
>  *** {"core_node1" : 1, "core_node2" : 1}






[JENKINS] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-9.0.4) - Build # 1 - Unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/1/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC

35 tests failed.
FAILED:  org.apache.solr.cloud.ZkControllerTest.testPublishAndWaitForDownStates

Error Message:
The ZkController.publishAndWaitForDownStates should have timed out but it didn't

Stack Trace:
java.lang.AssertionError: The ZkController.publishAndWaitForDownStates should 
have timed out but it didn't
at 
__randomizedtesting.SeedInfo.seed([ED49B38B90A056D3:CA46DF67313719EC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.ZkControllerTest.testPublishAndWaitForDownStates(ZkControllerTest.java:306)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.ZkControllerTest.testPublishAndWaitForDownStates

Error Message:
The ZkController.publishAndWaitForDownStates should have timed out but it didn't

Stack Trace:

[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 482 - Still unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/482/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

11 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.analysis.pattern.TestSimplePatternSplitTokenizer

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.pattern.TestSimplePatternSplitTokenizer_F25DA17402CBAA4E-001\bttc-001:
 java.nio.file.NoSuchFileException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.pattern.TestSimplePatternSplitTokenizer_F25DA17402CBAA4E-001\bttc-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.pattern.TestSimplePatternSplitTokenizer_F25DA17402CBAA4E-001\bttc-001:
 java.nio.file.NoSuchFileException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.pattern.TestSimplePatternSplitTokenizer_F25DA17402CBAA4E-001\bttc-001

at __randomizedtesting.SeedInfo.seed([F25DA17402CBAA4E]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestHardLinkCopyDirectoryWrapper

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C8EA7219E1EE29DC-001\tempDir-010:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C8EA7219E1EE29DC-001\tempDir-010

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C8EA7219E1EE29DC-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C8EA7219E1EE29DC-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C8EA7219E1EE29DC-001\tempDir-010:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C8EA7219E1EE29DC-001\tempDir-010
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C8EA7219E1EE29DC-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\misc\test\J0\temp\lucene.store.TestHardLinkCopyDirectoryWrapper_C8EA7219E1EE29DC-001

at __randomizedtesting.SeedInfo.seed([C8EA7219E1EE29DC]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

Re: Code Reviews

2018-02-28 Thread Tomas Fernandez Lobbe

> Like Dawid I hope we won't add strict requirements to get changes reviewed 
> before merging but I do agree with the general sentiment that reviews are 
> helpful and improve code quality.
This seems to be what the majority thinks, and I see the point; I'm concerned about 
this myself. I'm just not sure how to encourage people to submit patches for review and 
to review other people's patches more, since the option is there now and is not very 
frequently used. I'm open to suggestions if anyone has ideas.

> I really appreciate getting feedback on patches that I upload, including 
> negative feedback and I don't mind being pinged on issues if anyone thinks I 
> might have valuable feedback to give.
Exactly, same here. The times I've had my patches reviewed, I got very valuable 
feedback, including someone fixing something broken in my patch.

I encourage people to go and review some random commits and see if they could 
have given any valuable feedback. Someone could tell me "you can go, review, 
and create a new Jira with your proposed changes", but that doesn't usually 
happen, so back to my point.


> On Feb 28, 2018, at 5:11 PM, Adrien Grand  wrote:
> 
> Like Dawid I hope we won't add strict requirements to get changes reviewed 
> before merging but I do agree with the general sentiment that reviews are 
> helpful and improve code quality. I really appreciate getting feedback on 
> patches that I upload, including negative feedback and I don't mind being 
> pinged on issues if anyone thinks I might have valuable feedback to give.
> 
> I didn't know Solr had a CTR policy. I understand CTR and RTC have pros and 
> cons but since there seems to be agreement that we want more changes to be 
> reviewed I think RTC is better at encouraging a review culture: as a reviewer 
> it's easier to recommend that the change should be done in a totally 
> different way if that is what you think, and you also feel more useful since 
> someone considered that the change needs your pair of eyes before being 
> merged.
> 
> On Wed, Feb 28, 2018 at 21:07, Cassandra Targett wrote:
> On Wed, Feb 28, 2018 at 1:58 PM, Shawn Heisey wrote:
> 
> I notice in ZK issues that projects associated with Hadoop have an
> *automatic* machine-generated QA check whenever a patch is submitted on
> those projects.  This obviously is not the same as a real review by a
> person, but the info it outputs seems useful.
> 
> 
> 
> This is what SOLR-10912 intends to achieve. 
> 



[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 161 - Still Failing

2018-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/161/

No tests ran.

Build Log:
[...truncated 28782 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (29.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.3.0-src.tgz...
   [smoker] 31.7 MB in 0.03 sec (1175.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.tgz...
   [smoker] 73.2 MB in 0.06 sec (1191.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.0.zip...
   [smoker] 83.8 MB in 0.08 sec (1016.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6290 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (41.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.3.0-src.tgz...
   [smoker] 54.1 MB in 1.11 sec (48.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.tgz...
   [smoker] 151.0 MB in 2.37 sec (63.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.0.zip...
   [smoker] 152.0 MB in 1.53 sec (99.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.3.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.3.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.0-java8
   [smoker] 

[JENKINS] Lucene-Solr-BadApples-master-Linux (32bit/jdk1.8.0_162) - Build # 1 - Unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/1/
Java: 32bit/jdk1.8.0_162 -client -XX:+UseConcMarkSweepGC

29 tests failed.
FAILED:  org.apache.solr.cloud.TestUtilizeNode.test

Error Message:
no replica should be present in  127.0.0.1:37847_solr

Stack Trace:
java.lang.AssertionError: no replica should be present in  127.0.0.1:37847_solr
at 
__randomizedtesting.SeedInfo.seed([4D250634C4C6BBE0:C57139EE6A3AD618]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.TestUtilizeNode.test(TestUtilizeNode.java:100)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.ZkControllerTest.testPublishAndWaitForDownStates

Error Message:
The ZkController.publishAndWaitForDownStates should have timed out but it didn't

Stack Trace:
java.lang.AssertionError: The ZkController.publishAndWaitForDownStates should 
have timed out 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7199 - Still Unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7199/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

15 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_734A54EB0EEF72D6-001\4.7.2-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_734A54EB0EEF72D6-001\4.7.2-cfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_734A54EB0EEF72D6-001\4.7.2-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_734A54EB0EEF72D6-001\4.7.2-cfs-001

at __randomizedtesting.SeedInfo.seed([734A54EB0EEF72D6]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_734A54EB0EEF72D6-001\3.0.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_734A54EB0EEF72D6-001\3.0.0-nocfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_734A54EB0EEF72D6-001\3.0.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_734A54EB0EEF72D6-001\3.0.0-nocfs-001

at __randomizedtesting.SeedInfo.seed([734A54EB0EEF72D6]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.lucene.replicator.IndexReplicationClientTest.testConsistencyOnExceptions

Error Message:
Captured an uncaught exception in thread: Thread[id=86, 
name=ReplicationThread-index, 

[JENKINS] Lucene-Solr-repro - Build # 166 - Unstable

2018-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/166/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/1/consoleText

[repro] Revision: ef989124f345af46a905d1196bc589ef37b221c9

[repro] Repro line:  ant test  -Dtestcase=AutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=5A5B8BF221146591 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=mk 
-Dtests.timezone=Pacific/Port_Moresby -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=AtomicUpdateProcessorFactoryTest 
-Dtests.method=testMultipleThreads -Dtests.seed=5A5B8BF221146591 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sr-CS -Dtests.timezone=Europe/Ljubljana -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=SSLMigrationTest 
-Dtests.seed=5A5B8BF221146591 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ko-KR -Dtests.timezone=Etc/GMT+4 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestReplicationHandler 
-Dtests.method=doTestIndexFetchOnMasterRestart -Dtests.seed=5A5B8BF221146591 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=lv-LV -Dtests.timezone=America/Port-au-Prince 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ZkControllerTest 
-Dtests.method=testPublishAndWaitForDownStates -Dtests.seed=5A5B8BF221146591 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=sr-Latn-RS -Dtests.timezone=America/Blanc-Sablon 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestJmxIntegration 
-Dtests.method=testJmxOnCoreReload -Dtests.seed=5A5B8BF221146591 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=bg-BG -Dtests.timezone=Canada/Eastern -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLTRReRankingPipeline 
-Dtests.method=testDifferentTopN -Dtests.seed=70582F4F052F 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=en 
-Dtests.timezone=Pacific/Saipan -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
60984536b068d80fbb76190544c73fd245233154
[repro] git fetch
[repro] git checkout ef989124f345af46a905d1196bc589ef37b221c9

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/contrib/ltr
[repro]   TestLTRReRankingPipeline
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro]   TestReplicationHandler
[repro]   AutoAddReplicasIntegrationTest
[repro]   ZkControllerTest
[repro]   TestJmxIntegration
[repro]   SSLMigrationTest
[repro] ant compile-test

[...truncated 2563 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLTRReRankingPipeline" -Dtests.showOutput=onerror  
-Dtests.seed=70582F4F052F -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=en -Dtests.timezone=Pacific/Saipan 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 135 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 1329 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=30 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest|*.TestReplicationHandler|*.AutoAddReplicasIntegrationTest|*.ZkControllerTest|*.TestJmxIntegration|*.SSLMigrationTest"
 -Dtests.showOutput=onerror  -Dtests.seed=5A5B8BF221146591 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sr-CS 
-Dtests.timezone=Europe/Ljubljana -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 61494 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest
[repro]   3/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest
[repro]   5/5 failed: org.apache.solr.cloud.SSLMigrationTest
[repro]   5/5 failed: org.apache.solr.cloud.ZkControllerTest
[repro]   5/5 failed: org.apache.solr.core.TestJmxIntegration
[repro]   5/5 failed: org.apache.solr.handler.TestReplicationHandler
[repro]   5/5 failed: org.apache.solr.ltr.TestLTRReRankingPipeline

[repro] Re-testing 100% failures at the tip of master
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/contrib/ltr
[repro]   TestLTRReRankingPipeline
[repro]solr/core
[repro]   SSLMigrationTest
[repro]   TestJmxIntegration
[repro]   ZkControllerTest
[repro]   TestReplicationHandler
[repro] ant 

[jira] [Updated] (SOLR-12012) Replicas should skip doing recovery on startup if possible

2018-02-28 Thread Cao Manh Dat (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12012:

Summary: Replicas should skip doing recovery on startup if possible  (was: 
Replicas should skip doing recovery on startup)

> Replicas should skip doing recovery on startup if possible
> --
>
> Key: SOLR-12012
> URL: https://issues.apache.org/jira/browse/SOLR-12012
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> Right now a replica, when first loaded, always does recovery (unless the 
> replica is the leader). This leads to several problems, for example: 
> - a leaderless shard, if the current leader goes down while the replica is 
> doing recovery.
> - an unnecessary recovery process, if the replica is already in sync 
> with the leader.
> By using the term value introduced in SOLR-11702 we can skip the recovery process 
> if the replica's term equals the leader's term.
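
A minimal sketch of the startup decision described above, under assumptions (illustrative only, not the actual Solr code; the map mirrors the per-replica terms from SOLR-11702):

{code}
import java.util.Map;

public class StartupRecoverySketch {

  static boolean needsRecoveryOnStartup(Map<String, Long> shardTerms,
                                        String coreNodeName, String leaderCoreNodeName) {
    long ownTerm = shardTerms.getOrDefault(coreNodeName, 0L);
    long leaderTerm = shardTerms.getOrDefault(leaderCoreNodeName, 0L);
    // equal terms mean this replica saw every update the leader acknowledged,
    // so the usual recovery pass on startup can be skipped
    return ownTerm < leaderTerm;
  }
}
{code}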






[jira] [Assigned] (SOLR-12046) TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows build on jenkins.thetaphi.de ?

2018-02-28 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man reassigned SOLR-12046:
---

Assignee: Hoss Man

I committed LUCENE-8188 -- I'll keep an eye on the Jenkins failures to see if 
that makes this test start passing, or if there are other problems.

> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows 
> build on jenkins.thetaphi.de ? 
> 
>
> Key: SOLR-12046
> URL: https://issues.apache.org/jira/browse/SOLR-12046
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
>
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory has a fairly modest 
> failure rate over the past 7 days of ~5% -- but if you drill down and look 
> at the failures there is a very obvious pattern:
> * all of the failures are at the suite level
> * every failure is on jenkins.thetaphi.de
> * every failure is on Windows
> * failures are 50/50 master and branch_7x
> A quick glance at a single recent failure (I haven't dug in depth into others 
> over the history) shows that something about the test setup appears to be 
> preventing the normal file cleanup from working...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory 
> -Dtests.seed=1C7DCF1E2889E5C6 -Dtests.slow=true -Dtests.locale=sr-BA 
> -Dtests.timezone=America/Monterrey -Dtests.asserts=true 
> -Dtests.file.encoding=Cp1252
>[junit4] ERROR   0.00s J1 | 
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory (suite) <<<
>[junit4]> Throwable #1: java.io.IOException: Could not remove the 
> following files (in the order of attempts):
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin
>[junit4]>
> 

[jira] [Resolved] (LUCENE-8188) OpenNLPOpsFactory leaks filehandles of models

2018-02-28 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved LUCENE-8188.
--
   Resolution: Fixed
 Assignee: Hoss Man
Fix Version/s: 7.3
   master (8.0)

> OpenNLPOpsFactory leaks filehandles of models
> -
>
> Key: LUCENE-8188
> URL: https://issues.apache.org/jira/browse/LUCENE-8188
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8188.patch
>
>
> It appears that all methods in {{OpenNLPOpsFactory}} which use a 
> {{ResourceLoader}} to get an InputStream for building a model are not 
> closing those {{InputStreams}}.
> This doesn't seem to negatively affect any existing 
> {{lucene/analysis/opennlp}} tests, because the JVM doesn't know/care that 
> there is a filehandle still open at the end of the test (is there a way to 
> make the test complain?), but it does seem to cause a Solr-level test failure 
> on Windows (SOLR-12046): the Solr tests create a temp dir where 
> pre-built models are copied for use, and when the test completes the cleanup 
> attempts to delete those copies of the files, but Windows won't let it because 
> they are still open.
> Presumably, if a {{lucene/analysis/opennlp}} test also made a copy of the 
> files, a similar failure would be triggered -- but only on Windows.
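
A minimal sketch of the general fix pattern, under assumptions (the interfaces below stand in for the real ResourceLoader and OpenNLP model types; this is not the actual patch):

{code}
import java.io.IOException;
import java.io.InputStream;

public class ModelLoadingSketch {

  interface ResourceLoader { InputStream openResource(String name) throws IOException; }
  interface Model {}

  static Model loadModel(ResourceLoader loader, String resourceName) throws IOException {
    // try-with-resources closes the stream even if model construction fails,
    // releasing the file handle so Windows can delete the temp copy afterwards
    try (InputStream in = loader.openResource(resourceName)) {
      return buildModel(in);
    }
  }

  private static Model buildModel(InputStream in) {
    // placeholder for the real model construction, e.g. a sentence or tokenizer model
    return new Model() {};
  }
}
{code}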






[jira] [Commented] (LUCENE-8182) BoostingQuery applies the wrong boost to the query score

2018-02-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16381350#comment-16381350
 ] 

Hoss Man commented on LUCENE-8182:
--

FYI: [~jim.ferenczi] - it appears you committed this fix & CHANGES entry to 
branch_7x (6d712c5e4b2fc8f4ac6dfec2eac4a17386c978f0) but there is no CHANGES entry on 
master...

Even if there is no fix needed on master, the entry is missing from the 7.3 section of 
CHANGES.txt on master, which is how I noticed it: it caused a conflict when I 
added LUCENE-8188 to CHANGES and tried to back-merge. Ideally, even if no fix 
is needed on master, we should commit the CHANGES.txt entry there anyway.

> BoostingQuery applies the wrong boost to the query score
> 
>
> Key: LUCENE-8182
> URL: https://issues.apache.org/jira/browse/LUCENE-8182
> Project: Lucene - Core
>  Issue Type: Bug
>Affects Versions: 7.0, 7.1, 7.2
>Reporter: Jim Ferenczi
>Priority: Major
> Fix For: 7.3
>
> Attachments: LUCENE-8182.patch, LUCENE-8182.patch, LUCENE-8182.patch
>
>
> BoostingQuery applies the parent query boost instead of the boost set on the 
> query due to a name clash in the anonymous class created by the createWeight 
> method.






Re: Code Reviews

2018-02-28 Thread Adrien Grand
Like Dawid I hope we won't add strict requirements to get changes reviewed
before merging but I do agree with the general sentiment that reviews are
helpful and improve code quality. I really appreciate getting feedback on
patches that I upload, including negative feedback and I don't mind being
pinged on issues if anyone thinks I might have valuable feedback to give.

I didn't know Solr had a CTR policy. I understand CTR and RTC have pros and
cons but since there seems to be agreement that we want more changes to be
reviewed I think RTC is better at encouraging a review culture: as a
reviewer it's easier to recommend that the change should be done in a
totally different way if that is what you think, and you also feel more
useful since someone considered that the change needs your pair of eyes
before being merged.

On Wed, Feb 28, 2018 at 21:07, Cassandra Targett wrote:

> On Wed, Feb 28, 2018 at 1:58 PM, Shawn Heisey  wrote:
>
>>
>> I notice in ZK issues that projects associated with Hadoop have an
>> *automatic* machine-generated QA check whenever a patch is submitted on
>> those projects.  This obviously is not the same as a real review by a
>> person, but the info it outputs seems useful.
>>
>>
>>
> This is what SOLR-10912 intends to achieve.
>
>


[jira] [Commented] (LUCENE-8188) OpenNLPOpsFactory leaks filehandles of models

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381343#comment-16381343
 ] 

ASF subversion and git services commented on LUCENE-8188:
-

Commit 6e79bc7d5c6b0fafe27ea732ece403dd3807d673 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6e79bc7 ]

LUCENE-8188: Fixed bugs in OpenNLPOpsFactory that were causing InputStreams 
fetched from the ResourceLoader to be leaked

(cherry picked from commit 1bf718948696e69053bd5b7177b9ed32b5f57015)

Conflicts:
lucene/CHANGES.txt


> OpenNLPOpsFactory leaks filehandles of models
> -
>
> Key: LUCENE-8188
> URL: https://issues.apache.org/jira/browse/LUCENE-8188
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-8188.patch
>
>
> It appears that all methods in {{OpenNLPOpsFactory}} which use a 
> {{ResourceLoader}} to get an InputStream for building a model are not 
> closing those {{InputStreams}}.
> This doesn't seem to negatively affect any existing 
> {{lucene/analysis/opennlp}} tests, because the JVM doesn't know/care that 
> there is a filehandle still open at the end of the test (is there a way to 
> make the test complain?), but it does seem to cause a Solr-level test failure 
> on Windows (SOLR-12046) because the Solr tests create a temp dir where 
> pre-built models are copied for use; when the test completes, the cleanup 
> attempts to delete those copies of the files, but Windows won't let it because 
> they are still open.
> Presumably, if a {{lucene/analysis/opennlp}} test also made a copy of the 
> files, a similar failure would be triggered -- but only on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8188) OpenNLPOpsFactory leaks filehandles of models

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381341#comment-16381341
 ] 

ASF subversion and git services commented on LUCENE-8188:
-

Commit 1bf718948696e69053bd5b7177b9ed32b5f57015 in lucene-solr's branch 
refs/heads/master from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1bf7189 ]

LUCENE-8188: Fixed bugs in OpenNLPOpsFactory that were causing InputStreams 
fetched from the ResourceLoader to be leaked


> OpenNLPOpsFactory leaks filehandles of models
> -
>
> Key: LUCENE-8188
> URL: https://issues.apache.org/jira/browse/LUCENE-8188
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-8188.patch
>
>
> It appears that all methods in {{OpenNLPOpsFactory}} which use a 
> {{ResourceLoader}} to get an InputStream for building a model are not 
> closing those {{InputStreams}}.
> This doesn't seem to negatively affect any existing 
> {{lucene/analysis/opennlp}} tests, because the JVM doesn't know/care that 
> there is a filehandle still open at the end of the test (is there a way to 
> make the test complain?), but it does seem to cause a Solr-level test failure 
> on Windows (SOLR-12046) because the Solr tests create a temp dir where 
> pre-built models are copied for use; when the test completes, the cleanup 
> attempts to delete those copies of the files, but Windows won't let it because 
> they are still open.
> Presumably, if a {{lucene/analysis/opennlp}} test also made a copy of the 
> files, a similar failure would be triggered -- but only on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8188) OpenNLPOpsFactory leaks filehandles of models

2018-02-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381340#comment-16381340
 ] 

Steve Rowe commented on LUCENE-8188:


+1

> OpenNLPOpsFactory leaks filehandles of models
> -
>
> Key: LUCENE-8188
> URL: https://issues.apache.org/jira/browse/LUCENE-8188
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-8188.patch
>
>
> It appears that all methods in {{OpenNLPOpsFactory}} which use a 
> {{ResourceLoader}} to get an InputStream for building a model are not 
> closing those {{InputStreams}}.
> This doesn't seem to negatively affect any existing 
> {{lucene/analysis/opennlp}} tests, because the JVM doesn't know/care that 
> there is a filehandle still open at the end of the test (is there a way to 
> make the test complain?), but it does seem to cause a Solr-level test failure 
> on Windows (SOLR-12046) because the Solr tests create a temp dir where 
> pre-built models are copied for use; when the test completes, the cleanup 
> attempts to delete those copies of the files, but Windows won't let it because 
> they are still open.
> Presumably, if a {{lucene/analysis/opennlp}} test also made a copy of the 
> files, a similar failure would be triggered -- but only on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8188) OpenNLPOpsFactory leaks filehandles of models

2018-02-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381329#comment-16381329
 ] 

Uwe Schindler commented on LUCENE-8188:
---

Those bugs should be found by resource checks in our precommit, but those 
checks do not yet fail the build. I think Erick and Christine are working on this.

It's hard to find those leaks; tests also don't necessarily fail, because it 
depends on GC.

So static code analysis is the only way to safely find those.

> OpenNLPOpsFactory leaks filehandles of models
> -
>
> Key: LUCENE-8188
> URL: https://issues.apache.org/jira/browse/LUCENE-8188
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-8188.patch
>
>
> It appears that all methods in {{OpenNLPOpsFactory}} which use a 
> {{ResourceLoader}} to get an InputStream for building a model are not 
> closing those {{InputStreams}}.
> This doesn't seem to negatively affect any existing 
> {{lucene/analysis/opennlp}} tests, because the JVM doesn't know/care that 
> there is a filehandle still open at the end of the test (is there a way to 
> make the test complain?), but it does seem to cause a Solr-level test failure 
> on Windows (SOLR-12046) because the Solr tests create a temp dir where 
> pre-built models are copied for use; when the test completes, the cleanup 
> attempts to delete those copies of the files, but Windows won't let it because 
> they are still open.
> Presumably, if a {{lucene/analysis/opennlp}} test also made a copy of the 
> files, a similar failure would be triggered -- but only on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8188) OpenNLPOpsFactory leaks filehandles of models

2018-02-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381331#comment-16381331
 ] 

Uwe Schindler commented on LUCENE-8188:
---

Hoss: looks fine. Commit that. I was not able to test because it's late.

> OpenNLPOpsFactory leaks filehandles of models
> -
>
> Key: LUCENE-8188
> URL: https://issues.apache.org/jira/browse/LUCENE-8188
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-8188.patch
>
>
> It appears that all methods in {{OpenNLPOpsFactory}} which use a 
> {{ResourceLoader}} to get an InputStream for building a model are not 
> closing those {{InputStreams}}.
> This doesn't seem to negatively affect any existing 
> {{lucene/analysis/opennlp}} tests, because the JVM doesn't know/care that 
> there is a filehandle still open at the end of the test (is there a way to 
> make the test complain?), but it does seem to cause a Solr-level test failure 
> on Windows (SOLR-12046) because the Solr tests create a temp dir where 
> pre-built models are copied for use; when the test completes, the cleanup 
> attempts to delete those copies of the files, but Windows won't let it because 
> they are still open.
> Presumably, if a {{lucene/analysis/opennlp}} test also made a copy of the 
> files, a similar failure would be triggered -- but only on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10912) Adding automatic patch validation

2018-02-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381324#comment-16381324
 ] 

Steve Rowe commented on SOLR-10912:
---

bq. If there's not an entry in CHANGES.txt that mentions the issue number 
(either the lucene or solr file as appropriate), that should be a -1.

I think -0 is warranted, but not -1; some committers' workflows order CHANGES 
additions after the initial commits, and non-committers rarely include CHANGES 
entries (maybe partly because committers have to change it, minimally to 
include their name).

bq. How about a -1 if a SOLR patch makes changes to lucene, or vice versa? If 
there is an entry in the appropriate CHANGES.txt file for the issue, turn that 
into a -0. That way, we have better assurance that if a commit for one part of 
the project requires changes to the other part, there will be a release note.

Some issues require changes in both places.  Is there some issue you're trying 
to address besides release noting both projects?  I ask because Solr users 
really need to pay attention to Lucene CHANGES regardless.

> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
>Priority: Major
> Attachments: SOLR-10912.ok-patch-in-core.patch, 
> SOLR-10912.sample-patch.patch, SOLR-10912.solj-contrib-facet-error.patch
>
>
> Proposing introduction of automated patch validation, similar what Hadoop or 
> other Apache projects are using (see link). This would ensure that every 
> patch passes a certain set of criterions before getting approved. It would 
> save time for developer (faster feedback loop), save time for committers 
> (less step to do manually), and would increase quality.
> Hadoop is currently using Apache Yetus to run validations, which seems to be 
> a good direction to start. This jira could be the board of discussing the 
> preferred solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12046) TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows build on jenkins.thetaphi.de ?

2018-02-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381322#comment-16381322
 ] 

Hoss Man commented on SOLR-12046:
-

[~thetaphi]: if you get a chance, can you try out the patch in LUCENE-8188 on a 
Windows machine and see if it causes 
TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory to start passing reliably 
for you?

> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows 
> build on jenkins.thetaphi.de ? 
> 
>
> Key: SOLR-12046
> URL: https://issues.apache.org/jira/browse/SOLR-12046
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory has a fairly modest 
> failure rate over the past 7 days of ~5% -- but if you drill down and look 
> at the failures there is a very obvious pattern:
> * all of the failures are at the suite level
> * every failure is on jenkins.thetaphi.de
> * every failure is on Windows
> * failures are 50/50 master and branch_7x
> A quick glance at a single recent failure (I haven't dug in depth into others 
> over the history) shows that something about the test setup appears to be 
> preventing the normal file cleanup from working...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory 
> -Dtests.seed=1C7DCF1E2889E5C6 -Dtests.slow=true -Dtests.locale=sr-BA 
> -Dtests.timezone=America/Monterrey -Dtests.asserts=true 
> -Dtests.file.encoding=Cp1252
>[junit4] ERROR   0.00s J1 | 
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory (suite) <<<
>[junit4]> Throwable #1: java.io.IOException: Could not remove the 
> following files (in the order of attempts):
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin
>[junit4]>
> 

[jira] [Commented] (SOLR-11960) Add collection level properties

2018-02-28 Thread Peter Rusko (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381323#comment-16381323
 ] 

Peter Rusko commented on SOLR-11960:


{quote}BTW for this issue I personally would have chosen to store collection 
properties on the state.json for the collection rather than put this somewhere 
else. Consider all the other internal properties which are already in 
state.json (e.g. replicationFactor etc.). Was this considered? Why not? Pros 
are simplicity of backup and no need to delete with collection deletion, and 
using the same watcher mechanism?
{quote}
Yes, I considered it. There are two reasons for choosing a separate JSON file. 
First, the frequency of change is different: collection properties would change 
far less frequently than state.json, and not parsing out the properties blob on 
every state change seemed the right way to go. But more importantly, state.json 
changes should go via the Overseer, which seemed like overkill here.
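
For illustration, a rough sketch of the per-collection watcher idea using the 
plain ZooKeeper client is below. The znode path 
({{/collections/<name>/collectionprops.json}}) and the class/method names are 
assumptions made for the example, not necessarily what the attached patch does.

{code:java}
import java.nio.charset.StandardCharsets;

import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

public class CollectionPropsWatcher implements Watcher {

  private final ZooKeeper zk;
  private final String propsPath;

  public CollectionPropsWatcher(ZooKeeper zk, String collection) {
    this.zk = zk;
    // Assumed layout for the example: one small znode per collection.
    this.propsPath = "/collections/" + collection + "/collectionprops.json";
  }

  /** Reads the properties blob and re-registers this watcher for the next change. */
  public String readAndWatch() throws Exception {
    byte[] data = zk.getData(propsPath, this, null);
    return new String(data, StandardCharsets.UTF_8);
  }

  @Override
  public void process(WatchedEvent event) {
    try {
      readAndWatch();
    } catch (Exception e) {
      // real code would handle a missing node / connection loss and re-register
    }
  }
}
{code}

Because only the Solr nodes hosting cores of the collection would register such a 
watcher, a properties change does not wake up the whole cluster.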

> Add collection level properties
> ---
>
> Key: SOLR-11960
> URL: https://issues.apache.org/jira/browse/SOLR-11960
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Peter Rusko
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-11960.patch, SOLR-11960.patch, SOLR-11960.patch, 
> SOLR-11960.patch
>
>
> Solr has cluster properties, but no easy and extendable way of defining 
> properties that affect a single collection. Collection properties could be 
> stored in a single zookeeper node per collection, making it possible to 
> trigger zookeeper watchers for only those Solr nodes that have cores of that 
> collection.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12046) TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows build on jenkins.thetaphi.de ?

2018-02-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381318#comment-16381318
 ] 

Hoss Man commented on SOLR-12046:
-

Pretty sure LUCENE-8188 is the root cause here.

> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows 
> build on jenkins.thetaphi.de ? 
> 
>
> Key: SOLR-12046
> URL: https://issues.apache.org/jira/browse/SOLR-12046
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory has a fairly modest 
> failure rate over the past 7 days of ~5% -- but if you drill down and look 
> at the failures there is a very obvious pattern:
> * all of the failures are at the suite level
> * every failure is on jenkins.thetaphi.de
> * every failure is on Windows
> * failures are 50/50 master and branch_7x
> A quick glance at a single recent failure (I haven't dug in depth into others 
> over the history) shows that something about the test setup appears to be 
> preventing the normal file cleanup from working...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory 
> -Dtests.seed=1C7DCF1E2889E5C6 -Dtests.slow=true -Dtests.locale=sr-BA 
> -Dtests.timezone=America/Monterrey -Dtests.asserts=true 
> -Dtests.file.encoding=Cp1252
>[junit4] ERROR   0.00s J1 | 
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory (suite) <<<
>[junit4]> Throwable #1: java.io.IOException: Could not remove the 
> following files (in the order of attempts):
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-ner-person.bin:
>  java.nio.file.AccessDeniedException: 
> 

[jira] [Commented] (LUCENE-8188) OpenNLPOpsFactory leaks filehandles of models

2018-02-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381316#comment-16381316
 ] 

Hoss Man commented on LUCENE-8188:
--

I've attached a patch which seems like the correct fix to me -- it doesn't cause 
any new failures on Linux, but I don't have access to a Windows machine to test 
it there.

[~steve_rowe] - can you please take a look?

 

> OpenNLPOpsFactory leaks filehandles of models
> -
>
> Key: LUCENE-8188
> URL: https://issues.apache.org/jira/browse/LUCENE-8188
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-8188.patch
>
>
> It appears that all methods in {{OpenNLPOpsFactory}} which use a 
> {{ResourceLoader}} to get an InputStream for building a model are not 
> closing those {{InputStreams}}.
> This doesn't seem to negatively affect any existing 
> {{lucene/analysis/opennlp}} tests, because the JVM doesn't know/care that 
> there is a filehandle still open at the end of the test (is there a way to 
> make the test complain?), but it does seem to cause a Solr-level test failure 
> on Windows (SOLR-12046) because the Solr tests create a temp dir where 
> pre-built models are copied for use; when the test completes, the cleanup 
> attempts to delete those copies of the files, but Windows won't let it because 
> they are still open.
> Presumably, if a {{lucene/analysis/opennlp}} test also made a copy of the 
> files, a similar failure would be triggered -- but only on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8188) OpenNLPOpsFactory leaks filehandles of models

2018-02-28 Thread Hoss Man (JIRA)
Hoss Man created LUCENE-8188:


 Summary: OpenNLPOpsFactory leaks filehandles of models
 Key: LUCENE-8188
 URL: https://issues.apache.org/jira/browse/LUCENE-8188
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Hoss Man
 Attachments: LUCENE-8188.patch

It appears that all methods in {{OpenNLPOpsFactory}} which use a 
{{ResourceLoader}} to get an InputStream for building a model are not 
closing those {{InputStreams}}.

This doesn't seem to negatively affect any existing {{lucene/analysis/opennlp}} 
tests, because the JVM doesn't know/care that there is a filehandle still open 
at the end of the test (is there a way to make the test complain?), but it does 
seem to cause a Solr-level test failure on Windows (SOLR-12046) because the 
Solr tests create a temp dir where pre-built models are copied for use; when 
the test completes, the cleanup attempts to delete those copies of the files, 
but Windows won't let it because they are still open.

Presumably, if a {{lucene/analysis/opennlp}} test also made a copy of the files, 
a similar failure would be triggered -- but only on Windows.
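
A minimal sketch of the kind of fix intended here -- wrapping the stream returned 
by the {{ResourceLoader}} in try-with-resources so the handle is released as soon 
as the model has been read. The class and method names below are illustrative, 
not necessarily how the attached patch is organized:

{code:java}
import java.io.IOException;
import java.io.InputStream;

import opennlp.tools.sentdetect.SentenceModel;
import org.apache.lucene.analysis.util.ResourceLoader;

public final class SentenceModelLoading {

  /** Loads a sentence-detection model and releases the underlying file handle. */
  public static SentenceModel loadSentenceModel(String modelName, ResourceLoader loader)
      throws IOException {
    try (InputStream in = loader.openResource(modelName)) {
      // The model is fully read in the constructor, so the stream can be closed
      // as soon as it returns -- which is what the leaky code never did.
      return new SentenceModel(in);
    }
  }
}
{code}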



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8188) OpenNLPOpsFactory leaks filehandles of models

2018-02-28 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated LUCENE-8188:
-
Attachment: LUCENE-8188.patch

> OpenNLPOpsFactory leaks filehandles of models
> -
>
> Key: LUCENE-8188
> URL: https://issues.apache.org/jira/browse/LUCENE-8188
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Hoss Man
>Priority: Major
> Attachments: LUCENE-8188.patch
>
>
> It appears that all methods in {{OpenNLPOpsFactory}} which use a 
> {{ResourceLoader}} to get an InputStream for building a model are not 
> closing those {{InputStreams}}.
> This doesn't seem to negatively affect any existing 
> {{lucene/analysis/opennlp}} tests, because the JVM doesn't know/care that 
> there is a filehandle still open at the end of the test (is there a way to 
> make the test complain?), but it does seem to cause a Solr-level test failure 
> on Windows (SOLR-12046) because the Solr tests create a temp dir where 
> pre-built models are copied for use; when the test completes, the cleanup 
> attempts to delete those copies of the files, but Windows won't let it because 
> they are still open.
> Presumably, if a {{lucene/analysis/opennlp}} test also made a copy of the 
> files, a similar failure would be triggered -- but only on Windows.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12046) TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows build on jenkins.thetaphi.de ?

2018-02-28 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381311#comment-16381311
 ] 

Uwe Schindler commented on SOLR-12046:
--

Good analysis!

It looks like OpenNLP has a file leak and does not close its files. Maybe a 
try-with-resources is missing.

> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows 
> build on jenkins.thetaphi.de ? 
> 
>
> Key: SOLR-12046
> URL: https://issues.apache.org/jira/browse/SOLR-12046
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory has a fairly modest 
> failure rate over the past 7 days of ~5% -- but if you drill down and look 
> at the failures there is a very obvious pattern:
> * all of the failures are at the suite level
> * every failure is on jenkins.thetaphi.de
> * every failure is on Windows
> * failures are 50/50 master and branch_7x
> A quick glance at a single recent failure (I haven't dug in depth into others 
> over the history) shows that something about the test setup appears to be 
> preventing the normal file cleanup from working...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory 
> -Dtests.seed=1C7DCF1E2889E5C6 -Dtests.slow=true -Dtests.locale=sr-BA 
> -Dtests.timezone=America/Monterrey -Dtests.asserts=true 
> -Dtests.file.encoding=Cp1252
>[junit4] ERROR   0.00s J1 | 
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory (suite) <<<
>[junit4]> Throwable #1: java.io.IOException: Could not remove the 
> following files (in the order of attempts):
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-ner-person.bin:
>  

[jira] [Updated] (SOLR-11960) Add collection level properties

2018-02-28 Thread Peter Rusko (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Rusko updated SOLR-11960:
---
Attachment: SOLR-11960.patch

> Add collection level properties
> ---
>
> Key: SOLR-11960
> URL: https://issues.apache.org/jira/browse/SOLR-11960
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Peter Rusko
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-11960.patch, SOLR-11960.patch, SOLR-11960.patch, 
> SOLR-11960.patch
>
>
> Solr has cluster properties, but no easy and extendable way of defining 
> properties that affect a single collection. Collection properties could be 
> stored in a single zookeeper node per collection, making it possible to 
> trigger zookeeper watchers for only those Solr nodes that have cores of that 
> collection.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 472 - Unstable

2018-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/472/

1 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:38652/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:56146/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:38652/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:56146/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([CEF8A02D5F0546E3:643573DFE8D69333]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:991)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:307)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-12046) TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows build on jenkins.thetaphi.de ?

2018-02-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381263#comment-16381263
 ] 

Hoss Man commented on SOLR-12046:
-

bq. A quick glance at a single recent failure (I haven't dug in depth into others 
over the history) shows that something about the test setup appears to be 
preventing the normal file cleanup from working...

Actually -- the more I look at it, the less convinced I am that it's a test 
setup problem -- we have other tests like 
TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory that also subclass 
UpdateProcessorTestBase and follow the same BeforeClass pattern of:
* {{testHome = createTempDir()}}
* copy an existing directory from {{src/test-files}} into this new temp dir
* call {{initCore(..., testHome)}}

...I think what's specifically happening here is that when the core is being 
shut down by the test harness, something in the 
OpenNLPExtractNamedEntitiesUpdateProcessorFactory must not be closing these 3 
bin files...

{noformat}
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-ner-person.bin:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-ner-person.bin
{noformat}

...which means that when the test cleanup attempts to delete those files, the 
current JVM still has them open, and Windows rejects the delete? (because 
that's what Windows does with attempts to delete open file handles, IIRC)
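
As a small stand-alone illustration of that Windows behavior (not project code), 
deleting a file while a stream to it is still open typically fails with an 
{{AccessDeniedException}} on Windows, while the same delete succeeds on Linux/macOS:

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class OpenHandleDeleteDemo {
  public static void main(String[] args) throws IOException {
    Path f = Files.createTempFile("model", ".bin");
    try (InputStream in = Files.newInputStream(f)) {
      // Succeeds on Linux/macOS; on Windows typically throws AccessDeniedException
      // because the stream above still holds the file open.
      Files.delete(f);
      System.out.println("delete succeeded while the file was still open");
    } catch (IOException e) {
      System.out.println("delete refused while the file was still open: " + e);
    } finally {
      // The stream is closed before this block runs, so cleanup works everywhere.
      Files.deleteIfExists(f);
    }
  }
}
{code}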



> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows 
> build on jenkins.thetaphi.de ? 
> 
>
> Key: SOLR-12046
> URL: https://issues.apache.org/jira/browse/SOLR-12046
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory has a fairly modest 
> failure rate over the past 7 days of ~5% -- but if you drill down and look 
> at the failures there is a very obvious pattern:
> * all of the failures are at the suite level
> * every failure is on jenkins.thetaphi.de
> * every failure is on Windows
> * failures are 50/50 master and branch_7x
> A quick glance at a single recent failure (I haven't dug in depth into others 
> over the history) shows that something about the test setup appears to be 
> preventing the normal file cleanup from working...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory 
> -Dtests.seed=1C7DCF1E2889E5C6 -Dtests.slow=true -Dtests.locale=sr-BA 
> -Dtests.timezone=America/Monterrey -Dtests.asserts=true 
> -Dtests.file.encoding=Cp1252
>[junit4] ERROR   0.00s J1 | 
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory (suite) <<<
>[junit4]> Throwable #1: java.io.IOException: Could not remove the 
> following files (in the order of attempts):
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf:
>  java.nio.file.DirectoryNotEmptyException: 
> 

[jira] [Commented] (SOLR-12046) TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows build on jenkins.thetaphi.de ?

2018-02-28 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381238#comment-16381238
 ] 

Hoss Man commented on SOLR-12046:
-


FWIW...

If you look at a recent failure on either the master or 7x builds on 
jenkins.thetaphi.de, it will tell you that the test has been failing for a long 
time...
* https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-master-Windows/
** build #7198 says it's been failing since #7196
*** but if you look at #7195 the only reason it didn't fail then is because it 
was never run
*** a Lucene-level test failure caused Jenkins to abort without running any Solr 
tests
** build #7193 says it's been failing since #7056
*** #7056 / #7055 are so old there is no longer a record of them
* https://jenkins.thetaphi.de/view/Lucene-Solr/job/Lucene-Solr-7.x-Windows
** build #480 says it's been failing since #343
** #343 / #342 are so old that there is no longer a record of them


> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows 
> build on jenkins.thetaphi.de ? 
> 
>
> Key: SOLR-12046
> URL: https://issues.apache.org/jira/browse/SOLR-12046
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory has a fairly modest 
> failure rate over the past 7 days of ~5% -- but if you drill down and look 
> at the failures there is a very obvious pattern:
> * all of the failures are at the suite level
> * every failure is on jenkins.thetaphi.de
> * every failure is on Windows
> * failures are 50/50 master and branch_7x
> A quick glance at a single recent failure (I haven't dug in depth into others 
> over the history) shows that something about the test setup appears to be 
> preventing the normal file cleanup from working...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory 
> -Dtests.seed=1C7DCF1E2889E5C6 -Dtests.slow=true -Dtests.locale=sr-BA 
> -Dtests.timezone=America/Monterrey -Dtests.asserts=true 
> -Dtests.file.encoding=Cp1252
>[junit4] ERROR   0.00s J1 | 
> TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory (suite) <<<
>[junit4]> Throwable #1: java.io.IOException: Could not remove the 
> following files (in the order of attempts):
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001:
>  java.nio.file.DirectoryNotEmptyException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001
>[junit4]>
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin:
>  java.nio.file.AccessDeniedException: 
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin
>[junit4]>
> 

[jira] [Created] (SOLR-12046) TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory fails on every windows build on jenkins.thetaphi.de ?

2018-02-28 Thread Hoss Man (JIRA)
Hoss Man created SOLR-12046:
---

 Summary: TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory 
fails on every windows build on jenkins.thetaphi.de ? 
 Key: SOLR-12046
 URL: https://issues.apache.org/jira/browse/SOLR-12046
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory has a fairly modest 
failure rate over the past 7 days of ~5% -- but if you drill down and look at 
the failures there is a very obvious pattern:
* all of the failures are at the suite level
* every failure is on jenkins.thetaphi.de
* every failure is on Windows
* failures are 50/50 master and branch_7x

A quick glance at a single recent failure (I haven't dug in depth into others 
over the history) shows that something about the test setup appears to be 
preventing the normal file cleanup from working...

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory 
-Dtests.seed=1C7DCF1E2889E5C6 -Dtests.slow=true -Dtests.locale=sr-BA 
-Dtests.timezone=America/Monterrey -Dtests.asserts=true 
-Dtests.file.encoding=Cp1252
   [junit4] ERROR   0.00s J1 | 
TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory (suite) <<<
   [junit4]> Throwable #1: java.io.IOException: Could not remove the 
following files (in the order of attempts):
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-sent.bin
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-tokenizer.bin
   [junit4]>
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-ner-person.bin:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\contrib\solr-analysis-extras\test\J1\temp\solr.update.processor.TestOpenNLPExtractNamedEntitiesUpdateProcessorFactory_1C7DCF1E2889E5C6-001\tempDir-001\collection1\conf\en-test-ner-person.bin
   [junit4]>

[JENKINS] Lucene-Solr-repro - Build # 164 - Unstable

2018-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/164/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/1/consoleText

[repro] Revision: f48fc470f665d2eda1b959ec3472cd5f711afaa0

[repro] Repro line:  ant test  -Dtestcase=ZkControllerTest 
-Dtests.method=testPublishAndWaitForDownStates -Dtests.seed=9941995501E3FEF1 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ca 
-Dtests.timezone=Europe/Podgorica -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=AtomicUpdateProcessorFactoryTest 
-Dtests.method=testMultipleThreads -Dtests.seed=9941995501E3FEF1 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=mt-MT -Dtests.timezone=America/Asuncion -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=SSLMigrationTest 
-Dtests.seed=9941995501E3FEF1 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=da -Dtests.timezone=America/Buenos_Aires 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestReplicationHandler 
-Dtests.method=doTestIndexFetchOnMasterRestart -Dtests.seed=9941995501E3FEF1 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=de-GR -Dtests.timezone=America/Sao_Paulo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestJmxIntegration 
-Dtests.method=testJmxOnCoreReload -Dtests.seed=9941995501E3FEF1 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=de-AT -Dtests.timezone=Asia/Tokyo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=StreamExpressionTest 
-Dtests.method=testClassifyStream -Dtests.seed=8D48CA70A6F02DFB 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=de-LU -Dtests.timezone=America/St_Kitts -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=TestLTRReRankingPipeline 
-Dtests.method=testDifferentTopN -Dtests.seed=F3E2C625FBD99241 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=de 
-Dtests.timezone=America/Louisville -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
57cfdb1a48a836569c09930b65b6159652c74107
[repro] git fetch
[repro] git checkout f48fc470f665d2eda1b959ec3472cd5f711afaa0

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   StreamExpressionTest
[repro]solr/contrib/ltr
[repro]   TestLTRReRankingPipeline
[repro]solr/core
[repro]   AtomicUpdateProcessorFactoryTest
[repro]   SSLMigrationTest
[repro]   TestJmxIntegration
[repro]   ZkControllerTest
[repro]   TestReplicationHandler
[repro] ant compile-test

[...truncated 2460 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.StreamExpressionTest" -Dtests.showOutput=onerror  
-Dtests.seed=8D48CA70A6F02DFB -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=de-LU -Dtests.timezone=America/St_Kitts 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 843 lines...]
[repro] ant compile-test

[...truncated 566 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestLTRReRankingPipeline" -Dtests.showOutput=onerror  
-Dtests.seed=F3E2C625FBD99241 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=de -Dtests.timezone=America/Louisville 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 135 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 1331 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=25 
-Dtests.class="*.AtomicUpdateProcessorFactoryTest|*.SSLMigrationTest|*.TestJmxIntegration|*.ZkControllerTest|*.TestReplicationHandler"
 -Dtests.showOutput=onerror  -Dtests.seed=9941995501E3FEF1 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=mt-MT 
-Dtests.timezone=America/Asuncion -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 59664 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest
[repro]   1/5 failed: 
org.apache.solr.update.processor.AtomicUpdateProcessorFactoryTest
[repro]   5/5 failed: org.apache.solr.cloud.SSLMigrationTest
[repro]   5/5 failed: org.apache.solr.cloud.ZkControllerTest
[repro]   5/5 failed: org.apache.solr.core.TestJmxIntegration
[repro]   5/5 failed: org.apache.solr.handler.TestReplicationHandler
[repro]   5/5 failed: 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+43) - Build # 21555 - Still Unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21555/
Java: 64bit/jdk-10-ea+43 -XX:+UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeMarkersRegistration

Error Message:
Path /autoscaling/nodeAdded/127.0.0.1:43105_solr wasn't created

Stack Trace:
java.lang.AssertionError: Path /autoscaling/nodeAdded/127.0.0.1:43105_solr 
wasn't created
at 
__randomizedtesting.SeedInfo.seed([CCD796899596EAEF:D46D1E859BA32700]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeMarkersRegistration(TriggerIntegrationTest.java:952)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testSearchRate

Error Message:
The trigger did not fire at all

Stack Trace:
java.lang.AssertionError: The trigger did not fire at all

More tests to BadApple?

2018-02-28 Thread Erick Erickson
Here's three Uwe sent me:

- 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeMarkersRegistration
- org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testSearchRate
- org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

I started to collect a bunch more, but then lost them (unrecoverably).

One build came through with 10 errors or so, all of them with a
message about "could not delete files", so I'm assuming that those
tests should _not_ be BadApple'd until we understand what's up with
them a little more.

I'll collect some new BadApple candidates today/this evening and
publish the list, from there we may add some more annotations...
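
(For anyone not familiar with the mechanics: "BadApple-ing" a test means adding the
test framework's BadApple annotation to it. A minimal, purely illustrative sketch
follows; the test class and the JIRA URL are made up.)

{code}
import org.apache.lucene.util.LuceneTestCase;
import org.apache.lucene.util.LuceneTestCase.BadApple;

public class SomeFlakyTest extends LuceneTestCase {

  // Skipped on jobs that run with -Dtests.badapples=false (e.g. the smoke tester)
  // and executed by the BadApples Jenkins jobs, which pass -Dtests.badapples=true.
  @BadApple(bugUrl = "https://issues.apache.org/jira/browse/SOLR-XXXXX")
  public void testSometimesFails() throws Exception {
    // flaky assertions would go here
  }
}
{code}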

I'm still trying to understand where the right balance lies between
BadApple-ing tests and knowing which ones to dig into. The corollary
is knowing when to _un_-BadApple tests. Hoss' reports will give us a place to
start.

Erick

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Release 6.6.3

2018-02-28 Thread Steve Rowe
I’d like to make a 6.6.3 release, primarily to backport SOLR-11503, which fixes 
a problem present in 6.6.1 and 6.6.2.

I volunteer to manage the release.

I’ll evaluate other Lucene and Solr bugfixes for inclusion, but if there’s an 
issue you’d particularly like to be included, please let me know in the next 24 
hours, since I plan on cutting the first RC tomorrow, unless people need more 
time.  Shalin told me offline that a fix for SOLR-11993 (no patch currently but 
likely it’ll be trivial since it only entails adding to an exception whitelist) 
would be useful.

--
Steve
www.lucidworks.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 1 - Unstable

2018-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/1/

8 tests failed.
FAILED:  org.apache.solr.ltr.TestLTRReRankingPipeline.testDifferentTopN

Error Message:
expected:<1.0> but was:<0.0>

Stack Trace:
java.lang.AssertionError: expected:<1.0> but was:<0.0>
at 
__randomizedtesting.SeedInfo.seed([70582F4F052F:81F95D1F300071BD]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.solr.ltr.TestLTRReRankingPipeline.testDifferentTopN(TestLTRReRankingPipeline.java:256)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SSLMigrationTest

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [InternalHttpClient, 
InternalHttpClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:289)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:298)

[jira] [Commented] (SOLR-10912) Adding automatic patch validation

2018-02-28 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381062#comment-16381062
 ] 

Shawn Heisey commented on SOLR-10912:
-

Some thoughts:

If there's not an entry in CHANGES.txt that mentions the issue number (either 
the lucene or solr file as appropriate), that should be a -1.

How about a -1 if a SOLR patch makes changes to lucene, or vice versa?  If 
there is an entry in the appropriate CHANGES.txt file for the issue, turn that 
into a -0.  That way, we have better assurance that if a commit for one part of 
the project requires changes to the other part, there will be a release note.

I'm pretty sure that votes made by this QA mechanism wouldn't be binding, but 
it would be a good idea to achieve a +1 from it if possible, and when that's not 
possible, there should be a very good and well-documented reason.
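
To make the idea concrete, here is a rough, hypothetical sketch of the CHANGES.txt 
check described above; nothing like it exists in the build today, and the patch 
format and file paths are assumptions for illustration only.

{code}
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

// Hypothetical checker: run as "java ChangesEntryCheck SOLR-12345 my.patch"
// from the root of a lucene-solr checkout.
public class ChangesEntryCheck {
  public static void main(String[] args) throws Exception {
    String issueKey = args[0];                       // e.g. "SOLR-12345"
    List<String> patch = Files.readAllLines(Paths.get(args[1]));

    // Which half of the repo does the patch touch?
    boolean touchesSolr   = patch.stream().anyMatch(l -> l.startsWith("diff --git a/solr/"));
    boolean touchesLucene = patch.stream().anyMatch(l -> l.startsWith("diff --git a/lucene/"));

    // Look for the issue key in the matching CHANGES.txt.
    String changesPath = issueKey.startsWith("SOLR-") ? "solr/CHANGES.txt" : "lucene/CHANGES.txt";
    boolean mentioned = Files.readAllLines(Paths.get(changesPath)).stream()
        .anyMatch(l -> l.contains(issueKey));

    if (!mentioned) {
      System.out.println("-1: no entry in " + changesPath + " mentions " + issueKey);
    } else if ((issueKey.startsWith("SOLR-") && touchesLucene)
            || (issueKey.startsWith("LUCENE-") && touchesSolr)) {
      System.out.println("-0: patch crosses the Lucene/Solr boundary, but CHANGES.txt covers it");
    } else {
      System.out.println("+1: CHANGES.txt entry found for " + issueKey);
    }
  }
}
{code}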


> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
>Priority: Major
> Attachments: SOLR-10912.ok-patch-in-core.patch, 
> SOLR-10912.sample-patch.patch, SOLR-10912.solj-contrib-facet-error.patch
>
>
> Proposing introduction of automated patch validation, similar to what Hadoop 
> or other Apache projects are using (see link). This would ensure that every 
> patch passes a certain set of criteria before getting approved. It would 
> save time for developers (faster feedback loop), save time for committers 
> (fewer steps to do manually), and would increase quality.
> Hadoop is currently using Apache Yetus to run validations, which seems to be 
> a good direction to start. This jira could be the board for discussing the 
> preferred solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11646) Ref Guide: Update API examples to include v2 style examples

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381055#comment-16381055
 ] 

ASF subversion and git services commented on SOLR-11646:


Commit 1ab2f5adbac08b27b7aff484075fb4733277d4a2 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1ab2f5a ]

Ref Guide: fix line height changed in SOLR-11646 which inadvertently vertically 
stretched the left-hand nav;
 adds a new CSS rule for tabbed sections on a page with the larger line height 
and changes back left-hand nav line height


> Ref Guide: Update API examples to include v2 style examples
> ---
>
> Key: SOLR-11646
> URL: https://issues.apache.org/jira/browse/SOLR-11646
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, v2 API
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
>
> The Ref Guide currently only has a single page with what might be generously 
> called an overview of the v2 API added in 6.5 
> (https://lucene.apache.org/solr/guide/v2-api.html) but most of the actual 
> APIs that support the v2 approach do not show an example of using it with the 
> v2 style. A few v2-style APIs are already used as examples, but there's 
> nothing consistent.
> With this issue I'll add API input/output examples throughout the Guide. Just 
> in terms of process, my intention is to have a series of commits to the pages 
> as I work through them so we make incremental progress. I'll start by adding 
> a list of pages/APIs to this issue so the scope of the work is clear.
> Once this is done we can figure out what to do with the V2 API page itself - 
> perhaps it gets archived and replaced with another page that describes Solr's 
> APIs overall; perhaps by then we figure out something else to do with it.
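
As a hypothetical illustration of the difference being documented (not taken from 
the Ref Guide): the same Config API command sent in the v1 style and in the v2 style. 
The collection name, the property being set, and the /api/ base path are assumptions 
here and should be verified against the v2 API page.

{code}
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ConfigApiStyles {
  // POST a JSON body to the given endpoint and print the HTTP status.
  static void post(String endpoint, String json) throws Exception {
    HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(json.getBytes(StandardCharsets.UTF_8));
    }
    System.out.println(endpoint + " -> HTTP " + conn.getResponseCode());
  }

  public static void main(String[] args) throws Exception {
    String body = "{\"set-user-property\": {\"update.autoCreateFields\": \"false\"}}";
    post("http://localhost:8983/solr/techproducts/config", body);            // v1 style
    post("http://localhost:8983/api/collections/techproducts/config", body); // v2 style
  }
}
{code}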



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11646) Ref Guide: Update API examples to include v2 style examples

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381052#comment-16381052
 ] 

ASF subversion and git services commented on SOLR-11646:


Commit 57cfdb1a48a836569c09930b65b6159652c74107 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=57cfdb1 ]

Ref Guide: fix line height changed in SOLR-11646 which inadvertently vertically 
stretched the left-hand nav;
 adds a new CSS rule for tabbed sections on a page with the larger line height 
and changes back left-hand nav line height


> Ref Guide: Update API examples to include v2 style examples
> ---
>
> Key: SOLR-11646
> URL: https://issues.apache.org/jira/browse/SOLR-11646
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, v2 API
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Major
>
> The Ref Guide currently only has a single page with what might be generously 
> called an overview of the v2 API added in 6.5 
> (https://lucene.apache.org/solr/guide/v2-api.html) but most of the actual 
> APIs that support the v2 approach do not show an example of using it with the 
> v2 style. A few v2-style APIs are already used as examples, but there's 
> nothing consistent.
> With this issue I'll add API input/output examples throughout the Guide. Just 
> in terms of process, my intention is to have a series of commits to the pages 
> as I work through them so we make incremental progress. I'll start by adding 
> a list of pages/APIs to this issue so the scope of the work is clear.
> Once this is done we can figure out what to do with the V2 API page itself - 
> perhaps it gets archived and replaced with another page that describes Solr's 
> APIs overall; perhaps by then we figure out something else to do with it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21554 - Still Unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21554/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:34897/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:46429/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:34897/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:46429/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([C36DC53B85B59C81:69A016C932664951]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:991)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:309)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: Code Reviews

2018-02-28 Thread Cassandra Targett
On Wed, Feb 28, 2018 at 1:58 PM, Shawn Heisey  wrote:

>
> I notice in ZK issues that projects associated with Hadoop have an
> *automatic* machine-generated QA check whenever a patch is submitted on
> those projects.  This obviously is not the same as a real review by a
> person, but the info it outputs seems useful.
>
>
>
This is what SOLR-10912 intends to achieve.


[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7198 - Still Unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7198/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestIndexWriterOutOfFileDescriptors

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestIndexWriterOutOfFileDescriptors_820CD1E0A77A3952-001\TestIndexWriterOutOfFileDescriptors-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestIndexWriterOutOfFileDescriptors_820CD1E0A77A3952-001\TestIndexWriterOutOfFileDescriptors-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestIndexWriterOutOfFileDescriptors_820CD1E0A77A3952-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestIndexWriterOutOfFileDescriptors_820CD1E0A77A3952-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestIndexWriterOutOfFileDescriptors_820CD1E0A77A3952-001\TestIndexWriterOutOfFileDescriptors-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestIndexWriterOutOfFileDescriptors_820CD1E0A77A3952-001\TestIndexWriterOutOfFileDescriptors-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestIndexWriterOutOfFileDescriptors_820CD1E0A77A3952-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.index.TestIndexWriterOutOfFileDescriptors_820CD1E0A77A3952-001

at __randomizedtesting.SeedInfo.seed([820CD1E0A77A3952]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestFileSwitchDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_820CD1E0A77A3952-001\bar-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_820CD1E0A77A3952-001\bar-002
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_820CD1E0A77A3952-001\bar-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J1\temp\lucene.store.TestFileSwitchDirectory_820CD1E0A77A3952-001\bar-002

at __randomizedtesting.SeedInfo.seed([820CD1E0A77A3952]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-11407) AutoscalingHistoryHandlerTest fails frequently

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381003#comment-16381003
 ] 

ASF subversion and git services commented on SOLR-11407:


Commit 448ca40721d77d8d89e6e3d892512e30c9763835 in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=448ca40 ]

SOLR-11407: Explicitly create policy violations to force non-empty plan.


> AutoscalingHistoryHandlerTest fails frequently
> --
>
> Key: SOLR-11407
> URL: https://issues.apache.org/jira/browse/SOLR-11407
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0)
>
>
> This test fails frequently on jenkins with a failed assertion (see also 
> SOLR-11378 for other failure mode):
> {code}
>[junit4] FAILURE 6.49s J2 | AutoscalingHistoryHandlerTest.testHistory <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<8> but 
> was:<6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([164F10BB7F145FDE:7BB3B446C55CA0D9]:0)
>[junit4]>  at 
> org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:194)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11407) AutoscalingHistoryHandlerTest fails frequently

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16381001#comment-16381001
 ] 

ASF subversion and git services commented on SOLR-11407:


Commit b26d67e722a76a233afe51d6f18034e60caa6a6a in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b26d67e ]

SOLR-11407: Explicitly create policy violations to force non-empty plan.


> AutoscalingHistoryHandlerTest fails frequently
> --
>
> Key: SOLR-11407
> URL: https://issues.apache.org/jira/browse/SOLR-11407
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0)
>
>
> This test fails frequently on jenkins with a failed assertion (see also 
> SOLR-11378 for other failure mode):
> {code}
>[junit4] FAILURE 6.49s J2 | AutoscalingHistoryHandlerTest.testHistory <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<8> but 
> was:<6>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([164F10BB7F145FDE:7BB3B446C55CA0D9]:0)
>[junit4]>  at 
> org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:194)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12043) Add mlt.maxdfpct to Solr's documentation

2018-02-28 Thread Dawid Weiss (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dawid Weiss updated SOLR-12043:
---
Fix Version/s: (was: master (8.0))
   7.3

> Add mlt.maxdfpct to Solr's documentation
> 
>
> Key: SOLR-12043
> URL: https://issues.apache.org/jira/browse/SOLR-12043
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: 7.3
>
> Attachments: SOLR-12043.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12043) Add mlt.maxdfpct to Solr's documentation

2018-02-28 Thread Dawid Weiss (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380967#comment-16380967
 ] 

Dawid Weiss commented on SOLR-12043:


Thanks Cassandra!

> Add mlt.maxdfpct to Solr's documentation
> 
>
> Key: SOLR-12043
> URL: https://issues.apache.org/jira/browse/SOLR-12043
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: master (8.0)
>
> Attachments: SOLR-12043.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12043) Add mlt.maxdfpct to Solr's documentation

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380964#comment-16380964
 ] 

ASF subversion and git services commented on SOLR-12043:


Commit 1d8c1a40457fca06d71ca90e537a1371357b8bb8 in lucene-solr's branch 
refs/heads/branch_7x from [~dawid.weiss]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1d8c1a4 ]

SOLR-12043: Add mlt.maxdfpct to Solr's documentation


> Add mlt.maxdfpct to Solr's documentation
> 
>
> Key: SOLR-12043
> URL: https://issues.apache.org/jira/browse/SOLR-12043
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: master (8.0)
>
> Attachments: SOLR-12043.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12043) Add mlt.maxdfpct to Solr's documentation

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380966#comment-16380966
 ] 

ASF subversion and git services commented on SOLR-12043:


Commit fa7a3ce3ee7f35eea115ec5a3bdd0a0a1a71fdce in lucene-solr's branch 
refs/heads/master from [~dawid.weiss]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fa7a3ce ]

SOLR-12043: Add mlt.maxdfpct to Solr's documentation


> Add mlt.maxdfpct to Solr's documentation
> 
>
> Key: SOLR-12043
> URL: https://issues.apache.org/jira/browse/SOLR-12043
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: master (8.0)
>
> Attachments: SOLR-12043.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Code Reviews

2018-02-28 Thread Shawn Heisey
On 2/28/2018 10:59 AM, Tomas Fernandez Lobbe wrote:
> In an effort to improve code quality, I’d like to suggest that we
> start requiring code review to non-trivial patches. Not sure if/how
> other open source projects are doing code reviews, but I’ve been using
> it in internal projects for many years and it’s a great way to catch
> bugs early, some of them very difficult to catch in unit tests

I *want* people to review the changes I suggest before I commit them. 
When the change is non-trivial, or has a larger impact than the patch
size would suggest, I will typically explicitly ASK for it to be
reviewed in the issue comments.  Even in cases where I don't explicitly
ask, I will usually leave the issue alone after submitting a patch to
allow time for interested parties to comment.

Sometimes I get a review.  Often I don't.

On the other side of the coin, I try to keep tabs on issues where I have
an interest, or at least have enough knowledge to comment, and look into
any suggested changes to see if they look OK to me.  There is a LOT of
Jira activity though, and it's hard to keep up with it.  I suspect that
my fellow Solr committers are in much the same situation -- they don't
understand the entirety of the codebase well enough to comment on more
than a handful of issues, and they're overwhelmed by the volume.

I'm not opposed to something formal, but I do wonder whether it might
make people hesitant to even suggest a change, much less work on it and
make commits, because they're worried that the entire idea will get shot
down during a formal review.  Also, it would increase the number of
messages that I have to wade through on a daily basis, which won't help
my participation level.

If our commit-then-review policy is causing a large number of problems,
then we should examine the situation to see whether changing it is a
good tradeoff between quality and innovation.  I don't have a good sense
about whether it is the source of major issues.

===
Off on a tangent:

I notice in ZK issues that projects associated with Hadoop have an
*automatic* machine-generated QA check whenever a patch is submitted on
those projects.  This obviously is not the same as a real review by a
person, but the info it outputs seems useful.

https://issues.apache.org/jira/browse/ZOOKEEPER-2230?focusedCommentId=15751260=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-15751260

Solr tests are not in any kind of state where such an automatic QA
process could tell whether the patch actually made tests fail.  Erick is
trying to do something about that.

Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Code Reviews

2018-02-28 Thread Dawid Weiss
> I’d like to suggest that we start requiring code review to non-trivial 
> patches.

Don't know if it has to be a strict, corporate-like rule... Most folks
over here have a gut feeling for what's non-trivial and requires a
second pair of eyes. JIRA and patch reviews have been serving this
purpose quite all right, I think, although I recall a discussion of their
advantages and disadvantages (compared to GitHub's review system, for
example). My concern is that making it a requirement won't really
help attract people to review those patches, and it creates a problem
if you want to commit something larger, yet find nobody interested in
reviewing that patch.

Don't get me wrong, I know there are open source projects that do
require sign-offs and approvals; I'm just not sure we really need it
(or that it'd change anything in a substantial way).

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 966 - Still Failing

2018-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/966/

No tests ran.

Build Log:
[...truncated 28738 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.02 sec (16.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.2 MB in 0.03 sec (1171.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.2 MB in 0.06 sec (1184.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.7 MB in 0.07 sec (1191.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6243 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6243 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6243 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6243 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (252.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 52.6 MB in 0.34 sec (154.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 151.0 MB in 0.81 sec (185.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 152.0 MB in 1.03 sec (147.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1448 - Still Failing!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1448/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseParallelGC

No tests ran.

Build Log:
[...truncated 56022 lines...]
[repro] Jenkins log URL: 
https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1448/consoleText

[repro] Revision: 9b3d68843beb5e0a834d6847446a480742665805

[repro] Ant options: "-Dargs=-XX:+UseCompressedOops -XX:+UseParallelGC"
[repro] No "reproduce with" lines found; exiting.

[...truncated 8 lines...]
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[GitHub] lucene-solr issue #302: LUCENE-8126: Spatial prefix tree based on S2 geometr...

2018-02-28 Thread dsmiley
Github user dsmiley commented on the issue:

https://github.com/apache/lucene-solr/pull/302
  
BTW when you're finally ready to merge into master & 7.3 please add just 
one commit with all these changes; don't rebase or merge all these commits onto 
the ASF git repo.  This will keep the history clean/simple.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




Re: Code Reviews

2018-02-28 Thread David Smiley
> To add to it, I think we should also wait before merging things to the
stable branch and commit only to master in case of non-trivial patches.

Maybe sometimes; a judgement call.  It can draw out how long it takes for
issues to get to completion, making it easier to forget that an issue isn't
quite complete yet.

On Wed, Feb 28, 2018 at 2:02 PM Anshum Gupta  wrote:

> +1 to the idea of code review before committing non-trivial patches.
>
> I do however worry about the cases when someone asks for feedback but
> doesn’t hear from anyone for reasonably long durations. In such situations
> perhaps a week should be good enough time to ask for feedback and wait
> before merging the code (to master).
>
> To add to it, I think we should also wait before merging things to the
> stable branch and commit only to master in case of non-trivial patches. I
> may be mixing two things here but I feel they are related. We used to
> almost always only commit to master and wait for stuff to bake until a
> while ago but I think that’s not the practice anymore.
>
> Overall, I’m +1 on this!
>
> Anshum
>
> On Feb 28, 2018, at 23:40, David Smiley  wrote:
>
> +1 I'm comfortable with that.   And I don't think this rule should apply
> to Solr alone; it should apply to Lucene as well, even though a greater
> percentage of issues there get reviews.
>
> I think we all appreciate the value of code reviews -- no convincing of
> that needed.  The challenge this will create is actually getting one,
> especially for those of us who submit patches that don't have
> collaborators.  This goes for a chunk of my work (Lucene/Solr alike).  I
> think I'll just ask/suggest for individuals to review that are likely to
> take an interest.
>
> On Wed, Feb 28, 2018 at 12:59 PM Tomas Fernandez Lobbe 
> wrote:
>
>> In an effort to improve code quality, I’d like to suggest that we start
>> requiring code review to non-trivial patches. Not sure if/how other open
>> source projects are doing code reviews, but I’ve been using it in internal
>> projects for many years and it’s a great way to catch bugs early, some of
>> them very difficult to catch in unit tests, like “You are breaking API
>> compatibility with this change”, or “you are swallowing
>> InterruptedExceptions”, etc. It is also a great way to standardize a bit
>> more our code base and to encourage community members to review and learn
>> the code.
>> In Lucene-land, this is already a common practice but on the Solr side is
>> rare to see. It is common on Solr that committer A doesn’t know much about
>> component X, so reviewing that may sound useless, but even in that case you
>> can provide feedback on the code itself being added (and in the meantime
>> learn something about component X).
>>
>> What do people think about it?
>>
>> Regarding tools to do it, I’m open to suggestions. I really like Github
>> PRs, that now are easy to integrate with Jira and you can create PRs from
>> forks or even from two existing branches of the official repo. Also, since
>> people are really familiar with them, I expect to encourage reviewers by
>> using them.
>>
>> Tomás
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 466 - Still unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/466/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=33399, name=jetty-launcher-7743-thread-2-SendThread(127.0.0.1:53461), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)   
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)   
 2) Thread[id=33400, name=jetty-launcher-7743-thread-2-EventThread, 
state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=33399, 
name=jetty-launcher-7743-thread-2-SendThread(127.0.0.1:53461), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)
   2) Thread[id=33400, name=jetty-launcher-7743-thread-2-EventThread, 
state=WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
at __randomizedtesting.SeedInfo.seed([DB2EA697A77E4AE7]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=33399, name=jetty-launcher-7743-thread-2-SendThread(127.0.0.1:53461), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at java.lang.Thread.sleep(Native Method) at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)   
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=33399, 
name=jetty-launcher-7743-thread-2-SendThread(127.0.0.1:53461), 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
at 
org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)
at __randomizedtesting.SeedInfo.seed([DB2EA697A77E4AE7]:0)




Build Log:
[...truncated 14005 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation
   [junit4]   2> 3276387 INFO  
(SUITE-TestSolrCloudWithSecureImpersonation-seed#[DB2EA697A77E4AE7]-worker) [   
 ] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-7.x-Solaris/solr/build/solr-core/test/J0/temp/solr.cloud.TestSolrCloudWithSecureImpersonation_DB2EA697A77E4AE7-001/init-core-data-001
   [junit4]   2> 3276388 INFO  
(SUITE-TestSolrCloudWithSecureImpersonation-seed#[DB2EA697A77E4AE7]-worker) [   
 ] o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 3276389 INFO  
(SUITE-TestSolrCloudWithSecureImpersonation-seed#[DB2EA697A77E4AE7]-worker) [   
 ] o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, 

Re: Code Reviews

2018-02-28 Thread Anshum Gupta
+1 to the idea of code review before committing non-trivial patches. 

I do however worry about the cases when someone asks for feedback but doesn’t 
hear from anyone for reasonably long durations. In such situations perhaps a 
week should be good enough time to ask for feedback and wait before merging the 
code (to master). 

To add to it, I think we should also wait before merging things to the stable 
branch and commit only to master in case of non-trivial patches. I may be 
mixing two things here but I feel they are related. Until a while ago we used to 
almost always commit only to master and wait for stuff to bake, but I think 
that’s not the practice anymore.

Overall, I’m +1 on this!

Anshum

> On Feb 28, 2018, at 23:40, David Smiley  wrote:
> 
> +1 I'm comfortable with that.   And I don't think this rule should apply to 
> Solr alone; it should apply to Lucene as well, even though a greater 
> percentage of issues there get reviews.
> 
> I think we all appreciate the value of code reviews -- no convincing of that 
> needed.  The challenge this will create is actually getting one, especially 
> for those of us who submit patches that don't have collaborators.  This goes 
> for a chunk of my work (Lucene/Solr alike).  I think I'll just ask/suggest 
> for individuals to review that are likely to take an interest.
> 
>> On Wed, Feb 28, 2018 at 12:59 PM Tomas Fernandez Lobbe  
>> wrote:
>> In an effort to improve code quality, I’d like to suggest that we start 
>> requiring code review for non-trivial patches. Not sure if/how other open 
>> source projects are doing code reviews, but I’ve been using it in internal 
>> projects for many years and it’s a great way to catch bugs early, some of 
>> them very difficult to catch in unit tests, like “You are breaking API 
>> compatibility with this change”, or “you are swallowing 
>> InterruptedExceptions”, etc. It is also a great way to standardize a bit 
>> more our code base and to encourage community members to review and learn 
>> the code.
>> In Lucene-land, this is already a common practice but on the Solr side it is 
>> rare to see. It is common on Solr that committer A doesn’t know much about 
>> component X, so reviewing that may sound useless, but even in that case you 
>> can provide feedback on the code itself being added (and in the meantime 
>> learn something about component X).
>> 
>> What do people think about it?
>> 
>> Regarding tools to do it, I’m open to suggestions. I really like Github PRs, 
>> that now are easy to integrate with Jira and you can create PRs from forks 
>> or even from two existing branches of the official repo. Also, since people 
>> are really familiar with them, I expect to encourage reviewers by using them.
>> 
>> Tomás
> -- 
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
> http://www.solrenterprisesearchserver.com


Re: Code Reviews

2018-02-28 Thread Joel Bernstein
Ok, so it's clear what you're proposing then. You want to change the CTR
policy. That is indeed quite a big proposal. As I mentioned I'm personally
for CTR, but it would be good to hear other people's thoughts on this.

Joel Bernstein
http://joelsolr.blogspot.com/

On Wed, Feb 28, 2018 at 1:30 PM, Tomas Fernandez Lobbe 
wrote:

> I’m not sure how CTR was put in place either, but it was done 10+ years
> ago, when Solr had less than 1/10 of the committers it has now and who
> knows how many less production deployments/users. Now Solr is a completely
> different project than back then, and what was the correct process then may
> not be the correct process now. I’m happy to trade some development speed
> for code quality.
>
> I think just saying “anyone can ask for a review” is not going to be good
> enough; that is the case right now, and it rarely happens.
>
> Tomás
>
>
> On Feb 28, 2018, at 10:17 AM, Joel Bernstein  wrote:
>
> I agree that code reviews would be a good idea. But to require code
> reviews before committing would be a big change in practice for the Solr
> committers. I'm not sure how the commit-then-review policy was put in
> place or what it would mean to change that. Also I would probably
> personally vote against a change to the commit-then-review policy.
>
> But, I would be open to encouraging a culture of code review like there is
> in Lucene.
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Wed, Feb 28, 2018 at 12:59 PM, Tomas Fernandez Lobbe  > wrote:
>
>> In an effort to improve code quality, I’d like to suggest that we start
>> requiring code review for non-trivial patches. Not sure if/how other open
>> source projects are doing code reviews, but I’ve been using it in internal
>> projects for many years and it’s a great way to catch bugs early, some of
>> them very difficult to catch in unit tests, like “You are breaking API
>> compatibility with this change”, or “you are swallowing
>> InterruptedExceptions”, etc. It is also a great way to standardize a bit
>> more our code base and to encourage community members to review and learn
>> the code.
>> In Lucene-land, this is already a common practice but on the Solr side it is
>> rare to see. It is common on Solr that committer A doesn’t know much about
>> component X, so reviewing that may sound useless, but even in that case you
>> can provide feedback on the code itself being added (and in the meantime
>> learn something about component X).
>>
>> What do people think about it?
>>
>> Regarding tools to do it, I’m open to suggestions. I really like Github
>> PRs, that now are easy to integrate with Jira and you can create PRs from
>> forks or even from two existing branches of the official repo. Also, since
>> people are really familiar with them, I expect to encourage reviewers by
>> using them.
>>
>> Tomás
>>
>
>
>


Re: Code Reviews

2018-02-28 Thread Tomas Fernandez Lobbe
I’m not sure how CTR was put in place either, but it was done 10+ years ago, 
when Solr had less than 1/10 of the committers it has now and who knows how 
many less production deployments/users. Now Solr is a completely different 
project than back then, and what was the correct process then may not be the 
correct process now. I’m happy to trade some development speed for code quality.

I think just saying “anyone can ask for a review” is not going to be good 
enough; that is already the case right now, and it rarely happens. 

Tomás

> On Feb 28, 2018, at 10:17 AM, Joel Bernstein  wrote:
> 
> I agree that code reviews would be a good idea. But to require code reviews 
> before committing would be a big change in practice for the Solr committers. 
> I'm not sure how the commit-then-review policy was put in place or what it 
> would mean to change that. Also I would probably personally vote against a 
> change to the commit-then-review policy.
> 
> But, I would be open to encouraging a culture of code review like there is in 
> Lucene.
> 
> Joel Bernstein
> http://joelsolr.blogspot.com/ 
> 
> On Wed, Feb 28, 2018 at 12:59 PM, Tomas Fernandez Lobbe  > wrote:
> In an effort to improve code quality, I’d like to suggest that we start 
> requiring code review for non-trivial patches. Not sure if/how other open 
> source projects are doing code reviews, but I’ve been using it in internal 
> projects for many years and it’s a great way to catch bugs early, some of 
> them very difficult to catch in unit tests, like “You are breaking API 
> compatibility with this change”, or “you are swallowing 
> InterruptedExceptions”, etc. It is also a great way to standardize a bit more 
> our code base and to encourage community members to review and learn the 
> code.
> In Lucene-land, this is already a common practice but on the Solr side it is 
> rare to see. It is common on Solr that committer A doesn’t know much about 
> component X, so reviewing that may sound useless, but even in that case you 
> can provide feedback on the code itself being added (and in the meantime 
> learn something about component X).
> 
> What do people think about it?
> 
> Regarding tools to do it, I’m open to suggestions. I really like Github PRs, 
> that now are easy to integrate with Jira and you can create PRs from forks or 
> even from two existing branches of the official repo. Also, since people are 
> really familiar with them, I expect to encourage reviewers by using them.
> 
> Tomás
> 



Re: BinaryDocValues prefix bytes

2018-02-28 Thread Ryan Ernst
This is how Elasticsearch encodes binary values. The first value is a vint
containing the number of values for the field. In Lucene, binary doc values
do not have a concept of "multi valued"; the data is opaque.
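
Assuming that layout ([vint value count][vint value length][payload], which is
consistent with the leading 1 and 8 bytes reported above), a minimal decoding
sketch using Lucene's ByteArrayDataInput might look like the following. This is
not an authoritative description of Elasticsearch's format, and the field name
is made up:

    import org.apache.lucene.index.BinaryDocValues;
    import org.apache.lucene.index.LeafReader;
    import org.apache.lucene.store.ByteArrayDataInput;
    import org.apache.lucene.util.BytesRef;

    // Sketch only: unwrap a value stored as [vint count][vint length][payload]
    // for one document, returning the first payload.
    static byte[] firstBinaryValue(LeafReader reader, int docId) throws java.io.IOException {
      BinaryDocValues dv = reader.getBinaryDocValues("my_binary_field"); // hypothetical field name
      BytesRef raw = dv.get(docId);              // the get(docID) API referenced in the question
      BytesRef copy = BytesRef.deepCopyOf(raw);  // the returned BytesRef may be reused; copy before keeping it
      ByteArrayDataInput in = new ByteArrayDataInput(copy.bytes, copy.offset, copy.length);
      int count = in.readVInt();                 // the leading byte equal to 1 observed above
      int length = in.readVInt();                // the byte equal to 8 observed above
      byte[] value = new byte[length];
      in.readBytes(value, 0, length);            // the original 8-byte payload
      return value;
    }

On recent Lucene versions BinaryDocValues is an iterator (advanceExact(docId)
followed by binaryValue()) rather than get(docID), but the decoding of the
prefix bytes is the same.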

On Wed, Feb 28, 2018 at 8:25 AM Dominik Safaric 
wrote:

> No I'm not. The values are being stored through ElasticSearch into a
> binary doc value as a base 64 encoded string.
>
> 2018-02-28 16:00 GMT+01:00 David Smiley :
>
>> This can't be; it must be a bug.  Perhaps you are saving away the
>> BytesRef by reference across multiple invocations?  That won't work; you
>> may have to clone/copy it.
>>
>> On Wed, Feb 28, 2018 at 9:53 AM Dominik Safaric 
>> wrote:
>>
>>> Hi,
>>>
>>> I have an index where I'm storing a binary doc value equal to
>>> a serialized 8 byte value. The values are consumed by a custom Query
>>> implementation, using LeafReader.getBinaryDocValues().
>>>
>>> However, what I found is the following. To each binary doc value
>>> returned by BinaryDocValues.get(docID), a sequence of two bytes is
>>> appended. In particular, at the first position it is always a byte equal to
>>> 1, whereas at the second position always a byte equal to 8. Hence, the
>>> length of the retrieved byte array is always equal to 10, and not 8 as
>>> stored.
>>>
>>> Could someone please explain why these bytes are being appended at the
>>> head of the array, where they are appended, and how to get the
>>> original value?
>>>
>>> Kind regards,
>>> Dominik
>>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>
>


Re: Code Reviews

2018-02-28 Thread Joel Bernstein
I agree that code reviews would be a good idea. But to require code reviews
before committing would be a big change in practice for the Solr
committers. I'm not sure how the commit-then-review policy was put in
place or what it would mean to change that. Also I would probably
personally vote against a change to the commit-then-review policy.

But, I would be open to encouraging a culture of code review like there is
in Lucene.

Joel Bernstein
http://joelsolr.blogspot.com/

On Wed, Feb 28, 2018 at 12:59 PM, Tomas Fernandez Lobbe 
wrote:

> In an effort to improve code quality, I’d like to suggest that we start
> requiring code review for non-trivial patches. Not sure if/how other open
> source projects are doing code reviews, but I’ve been using it in internal
> projects for many years and it’s a great way to catch bugs early, some of
> them very difficult to catch in unit tests, like “You are breaking API
> compatibility with this change”, or “you are swallowing
> InterruptedExceptions”, etc. It is also a great way to standardize a bit
> more our code base and to encourage community members to review and learn
> the code.
> In Lucene-land, this is already a common practice but on the Solr side it is
> rare to see. It is common on Solr that committer A doesn’t know much about
> component X, so reviewing that may sound useless, but even in that case you
> can provide feedback on the code itself being added (and in the meantime
> learn something about component X).
>
> What do people think about it?
>
> Regarding tools to do it, I’m open to suggestions. I really like Github
> PRs, that now are easy to integrate with Jira and you can create PRs from
> forks or even from two existing branches of the official repo. Also, since
> people are really familiar with them, I expect to encourage reviewers by
> using them.
>
> Tomás
>


Re: Code Reviews

2018-02-28 Thread David Smiley
+1 I'm comfortable with that.   And I don't think this rule should apply to
Solr alone; it should apply to Lucene as well, even though a greater
percentage of issues there get reviews.

I think we all appreciate the value of code reviews -- no convincing of
that needed.  The challenge this will create is actually getting one,
especially for those of us who submit patches that don't have
collaborators.  This goes for a chunk of my work (Lucene/Solr alike).  I
think I'll just ask/suggest individuals who are likely to take an interest
to review.

On Wed, Feb 28, 2018 at 12:59 PM Tomas Fernandez Lobbe 
wrote:

> In an effort to improve code quality, I’d like to suggest that we start
> requiring code review for non-trivial patches. Not sure if/how other open
> source projects are doing code reviews, but I’ve been using it in internal
> projects for many years and it’s a great way to catch bugs early, some of
> them very difficult to catch in unit tests, like “You are breaking API
> compatibility with this change”, or “you are swallowing
> InterruptedExceptions”, etc. It is also a great way to standardize a bit
> more our code base and to encourage community members to review and learn
> the code.
> In Lucene-land, this is already a common practice but on the Solr side it is
> rare to see. It is common on Solr that committer A doesn’t know much about
> component X, so reviewing that may sound useless, but even in that case you
> can provide feedback on the code itself being added (and in the meantime
> learn something about component X).
>
> What do people think about it?
>
> Regarding tools to do it, I’m open to suggestions. I really like Github
> PRs, that now are easy to integrate with Jira and you can create PRs from
> forks or even from two existing branches of the official repo. Also, since
> people are really familiar with them, I expect to encourage reviewers by
> using them.
>
> Tomás
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (SOLR-10912) Adding automatic patch validation

2018-02-28 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380775#comment-16380775
 ] 

Allen Wittenauer commented on SOLR-10912:
-

Github PR support is sort of there.  

test-patch does support them. It can take a github PR either directly on the command line or 
passed via a JIRA.  If it gets told to test a JIRA that references a github PR, 
it will defer to the PR as the source of the patch.  In other words, if a JIRA 
issue references a github PR and has a patch attached, it will use the github 
PR and ignore the attachments.

However!

The job on Jenkins that feeds test-patch is *NOT* github aware.  The original 
version was built before github integration existed.  To make matters worse, 
that code was locked away in a repository no one really had access to modify.  
As of a month or so ago, that code is now part of Apache Yetus ( 
https://github.com/apache/yetus/blob/master/precommit/jenkins/jenkins-admin.py 
), so there is an opportunity for us to fix this problem and add better 
asf<->github integration.

> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
>Priority: Major
> Attachments: SOLR-10912.ok-patch-in-core.patch, 
> SOLR-10912.sample-patch.patch, SOLR-10912.solj-contrib-facet-error.patch
>
>
> Proposing introduction of automated patch validation, similar what Hadoop or 
> other Apache projects are using (see link). This would ensure that every 
> patch passes a certain set of criterions before getting approved. It would 
> save time for developer (faster feedback loop), save time for committers 
> (less step to do manually), and would increase quality.
> Hadoop is currently using Apache Yetus to run validations, which seems to be 
> a good direction to start. This jira could be the board of discussing the 
> preferred solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 1 - Unstable

2018-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/1/

8 tests failed.
FAILED:  org.apache.solr.ltr.TestLTRReRankingPipeline.testDifferentTopN

Error Message:
expected:<1.0> but was:<0.0>

Stack Trace:
java.lang.AssertionError: expected:<1.0> but was:<0.0>
at 
__randomizedtesting.SeedInfo.seed([F3E2C625FBD99241:243B475CE6258D3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.solr.ltr.TestLTRReRankingPipeline.testDifferentTopN(TestLTRReRankingPipeline.java:255)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.SSLMigrationTest

Error Message:
ObjectTracker found 2 object(s) that were not released!!! [InternalHttpClient, 
InternalHttpClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:289)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:298)
  

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21553 - Still Unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21553/
Java: 32bit/jdk1.8.0_162 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate

Error Message:
Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 04:35:51 
AKST 2008)

Stack Trace:
java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != 
1226579751000 (Thu Nov 13 04:35:51 AKST 2008)
at 
__randomizedtesting.SeedInfo.seed([81A22712E32F:CBBB5F27988977BA]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59)
at 
org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 21007 lines...]
   [junit4] Suite: org.apache.solr.handler.extraction.TestExtractionDateUtil
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestExtractionDateUtil -Dtests.method=testParseDate 
-Dtests.seed=81A22712E32F -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=nl-BE -Dtests.timezone=America/Metlakatla -Dtests.asserts=true 

Code Reviews

2018-02-28 Thread Tomas Fernandez Lobbe
In an effort to improve code quality, I’d like to suggest that we start 
requiring code review for non-trivial patches. Not sure if/how other open source 
projects are doing code reviews, but I’ve been using it in internal projects 
for many years and it’s a great way to catch bugs early, some of them very 
difficult to catch in unit tests, like “You are breaking API compatibility with 
this change”, or “you are swallowing InterruptedExceptions”, etc. It is also a 
great way to standardize a bit more our code base and to encourage community 
members to review and learn the code.
In Lucene-land, this is already a common practice but on the Solr side it is rare 
to see. It is common on Solr that committer A doesn’t know much about component 
X, so reviewing that may sound useless, but even in that case you can provide 
feedback on the code itself being added (and in the meantime learn something 
about component X).

What do people think about it?

Regarding tools to do it, I’m open to suggestions. I really like Github PRs, 
that now are easy to integrate with Jira and you can create PRs from forks or 
even from two existing branches of the official repo. Also, since people are 
really familiar with them, I expect to encourage reviewers by using them.

Tomás

[jira] [Commented] (SOLR-11947) 7.3 Streaming Expression Documentation

2018-02-28 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380757#comment-16380757
 ] 

Joel Bernstein commented on SOLR-11947:
---

Added work in progression, including new user guide pages.

> 7.3 Streaming Expression Documentation
> --
>
> Key: SOLR-11947
> URL: https://issues.apache.org/jira/browse/SOLR-11947
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11947.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11947) 7.3 Streaming Expression Documentation

2018-02-28 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11947?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11947:
--
Attachment: SOLR-11947.patch

> 7.3 Streaming Expression Documentation
> --
>
> Key: SOLR-11947
> URL: https://issues.apache.org/jira/browse/SOLR-11947
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11947.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11769) Sorting performance degrades when useFilterForSortedQuery is enabled and there is no filter query specified

2018-02-28 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-11769.
-
   Resolution: Fixed
Fix Version/s: 7.3

Thanks for reporting this.

> Sorting performance degrades when useFilterForSortedQuery is enabled and 
> there is no filter query specified
> ---
>
> Key: SOLR-11769
> URL: https://issues.apache.org/jira/browse/SOLR-11769
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 4.10.4
> Environment: OS: macOS Sierra (version 10.12.4)
> Memory: 16GB
> CPU: 2.9 GHz Intel Core i7
> Java Version: 1.8
>Reporter: Betim Deva
>Assignee: David Smiley
>Priority: Major
>  Labels: performance
> Fix For: 7.3
>
> Attachments: SOLR-11769_Optimize_MatchAllDocsQuery_more.patch
>
>
> The performance of sorting degrades significantly when the 
> {{useFilterForSortedQuery}} is enabled, and there's no filter query specified.
> *Steps to Reproduce:*
> 1. Set {{useFilterForSortedQuery=true}} in {{solrconfig.xml}}
> 2. Run a  query to match and return a single document. Also add sorting
> - Example {{/select?q=foo:123=bar+desc}}
> Having a large index (> 10 million documents), this yields a slow response 
> (a few hundreds of milliseconds on average) even when the resulting set 
> consists of a single document.
> *Observation 1:*
> - Disabling {{useFilterForSortedQuery}} improves the performance to < 1ms
> *Observation 2:*
> - Removing the {{sort}} improves the performance to < 1ms
> *Observation 3:*
> - Keeping the {{sort}}, and adding any filter query (such as {{fq=\*:\*}}) 
> improves the performance to < 1 ms.
> After profiling 
> [SolrIndexSearcher.java|https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java;h=9ee5199bdf7511c70f2cc616c123292c97d36b5b;hb=HEAD#l1400]
>  found that the bottleneck is on 
> {{DocSet bigFilt = getDocSet(cmd.getFilterList());}} 
> when {{cmd.getFilterList()}} is passed in as {{null}}. This makes the 
> {{getDocSet()}} function collect document ids every single time it is called 
> without any caching.
> {code:java}
> 1394 if (useFilterCache) {
> 1395   // now actually use the filter cache.
> 1396   // for large filters that match few documents, this may be
> 1397   // slower than simply re-executing the query.
> 1398   if (out.docSet == null) {
> 1399 out.docSet = getDocSet(cmd.getQuery(), cmd.getFilter());
> 1400 DocSet bigFilt = getDocSet(cmd.getFilterList());
> 1401 if (bigFilt != null) out.docSet = 
> out.docSet.intersection(bigFilt);
> 1402   }
> 1403   // todo: there could be a sortDocSet that could take a list of
> 1404   // the filters instead of anding them first...
> 1405   // perhaps there should be a multi-docset-iterator
> 1406   sortDocSet(qr, cmd);
> 1407 }
> {code}
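
For reference, the fix for this issue (committed as "SOLR-11769: optimize 
useFilterForSortedQuery=true when no filter queries") short-circuits the 
filter-cache path when no filter queries are present. A rough sketch of that 
kind of guard over the snippet quoted above, not the literal committed diff, 
would be:

{code:java}
// Sketch only (not the exact committed change): skip the getDocSet(null) call,
// which otherwise collects every document id, when no filter queries were given.
if (useFilterCache) {
  if (out.docSet == null) {
    out.docSet = getDocSet(cmd.getQuery(), cmd.getFilter());
    List<Query> filterList = cmd.getFilterList();
    if (filterList != null && !filterList.isEmpty()) {
      DocSet bigFilt = getDocSet(filterList);        // only pay this cost when filters exist
      out.docSet = out.docSet.intersection(bigFilt);
    }
  }
  sortDocSet(qr, cmd);
}
{code}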



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11769) Sorting performance degrades when useFilterForSortedQuery is enabled and there is no filter query specified

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380721#comment-16380721
 ] 

ASF subversion and git services commented on SOLR-11769:


Commit 9b3d68843beb5e0a834d6847446a480742665805 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b3d688 ]

SOLR-11769: optimize useFilterForSortedQuery=true when no filter queries

(cherry picked from commit ef98912)


> Sorting performance degrades when useFilterForSortedQuery is enabled and 
> there is no filter query specified
> ---
>
> Key: SOLR-11769
> URL: https://issues.apache.org/jira/browse/SOLR-11769
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 4.10.4
> Environment: OS: macOS Sierra (version 10.12.4)
> Memory: 16GB
> CPU: 2.9 GHz Intel Core i7
> Java Version: 1.8
>Reporter: Betim Deva
>Assignee: David Smiley
>Priority: Major
>  Labels: performance
> Attachments: SOLR-11769_Optimize_MatchAllDocsQuery_more.patch
>
>
> The performance of sorting degrades significantly when the 
> {{useFilterForSortedQuery}} is enabled, and there's no filter query specified.
> *Steps to Reproduce:*
> 1. Set {{useFilterForSortedQuery=true}} in {{solrconfig.xml}}
> 2. Run a  query to match and return a single document. Also add sorting
> - Example {{/select?q=foo:123=bar+desc}}
> Having a large index (> 10 million documents), this yields a slow response 
> (a few hundreds of milliseconds on average) even when the resulting set 
> consists of a single document.
> *Observation 1:*
> - Disabling {{useFilterForSortedQuery}} improves the performance to < 1ms
> *Observation 2:*
> - Removing the {{sort}} improves the performance to < 1ms
> *Observation 3:*
> - Keeping the {{sort}}, and adding any filter query (such as {{fq=\*:\*}}) 
> improves the performance to < 1 ms.
> After profiling 
> [SolrIndexSearcher.java|https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java;h=9ee5199bdf7511c70f2cc616c123292c97d36b5b;hb=HEAD#l1400]
>  found that the bottleneck is on 
> {{DocSet bigFilt = getDocSet(cmd.getFilterList());}} 
> when {{cmd.getFilterList()}} is passed in as {{null}}. This makes the 
> {{getDocSet()}} function collect document ids every single time it is called 
> without any caching.
> {code:java}
> 1394 if (useFilterCache) {
> 1395   // now actually use the filter cache.
> 1396   // for large filters that match few documents, this may be
> 1397   // slower than simply re-executing the query.
> 1398   if (out.docSet == null) {
> 1399 out.docSet = getDocSet(cmd.getQuery(), cmd.getFilter());
> 1400 DocSet bigFilt = getDocSet(cmd.getFilterList());
> 1401 if (bigFilt != null) out.docSet = 
> out.docSet.intersection(bigFilt);
> 1402   }
> 1403   // todo: there could be a sortDocSet that could take a list of
> 1404   // the filters instead of anding them first...
> 1405   // perhaps there should be a multi-docset-iterator
> 1406   sortDocSet(qr, cmd);
> 1407 }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11769) Sorting performance degrades when useFilterForSortedQuery is enabled and there is no filter query specified

2018-02-28 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380719#comment-16380719
 ] 

ASF subversion and git services commented on SOLR-11769:


Commit ef989124f345af46a905d1196bc589ef37b221c9 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ef98912 ]

SOLR-11769: optimize useFilterForSortedQuery=true when no filter queries


> Sorting performance degrades when useFilterForSortedQuery is enabled and 
> there is no filter query specified
> ---
>
> Key: SOLR-11769
> URL: https://issues.apache.org/jira/browse/SOLR-11769
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 4.10.4
> Environment: OS: macOS Sierra (version 10.12.4)
> Memory: 16GB
> CPU: 2.9 GHz Intel Core i7
> Java Version: 1.8
>Reporter: Betim Deva
>Assignee: David Smiley
>Priority: Major
>  Labels: performance
> Attachments: SOLR-11769_Optimize_MatchAllDocsQuery_more.patch
>
>
> The performance of sorting degrades significantly when the 
> {{useFilterForSortedQuery}} is enabled, and there's no filter query specified.
> *Steps to Reproduce:*
> 1. Set {{useFilterForSortedQuery=true}} in {{solrconfig.xml}}
> 2. Run a  query to match and return a single document. Also add sorting
> - Example {{/select?q=foo:123=bar+desc}}
> Having a large index (> 10 million documents), this yields a slow response 
> (a few hundreds of milliseconds on average) even when the resulting set 
> consists of a single document.
> *Observation 1:*
> - Disabling {{useFilterForSortedQuery}} improves the performance to < 1ms
> *Observation 2:*
> - Removing the {{sort}} improves the performance to < 1ms
> *Observation 3:*
> - Keeping the {{sort}}, and adding any filter query (such as {{fq=\*:\*}}) 
> improves the performance to < 1 ms.
> After profiling 
> [SolrIndexSearcher.java|https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=solr/core/src/java/org/apache/solr/search/SolrIndexSearcher.java;h=9ee5199bdf7511c70f2cc616c123292c97d36b5b;hb=HEAD#l1400]
>  found that the bottleneck is on 
> {{DocSet bigFilt = getDocSet(cmd.getFilterList());}} 
> when {{cmd.getFilterList()}} is passed in as {{null}}. This makes the 
> {{getDocSet()}} function collect document ids every single time it is called 
> without any caching.
> {code:java}
> 1394 if (useFilterCache) {
> 1395   // now actually use the filter cache.
> 1396   // for large filters that match few documents, this may be
> 1397   // slower than simply re-executing the query.
> 1398   if (out.docSet == null) {
> 1399 out.docSet = getDocSet(cmd.getQuery(), cmd.getFilter());
> 1400 DocSet bigFilt = getDocSet(cmd.getFilterList());
> 1401 if (bigFilt != null) out.docSet = 
> out.docSet.intersection(bigFilt);
> 1402   }
> 1403   // todo: there could be a sortDocSet that could take a list of
> 1404   // the filters instead of anding them first...
> 1405   // perhaps there should be a multi-docset-iterator
> 1406   sortDocSet(qr, cmd);
> 1407 }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1704 - Failure!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1704/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

No tests ran.

Build Log:
[...truncated 13359 lines...]
   [junit4] Suite: org.apache.solr.TestDistributedSearch
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_21DE6392D67EB465-001/init-core-data-001
   [junit4]   2> 2824176 WARN  
(SUITE-TestDistributedSearch-seed#[21DE6392D67EB465]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=381 numCloses=381
   [junit4]   2> 2824177 INFO  
(SUITE-TestDistributedSearch-seed#[21DE6392D67EB465]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 2824178 INFO  
(SUITE-TestDistributedSearch-seed#[21DE6392D67EB465]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.SolrTestCaseJ4$SuppressSSL(bugUrl=https://issues.apache.org/jira/browse/SOLR-9061)
   [junit4]   2> 2824178 INFO  
(SUITE-TestDistributedSearch-seed#[21DE6392D67EB465]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 2824179 INFO  
(SUITE-TestDistributedSearch-seed#[21DE6392D67EB465]-worker) [] 
o.a.s.BaseDistributedSearchTestCase Setting hostContext system property: /
   [junit4]   2> 2824587 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_21DE6392D67EB465-001/tempDir-001/control/cores/collection1
   [junit4]   2> 2824589 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] o.e.j.s.Server 
jetty-9.4.8.v20171121, build timestamp: 2017-11-21T17:27:37-04:00, git hash: 
82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 2824613 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 2824613 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 2824613 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.e.j.s.session Scavenging every 66ms
   [junit4]   2> 2824613 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@49391fd8{/,null,AVAILABLE}
   [junit4]   2> 2824615 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@3e1ae821{HTTP/1.1,[http/1.1]}{127.0.0.1:49055}
   [junit4]   2> 2824615 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] o.e.j.s.Server 
Started @2828909ms
   [junit4]   2> 2824615 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/, hostPort=49055, 
coreRootDirectory=/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_21DE6392D67EB465-001/tempDir-001/control/cores}
   [junit4]   2> 2824615 ERROR 
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 2824616 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.s.SolrDispatchFilter  ___  _   Welcome to Apache Solr™ version 
8.0.0
   [junit4]   2> 2824616 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.s.SolrDispatchFilter / __| ___| |_ _   Starting in standalone mode on 
port null
   [junit4]   2> 2824616 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.s.SolrDispatchFilter \__ \/ _ \ | '_|  Install dir: null
   [junit4]   2> 2824616 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.s.SolrDispatchFilter |___/\___/_|_|Start time: 
2018-02-28T16:34:29.045Z
   [junit4]   2> 2824616 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.c.SolrXmlConfig Loading container configuration from 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.TestDistributedSearch_21DE6392D67EB465-001/tempDir-001/control/solr.xml
   [junit4]   2> 2824620 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.c.SolrXmlConfig Configuration parameter autoReplicaFailoverWorkLoopDelay 
is ignored
   [junit4]   2> 2824620 INFO  
(TEST-TestDistributedSearch.test-seed#[21DE6392D67EB465]) [] 
o.a.s.c.SolrXmlConfig Configuration parameter 
autoReplicaFailoverBadNodeExpiration 

Re: BinaryDocValues prefix bytes

2018-02-28 Thread Dominik Safaric
No I'm not. The values are being stored through ElasticSearch into a binary
doc value as a base 64 encoded string.

2018-02-28 16:00 GMT+01:00 David Smiley :

> This can't be; it must be a bug.  Perhaps you are saving away the BytesRef
> by reference across multiple invocations?  That won't work; you may have to
> clone/copy it.
>
> On Wed, Feb 28, 2018 at 9:53 AM Dominik Safaric 
> wrote:
>
>> Hi,
>>
>> I have an index where I'm storing a binary doc value equal to a
>> serialized 8 byte value. The values are consumed by a custom Query
>> implementation, using LeafReader.getBinaryDocValues().
>>
>> However, what I found is the following. To each binary doc value returned
>> by BinaryDocValues.get(docID), a sequence of two bytes is appended. In
>> particular, at the first position it is always a byte equal to 1, whereas
>> at the second position always a byte equal to 8. Hence, the length of the
>> retrieved byte array is always equal to 10, and not 8 as stored.
>>
>> Could someone please explain why these bytes are being appended at the
>> head of the array, where they are appended, and how to get the
>> original value?
>>
>> Kind regards,
>> Dominik
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: http://www.
> solrenterprisesearchserver.com
>


[JENKINS] Lucene-Solr-repro - Build # 161 - Unstable

2018-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/161/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/161/consoleText

[repro] Revision: 601c7350ce459f60b7e9fea4dfa46793f254f7c8

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestReplicationHandler 
-Dtests.method=doTestReplicateAfterCoreReload -Dtests.seed=9CCB1EDD4FB4 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-JO -Dtests.timezone=Asia/Sakhalin -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=CollectionsAPIAsyncDistributedZkTest 
-Dtests.method=testAsyncRequests -Dtests.seed=9CCB1EDD4FB4 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sq -Dtests.timezone=Asia/Thimbu -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
7dba350c7a02fe603faec49227ff2672e4d8e6ae
[repro] git fetch
[repro] git checkout 601c7350ce459f60b7e9fea4dfa46793f254f7c8

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   CollectionsAPIAsyncDistributedZkTest
[repro]   TestReplicationHandler
[repro] ant compile-test

[...truncated 3310 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.CollectionsAPIAsyncDistributedZkTest|*.TestReplicationHandler" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=9CCB1EDD4FB4 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=sq -Dtests.timezone=Asia/Thimbu -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 1272019 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.api.collections.CollectionsAPIAsyncDistributedZkTest
[repro]   5/5 failed: org.apache.solr.handler.TestReplicationHandler

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestReplicationHandler
[repro] ant compile-test

[...truncated 3310 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestReplicationHandler" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=9CCB1EDD4FB4 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-JO -Dtests.timezone=Asia/Sakhalin -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 1270358 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x:
[repro]   5/5 failed: org.apache.solr.handler.TestReplicationHandler

[repro] Re-testing 100% failures at the tip of branch_7x without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestReplicationHandler
[repro] ant compile-test

[...truncated 3310 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestReplicationHandler" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-JO -Dtests.timezone=Asia/Sakhalin -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 1312939 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of branch_7x without a seed:
[repro]   5/5 failed: org.apache.solr.handler.TestReplicationHandler
[repro] git checkout 7dba350c7a02fe603faec49227ff2672e4d8e6ae

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_162) - Build # 1447 - Still Failing!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1447/
Java: 64bit/jdk1.8.0_162 -XX:+UseCompressedOops -XX:+UseSerialGC

No tests ran.

Build Log:
[...truncated 62645 lines...]
[repro] Jenkins log URL: 
https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1447/consoleText

[repro] Revision: f48fc470f665d2eda1b959ec3472cd5f711afaa0

[repro] Ant options: "-Dargs=-XX:+UseCompressedOops -XX:+UseSerialGC"
[repro] No "reproduce with" lines found; exiting.

[...truncated 8 lines...]
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/var/lib/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-10028) SegmentsInfoRequestHandlerTest.testSegmentInfosVersion fails in master

2018-02-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380529#comment-16380529
 ] 

Steve Rowe commented on SOLR-10028:
---

Another reproducing master seed from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21552/]:

{noformat}
Checking out Revision 1485b7a4d72f5bf4123206d59716e75fb2706374 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=SegmentsInfoRequestHandlerTest 
-Dtests.method=testSegmentInfosVersion -Dtests.seed=13EBBD17E521ECA0 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ar-LY 
-Dtests.timezone=Asia/Shanghai -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] ERROR   0.01s J0 | 
SegmentsInfoRequestHandlerTest.testSegmentInfosVersion <<<
   [junit4]> Throwable #1: java.lang.RuntimeException: Exception during 
query
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([13EBBD17E521ECA0:EB3528FF9D4D3DF3]:0)
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:904)
   [junit4]>at 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion(SegmentsInfoRequestHandlerTest.java:68)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]> Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=2=count(//lst[@name='segments']/lst/str[@name='version'][.='8.0.0'])
   [junit4]>xml response was: 
   [junit4]> 
   [junit4]> 00_01153832018-02-28T14:42:32.213Zflush8.0.0_10145222018-02-28T14:42:32.216Zflush8.0.0_20147132018-02-28T14:42:32.226Zflush8.0.0_30145222018-02-28T14:42:32.229Zflush8.0.0
   [junit4]> 
   [junit4]>request was:qt=/admin/segments=xml
   [junit4]>at 
org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:897)
   [junit4]>... 40 more
[...]
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): 
{name=PostingsFormat(name=Memory), id=PostingsFormat(name=Memory)}, 
docValues:{}, maxPointsInLeafNode=84, maxMBSortInHeap=7.698471144201788, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@1aa475d),
 locale=ar-LY, timezone=Asia/Shanghai
   [junit4]   2> NOTE: Linux 4.13.0-32-generic i386/Oracle Corporation 
1.8.0_162 (32-bit)/cpus=8,threads=1,free=91685872,total=526123008
{noformat}

> SegmentsInfoRequestHandlerTest.testSegmentInfosVersion fails in master
> --
>
> Key: SOLR-10028
> URL: https://issues.apache.org/jira/browse/SOLR-10028
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
>
> Failed in Jenkins: 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1092/
> It reproduces consistently in my mac also with the latest master 
> (ca50e5b61c2d8bfb703169cea2fb0ab20fd24c6b):
> {code}
> ant test  -Dtestcase=SegmentsInfoRequestHandlerTest 
> -Dtests.method=testSegmentInfosVersion -Dtests.seed=619B9D838D6F1E29 
> -Dtests.slow=true -Dtests.locale=en-AU -Dtests.timezone=America/Manaus 
> -Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1
> {code}
> There are similar failures in previous Jenkins builds since last month



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12045) Move Analytics Component from contrib to core

2018-02-28 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380497#comment-16380497
 ] 

Houston Putman edited comment on SOLR-12045 at 2/28/18 3:49 PM:


I ran into some confusion when adding the {{/analytics}} handler as an implicit 
handler. In {{ImplicitPlugins.json}} each handler has a {{useParams}} option. I 
can't tell where this option is used, so I just followed the convention of the 
other handlers and put  {{"_ANALYTICS"}} for the {{/analytics}} handler.

If I need to actually add a param set somewhere, I'd be happy to. I just don't 
know where it would go.


was (Author: houstonputman):
I ran into some confusion when adding the `/analytics` handler as an implicit 
handler. In `ImplicitPlugins.json` each handler has a `useParams` option. I 
can't tell where this option is used, so I just followed the convention of the 
other handlers and put `"_ANALYTICS"` for the `/analytics` handler.

If I need to actually add a param set somewhere, I'd be happy to. I just don't 
know where it would go.

> Move Analytics Component from contrib to core
> -
>
> Key: SOLR-12045
> URL: https://issues.apache.org/jira/browse/SOLR-12045
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Priority: Major
> Fix For: master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Analytics Component currently lives in contrib. Since it includes no 
> external dependencies, there is no harm in moving it into core solr.
> The analytics component would be included as a default search component and 
> the analytics handler (currently only used for analytics shard requests, 
> might be transitioned to handle user requests in the future) would be 
> included as an implicit handler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12045) Move Analytics Component from contrib to core

2018-02-28 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380497#comment-16380497
 ] 

Houston Putman commented on SOLR-12045:
---

I ran into some confusion when adding the `/analytics` handler as an implicit 
handler. In `ImplicitPlugins.json` each handler has a `useParams` option. I 
can't tell where this option is used, so I just followed the convention of the 
other handlers and put `"_ANALYTICS"` for the `/analytics` handler.

If I need to actually add a param set somewhere, I'd be happy to. I just don't 
know where it would go.

> Move Analytics Component from contrib to core
> -
>
> Key: SOLR-12045
> URL: https://issues.apache.org/jira/browse/SOLR-12045
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>Priority: Major
> Fix For: master (8.0)
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Analytics Component currently lives in contrib. Since it includes no 
> external dependencies, there is no harm in moving it into core solr.
> The analytics component would be included as a default search component and 
> the analytics handler (currently only used for analytics shard requests, 
> might be transitioned to handle user requests in the future) would be 
> included as an implicit handler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #330: SOLR-12045: Moving the Analytics Component fr...

2018-02-28 Thread HoustonPutman
GitHub user HoustonPutman opened a pull request:

https://github.com/apache/lucene-solr/pull/330

SOLR-12045: Moving the Analytics Component from contrib to core.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/HoustonPutman/lucene-solr analytics-core

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/330.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #330


commit bb1518759a651565b4eabb39bc0f2c0ebd4716c7
Author: Houston Putman 
Date:   2018-02-28T15:37:07Z

SOLR-12045: Moving the Analytics Component from contrib to core.




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12045) Move Analytics Component from contrib to core

2018-02-28 Thread Houston Putman (JIRA)
Houston Putman created SOLR-12045:
-

 Summary: Move Analytics Component from contrib to core
 Key: SOLR-12045
 URL: https://issues.apache.org/jira/browse/SOLR-12045
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: master (8.0)
Reporter: Houston Putman
 Fix For: master (8.0)


The Analytics Component currently lives in contrib. Since it includes no 
external dependencies, there is no harm in moving it into core solr.

The analytics component would be included as a default search component and the 
analytics handler (currently only used for analytics shard requests, might be 
transitioned to handle user requests in the future) would be included as an 
implicit handler.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10894) Streaming expressions handling of escaped special characters bug

2018-02-28 Thread Houston Putman (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-10894:
--
Component/s: streaming expressions

> Streaming expressions handling of escaped special characters bug
> 
>
> Key: SOLR-10894
> URL: https://issues.apache.org/jira/browse/SOLR-10894
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Houston Putman
>Priority: Major
>
> Streaming expressions expect all special characters in named parameter values 
> to be singly escaped. Since queries can contain strings surrounded by double 
> quotes, double-escaping is necessary.
> Given the following query: 
> {{summary:"\"This is a summary\"\+"}}
> A streaming expression would require surrounding the query with double 
> quotes, therefore every special character in the query should be escaped: 
> {{select(collection,q="\"\\\"This is a summary\\\"\\\+\"",)}}
> Streaming expressions should unescape the strings contained within double 
> quotes, however currently they are only unescaping {{\" -> "}}. Therefore it 
> is impossible to query for text fields containing double quotes. Also other 
> special characters are not unescaped; this inconsistency causes confusion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 479 - Still Failing!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/479/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC

No tests ran.

Build Log:
[...truncated 55971 lines...]
[repro] Jenkins log URL: 
https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/479/consoleText

[repro] Revision: f48fc470f665d2eda1b959ec3472cd5f711afaa0

[repro] Ant options: "-Dargs=-XX:-UseCompressedOops -XX:+UseG1GC"
[repro] No "reproduce with" lines found; exiting.

[...truncated 8 lines...]
ERROR: Step ‘Publish JUnit test result report’ failed: No test report files 
were found. Configuration error?
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any
Setting 
ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2
Setting 
ANT_1_8_2_HOME=/Users/jenkins/tools/hudson.tasks.Ant_AntInstallation/ANT_1.8.2

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Created] (SOLR-12044) Optimize MatchAllDocsQuery for DocSets more

2018-02-28 Thread David Smiley (JIRA)
David Smiley created SOLR-12044:
---

 Summary: Optimize MatchAllDocsQuery for DocSets more
 Key: SOLR-12044
 URL: https://issues.apache.org/jira/browse/SOLR-12044
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


(forking from SOLR-11769)  I see places where we could better optimize some of 
the getDocSet related methods for when we have a MatchAllDocsQuery and some 
related optimizations.  For example, if the "live docs" is instantiated, we can 
just return that.  But we have to be mindful of the semantics of some of these 
get/create DocSet methods to be clear about whether a cached result can be 
returned or not so that the caller knows not to modify it.
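
A rough sketch of the kind of short-circuit being discussed (purely illustrative; the provider interface and method names below are stand-ins, not SolrIndexSearcher's real API):

{code}
import java.io.IOException;

import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;

// Hypothetical sketch only -- not the actual Solr code or the eventual patch.
interface DocSetProvider {
  Object getLiveDocSet();                           // cached set of all live (non-deleted) docs, shared and read-only
  Object computeDocSet(Query q) throws IOException; // normal, uncached path
}

class MatchAllShortCircuit {
  static Object getDocSet(DocSetProvider searcher, Query query) throws IOException {
    if (query instanceof MatchAllDocsQuery) {
      // Every document matches, so the cached live-docs set can be returned directly,
      // provided callers know not to modify it.
      return searcher.getLiveDocSet();
    }
    return searcher.computeDocSet(query);
  }
}
{code}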



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10912) Adding automatic patch validation

2018-02-28 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380464#comment-16380464
 ] 

Steve Rowe commented on SOLR-10912:
---

bq. I'm confused what the state is right now. In JIRA I see these new workflow 
states. Do they do anything right now, or not yet pending further TODOs above? 

Not yet.  I'm working on it.

bq. Will this feature support both JIRA attached patch files as well as linked 
GitHub PRs (e.g. as seen here: SOLR-11976) ?

Attached JIRA patch files, yes.  I don't know about GitHub PRs... [~manokovacs] 
?  (Note his todo item, which sounds like the answer is yes for PRs: "Verify 
the test-patch with Github PR")

> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
>Priority: Major
> Attachments: SOLR-10912.ok-patch-in-core.patch, 
> SOLR-10912.sample-patch.patch, SOLR-10912.solj-contrib-facet-error.patch
>
>
> Proposing introduction of automated patch validation, similar to what Hadoop 
> and other Apache projects are using (see link). This would ensure that every 
> patch passes a certain set of criteria before getting approved. It would 
> save time for developers (faster feedback loop), save time for committers 
> (fewer steps to do manually), and would increase quality.
> Hadoop is currently using Apache Yetus to run validations, which seems to be 
> a good direction to start. This jira could be the forum for discussing the 
> preferred solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Parsing exception when passed defType complexphrase vs. local parameters?

2018-02-28 Thread Dawid Weiss
Yeah, this is SOLR-11501 -- I missed this change somehow. I will
update the tests; they assume people could tweak local parameters for
a query parser. Thanks for pointing me at the right issue!

D.

On Wed, Feb 28, 2018 at 3:56 PM, David Smiley  wrote:
> Yeah this is almost certainly SOLR-11501.  I bet you have defType=edismax or
> something.  Essentially, the reason is that if you set defType=whatever
> then your query should be using that query parser and not something embedded
> into 'q' (i.e. the user shouldn't be able to change it).  So either unset
> defType if you would prefer to send 'q' with the query parser in
> local-params, or go the other way and set defType=complexphrase.  If you
> must set local-param only options (e.g. inOrder might be; not sure) then
> you'll have to go with setting it via 'q'.  Does that make sense?
>
> (BTW this is a solr-user list question).
>
> On Wed, Feb 28, 2018 at 9:39 AM Christine Poerschke (BLOOMBERG/ LONDON)
>  wrote:
>>
>> Hi Dawid,
>>
>> The symptoms you mention sound similar to the SOLR-11809 symptoms and the
>> SOLR-11501 changes are probably the 'mysterious' change you might have been
>> looking for?
>>
>> Christine
>>
>> - Original Message -
>> From: dev@lucene.apache.org
>> To: dev@lucene.apache.org
>> Cc: m...@apache.org
>> At: 02/28/18 14:06:05
>>
>> I am in the process of upgrading from Solr 6.x to 7.2.1 and one of the
>> tests does query for:
>>
>> {!complexphrase inOrder=false}"(foo1 foo2) ba*"
>>
>> This works just fine. Another test specifies the query parser using
>> defType=complexphrase and this query no longer parses:
>>
>> {!inOrder=false}"(foo1 foo2) ba*"
>>
>> Resulting in an exception on the server:
>>
>> 2018-02-28 13:55:36.749 ERROR (qtp581374081-50) [   x:proposals]
>> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException:
>> org.apache.solr.search.SyntaxError:
>> org.apache.lucene.queryparser.classic.ParseException: Cannot parse
>> '{!inOrder=false} "(foo1 foo2) ba*"': Encountered " "}" "} "" at line
>> 1, column 15.
>> Was expecting:
>> "TO" ...
>>
>> at
>> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:218)
>> ...
>>
>> Caused by: org.apache.solr.search.SyntaxError:
>> org.apache.lucene.queryparser.classic.ParseException: Cannot parse
>> '{!inOrder=false} "(foo1 foo2) ba*"': Encountered " "}" "} "" at line
>> 1, column 15.
>> Was expecting:
>> "TO" ...
>>
>> at
>> org.apache.solr.search.ComplexPhraseQParserPlugin$ComplexPhraseQParser.parse(ComplexPhraseQParserPlugin.java:166)
>>
>> Anybody care to tell me why there is a difference? The last significant
>> commit to ComplexPhraseQParser was from Mikhail... but how does
>> passing the query parser make a difference here?
>>
>> Dawid
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10912) Adding automatic patch validation

2018-02-28 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380458#comment-16380458
 ] 

David Smiley commented on SOLR-10912:
-

I'm confused what the state is right now.  In JIRA I see these new workflow 
states.  Do they do anything right now, or _not yet_ pending further TODOs 
above?  Will this feature support both JIRA attached patch files as well as 
linked GitHub PRs (e.g. as seen here: SOLR-11976) ?

> Adding automatic patch validation
> -
>
> Key: SOLR-10912
> URL: https://issues.apache.org/jira/browse/SOLR-10912
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Build
>Reporter: Mano Kovacs
>Priority: Major
> Attachments: SOLR-10912.ok-patch-in-core.patch, 
> SOLR-10912.sample-patch.patch, SOLR-10912.solj-contrib-facet-error.patch
>
>
> Proposing introduction of automated patch validation, similar to what Hadoop 
> and other Apache projects are using (see link). This would ensure that every 
> patch passes a certain set of criteria before getting approved. It would 
> save time for developers (faster feedback loop), save time for committers 
> (fewer steps to do manually), and would increase quality.
> Hadoop is currently using Apache Yetus to run validations, which seems to be 
> a good direction to start. This jira could be the forum for discussing the 
> preferred solution.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12043) Add mlt.maxdfpct to Solr's documentation

2018-02-28 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380438#comment-16380438
 ] 

Cassandra Targett commented on SOLR-12043:
--

+1 Dawid, doc addition looks good.

> Add mlt.maxdfpct to Solr's documentation
> 
>
> Key: SOLR-12043
> URL: https://issues.apache.org/jira/browse/SOLR-12043
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Trivial
> Fix For: master (8.0)
>
> Attachments: SOLR-12043.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7821) example films data doesn't work consistently with data-driven schema (schemaless)

2018-02-28 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380434#comment-16380434
 ] 

Cassandra Targett commented on SOLR-7821:
-

[~swapnilmmane] - the screenshot was fixed for SOLR-11893. That will be 
released when Solr 7.3 is released (which should be soon).

> example films data doesn't work consistently with data-driven schema 
> (schemaless)
> -
>
> Key: SOLR-7821
> URL: https://issues.apache.org/jira/browse/SOLR-7821
> Project: Solr
>  Issue Type: Bug
>Reporter: Timothy Potter
>Priority: Major
> Attachments: tutorial-add-field.png
>
>
> On 5.2.1, tried to index the films data into a collection
> {code}
> [~/dev/lw/tools/solr-5.2.1]$ bin/solr -cloud
> Waiting to see Solr listening on port 8983 [/]  
> Started Solr server on port 8983 (pid=98797). Happy searching!
> [~/dev/lw/tools/solr-5.2.1]$ bin/solr create -c gettingstarted -shards 2
> Connecting to ZooKeeper at localhost:9983
> Uploading 
> /Users/timpotter/dev/lw/tools/solr-5.2.1/server/solr/configsets/data_driven_schema_configs/conf
>  for config gettingstarted to ZooKeeper at localhost:9983
> Creating new collection 'gettingstarted' using command:
> http://192.168.1.2:8983/solr/admin/collections?action=CREATE=gettingstarted=2=1=2=gettingstarted
> {
>   "responseHeader":{
> "status":0,
> "QTime":2575},
>   "success":{"":{
>   "responseHeader":{
> "status":0,
> "QTime":2367},
>   "core":"gettingstarted_shard2_replica1"}}}
> [~/dev/lw/tools/solr-5.2.1]$ bin/post -c gettingstarted 
> example/films/films.json
> /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/java 
> -classpath /Users/timpotter/dev/lw/tools/solr-5.2.1/dist/solr-core-5.2.1.jar 
> -Dauto=yes -Dc=gettingstarted -Ddata=files 
> org.apache.solr.util.SimplePostTool example/films/films.json
> SimplePostTool version 5.0.0
> Posting files to [base] url 
> http://localhost:8983/solr/gettingstarted/update...
> Entering auto mode. File endings considered are 
> xml,json,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
> POSTing file films.json (application/json) to [base]
> SimplePostTool: WARNING: Solr returned an error #400 (Bad Request) for url: 
> http://localhost:8983/solr/gettingstarted/update
> SimplePostTool: WARNING: Response: 
> {"responseHeader":{"status":400,"QTime":285},"error":{"msg":"ERROR: 
> [doc=/en/quien_es_el_senor_lopez] Error adding field 'name'='¿Quién es el 
> señor López?' msg=For input string: \"¿Quién es el señor 
> López?\"","code":400}}
> SimplePostTool: WARNING: IOException while reading response: 
> java.io.IOException: Server returned HTTP response code: 400 for URL: 
> http://localhost:8983/solr/gettingstarted/update
> 1 files indexed.
> COMMITting Solr index changes to 
> http://localhost:8983/solr/gettingstarted/update...
> Time spent: 0:00:00.370
> {code}
> In the solr.log, I see:
> {code}
> ERROR - 2015-07-22 21:54:36.395; [gettingstarted shard2 core_node1 
> gettingstarted_shard2_replica1] org.apache.solr.common.SolrException; 
> org.apache.solr.common.SolrException: ERROR: 
> [doc=/en/quien_es_el_senor_lopez] Error adding field 'name'='¿Quién es el 
> señor López?' msg=For input string: "¿Quién es el señor López?"
> at 
> org.apache.solr.update.DocumentBuilder.toDocument(DocumentBuilder.java:176)
> at 
> org.apache.solr.update.AddUpdateCommand.getLuceneDocument(AddUpdateCommand.java:83)
> at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:237)
> at 
> org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:163)
> at 
> org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:328)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:117)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
> at 
> 

Re: BinaryDocValues prefix bytes

2018-02-28 Thread David Smiley
This can't be; it must be a bug.  Perhaps you are saving away the BytesRef
by reference across multiple invocations?  That won't work; you may have to
clone/copy it.
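
For illustration, a minimal sketch of that copy (the field name is a placeholder,
and it uses the pre-7.x BinaryDocValues.get(int) accessor from the original mail):

import java.io.IOException;
import org.apache.lucene.index.BinaryDocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.util.BytesRef;

class BinaryDocValuesCopyExample {
  // Copies the (possibly reused) BytesRef before keeping it across invocations.
  static byte[] readValue(LeafReader reader, String field, int docID) throws IOException {
    BinaryDocValues dv = reader.getBinaryDocValues(field); // may be null if the field has no values
    BytesRef shared = dv.get(docID);              // may point into a buffer the codec reuses
    BytesRef copy = BytesRef.deepCopyOf(shared);  // take a private copy first
    byte[] out = new byte[copy.length];
    System.arraycopy(copy.bytes, copy.offset, out, 0, copy.length);
    return out;
  }
}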

On Wed, Feb 28, 2018 at 9:53 AM Dominik Safaric 
wrote:

> Hi,
>
> I'm having an index where I'm storing a binary doc value being equal to a
> serialized 8 byte value. The values are consumed by a custom Query
> implementation, using LeafReader.getBinaryDocValues().
>
> However, what I found is the following. To each binary doc value returned
> by BinaryDocValues.get(docID), a sequence of two bytes is appended. In
> particular, at the first position it is always a byte equal to 1, whereas
> at the second position always a byte equal to 8. Hence, the length of the
> retrieved byte array is always equal to 10, and not 8 as stored.
>
> Could someone please explain why these bytes are being appended at the
> head of the array, where they are appended, and how to get the
> original value?
>
> Kind regards,
> Dominik
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Comment Edited] (LUCENE-8159) Add a copy constructor in AutomatonQuery to copy directly the compiled automaton

2018-02-28 Thread Bruno Roustant (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380407#comment-16380407
 ] 

Bruno Roustant edited comment on LUCENE-8159 at 2/28/18 2:58 PM:
-

[~rcmuir] could you be a little more explicit?

Without context I don't understand why a copy constructor is bad in Java in 
general.

Do you mean you prefer a copy method?

PrefixQuery copy(String field)
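
For reference, a rough sketch of the reuse being discussed (illustrative only; the existing 
AutomatonQuery(Term, Automaton) constructor shown here still recompiles the automaton 
internally, which is exactly the cost a copy constructor or copy method would avoid):

{code}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.AutomatonQuery;
import org.apache.lucene.util.automaton.Automaton;

// Sketch only: build per-field queries from one shared Automaton.
// With the current public constructor the automaton is compiled again per query;
// the proposed copy constructor / copy(String field) would reuse the compiled form.
class PerFieldAutomatonQueries {
  static AutomatonQuery forField(String field, Automaton automaton) {
    return new AutomatonQuery(new Term(field, ""), automaton);
  }
}
{code}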


was (Author: bruno.roustant):
[~rcmuir] could you be a little more explicit?

Without context I don't understand why a copy constructor is bad in Java in 
general.

> Add a copy constructor in AutomatonQuery to copy directly the compiled 
> automaton
> 
>
> Key: LUCENE-8159
> URL: https://issues.apache.org/jira/browse/LUCENE-8159
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: trunk
>Reporter: Bruno Roustant
>Assignee: David Smiley
>Priority: Major
> Attachments: 
> 0001-Add-a-copy-constructor-in-AutomatonQuery-to-copy-dir.patch, 
> LUCENE-8159.patch
>
>
> When the query is composed of multiple AutomatonQuery with the same automaton 
> and which target different fields, it is much more efficient to reuse the 
> already compiled automaton by copying it directly and just changing the 
> target field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21552 - Still unstable!

2018-02-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21552/
Java: 32bit/jdk1.8.0_162 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([13EBBD17E521ECA0:EB3528FF9D4D3DF3]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:904)
at 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion(SegmentsInfoRequestHandlerTest.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=2=count(//lst[@name='segments']/lst/str[@name='version'][.='8.0.0'])
xml response was: 


Re: Parsing exception when passed defType complexphrase vs. local parameters?

2018-02-28 Thread David Smiley
Yeah this is almost certainly SOLR-11501.  I bet you have defType=edismax
or something.  Essentially, the reason is that if you set defType=whatever
then your query should be using that query parser and not something
embedded into 'q' (i.e. the user shouldn't be able to change it).  So
either unset defType if you would prefer to send 'q' with the query parser
in local-params, or go the other way and set defType=complexphrase.  If you
must set local-param only options (e.g. inOrder might be; not sure) then
you'll have to go with setting it via 'q'.  Does that make sense?

(BTW this is a solr-user list question).
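
As a rough SolrJ illustration of the two shapes (collection setup omitted; whether
inOrder is honoured as a plain request parameter is an assumption here, per the
caveat above):

import org.apache.solr.client.solrj.SolrQuery;

class ComplexPhraseRequestShapes {
  // Option 1: no defType; the parser is chosen via local params inside q.
  static SolrQuery viaLocalParams() {
    return new SolrQuery("{!complexphrase inOrder=false}\"(foo1 foo2) ba*\"");
  }

  // Option 2: defType selects the parser and q holds only the query text.
  static SolrQuery viaDefType() {
    SolrQuery q = new SolrQuery("\"(foo1 foo2) ba*\"");
    q.set("defType", "complexphrase");
    q.set("inOrder", "false"); // assumption: may only work as a local param, as noted above
    return q;
  }
}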

On Wed, Feb 28, 2018 at 9:39 AM Christine Poerschke (BLOOMBERG/ LONDON) <
cpoersc...@bloomberg.net> wrote:

> Hi Dawid,
>
> The symptoms you mention sound similar to the SOLR-11809 symptoms and the
> SOLR-11501 changes are probably the 'mysterious' change you might have been
> looking for?
>
> Christine
>
> - Original Message -
> From: dev@lucene.apache.org
> To: dev@lucene.apache.org
> Cc: m...@apache.org
> At: 02/28/18 14:06:05
>
> I am in the process of upgrading from Solr 6.x to 7.2.1 and one of the
> tests does query for:
>
> {!complexphrase inOrder=false}"(foo1 foo2) ba*"
>
> This works just fine. Another test specifies the query parser using
> defType=complexphrase and this query no longer parses:
>
> {!inOrder=false}"(foo1 foo2) ba*"
>
> Resulting in an exception on the server:
>
> 2018-02-28 13:55:36.749 ERROR (qtp581374081-50) [   x:proposals]
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException:
> org.apache.solr.search.SyntaxError:
> org.apache.lucene.queryparser.classic.ParseException: Cannot parse
> '{!inOrder=false} "(foo1 foo2) ba*"': Encountered " "}" "} "" at line
> 1, column 15.
> Was expecting:
> "TO" ...
>
> at
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:218)
> ...
>
> Caused by: org.apache.solr.search.SyntaxError:
> org.apache.lucene.queryparser.classic.ParseException: Cannot parse
> '{!inOrder=false} "(foo1 foo2) ba*"': Encountered " "}" "} "" at line
> 1, column 15.
> Was expecting:
> "TO" ...
>
> at
> org.apache.solr.search.ComplexPhraseQParserPlugin$ComplexPhraseQParser.parse(ComplexPhraseQParserPlugin.java:166)
>
> Anybody care to tell me why there is a difference? The last significant
> commit to ComplexPhraseQParser was from Mikhail... but how does
> passing the query parser make a difference here?
>
> Dawid
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


BinaryDocValues prefix bytes

2018-02-28 Thread Dominik Safaric
Hi,

I'm having an index where I'm storing a binary doc value being equal to a
serialized 8 byte value. The values are consumed by a custom Query
implementation, using LeafReader.getBinaryDocValues().

However, what I found is the following. To each binary doc value returned
by BinaryDocValues.get(docID), a sequence of two bytes is appended. In
particular, at the first position it is always a byte equal to 1, whereas
at the second position always a byte equal to 8. Hence, the length of the
retrieved byte array is always equal to 10, and not 8 as stored.

Could someone please explain why these bytes are being appended at the head
of the array, where they are appended, and how to get the original
value?

Kind regards,
Dominik


[jira] [Commented] (LUCENE-8159) Add a copy constructor in AutomatonQuery to copy directly the compiled automaton

2018-02-28 Thread Bruno Roustant (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16380407#comment-16380407
 ] 

Bruno Roustant commented on LUCENE-8159:


[~rcmuir] could you be a little more explicit?

Without context I don't understand why a copy constructor is bad in Java in 
general.

> Add a copy constructor in AutomatonQuery to copy directly the compiled 
> automaton
> 
>
> Key: LUCENE-8159
> URL: https://issues.apache.org/jira/browse/LUCENE-8159
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: trunk
>Reporter: Bruno Roustant
>Assignee: David Smiley
>Priority: Major
> Attachments: 
> 0001-Add-a-copy-constructor-in-AutomatonQuery-to-copy-dir.patch, 
> LUCENE-8159.patch
>
>
> When the query is composed of multiple AutomatonQuery with the same automaton 
> and which target different fields, it is much more efficient to reuse the 
> already compiled automaton by copying it directly and just changing the 
> target field.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 162 - Still Unstable

2018-02-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/162/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1490/consoleText

[repro] Revision: 7dba350c7a02fe603faec49227ff2672e4d8e6ae

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
-Dtests.method=test -Dtests.seed=2378B1F738C0E74C -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=mk -Dtests.timezone=America/Miquelon -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
-Dtests.seed=2378B1F738C0E74C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=mk -Dtests.timezone=America/Miquelon -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
7dba350c7a02fe603faec49227ff2672e4d8e6ae
[repro] git fetch
[repro] git checkout 7dba350c7a02fe603faec49227ff2672e4d8e6ae

[...truncated 1 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsRestartWhileUpdatingTest
[repro] ant compile-test

[...truncated 3292 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.HdfsRestartWhileUpdatingTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=2378B1F738C0E74C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=mk -Dtests.timezone=America/Miquelon -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 226068 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest
[repro] git checkout 7dba350c7a02fe603faec49227ff2672e4d8e6ae

[...truncated 1 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

Re:Parsing exception when passed defType complexphrase vs. local parameters?

2018-02-28 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Hi Dawid,

The symptoms you mention sound similar to the SOLR-11809 symptoms and the 
SOLR-11501 changes are probably the 'mysterious' change you might have been 
looking for?

Christine

- Original Message -
From: dev@lucene.apache.org
To: dev@lucene.apache.org
Cc: m...@apache.org
At: 02/28/18 14:06:05

I am in the process of upgrading from Solr 6.x to 7.2.1 and one of the
tests does query for:

{!complexphrase inOrder=false}"(foo1 foo2) ba*"

This works just fine. Another test specifies the query parser using
defType=complexphrase and this query no longer parses:

{!inOrder=false}"(foo1 foo2) ba*"

Resulting in an exception on the server:

2018-02-28 13:55:36.749 ERROR (qtp581374081-50) [   x:proposals]
o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException:
org.apache.solr.search.SyntaxError:
org.apache.lucene.queryparser.classic.ParseException: Cannot parse
'{!inOrder=false} "(foo1 foo2) ba*"': Encountered " "}" "} "" at line
1, column 15.
Was expecting:
"TO" ...

at 
org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:218)
...

Caused by: org.apache.solr.search.SyntaxError:
org.apache.lucene.queryparser.classic.ParseException: Cannot parse
'{!inOrder=false} "(foo1 foo2) ba*"': Encountered " "}" "} "" at line
1, column 15.
Was expecting:
"TO" ...

at 
org.apache.solr.search.ComplexPhraseQParserPlugin$ComplexPhraseQParser.parse(ComplexPhraseQParserPlugin.java:166)

Anybody care to tell me why there is a difference? The last significant
commit to ComplexPhraseQParser was from Mikhail... but how does
passing the query parser make a difference here?

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



