[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-06 Thread Ishan Chattopadhyaya (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281462#comment-16281462
 ] 

Ishan Chattopadhyaya commented on SOLR-11331:
-

What does your patch provide that is not already supported with "ant eclipse"?

> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
> Attachments: SOLR-11331.patch
>
>
> Ability to Debug Solr With Eclipse IDE



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.2-Linux (64bit/jdk-9.0.1) - Build # 23 - Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/23/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

10 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream

Error Message:
Could not find collection:mainCorpus1

Stack Trace:
java.lang.AssertionError: Could not find collection:mainCorpus1
at 
__randomizedtesting.SeedInfo.seed([F5ED9270A07E2F40:48FAE7699952121D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.client.solrj.io.stream.StreamExpressionTest.testParallelExecutorStream(StreamExpressionTest.java:8466)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  

[JENKINS] Lucene-Solr-7.2-Windows (64bit/jdk1.8.0_144) - Build # 4 - Still Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/4/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.analysis.snowball.TestSnowballVocab

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_ADECBEE518E5E152-001\tempDir-006\italian:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_ADECBEE518E5E152-001\tempDir-006\italian

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_ADECBEE518E5E152-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_ADECBEE518E5E152-001\tempDir-006
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_ADECBEE518E5E152-001\tempDir-006\italian:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_ADECBEE518E5E152-001\tempDir-006\italian
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_ADECBEE518E5E152-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\analysis\common\test\J0\temp\lucene.analysis.snowball.TestSnowballVocab_ADECBEE518E5E152-001\tempDir-006

at __randomizedtesting.SeedInfo.seed([ADECBEE518E5E152]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.store.TestMockDirectoryWrapper

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\test-framework\test\J1\temp\lucene.store.TestMockDirectoryWrapper_FADEB7CB7690F317-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\test-framework\test\J1\temp\lucene.store.TestMockDirectoryWrapper_FADEB7CB7690F317-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\test-framework\test\J1\temp\lucene.store.TestMockDirectoryWrapper_FADEB7CB7690F317-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\test-framework\test\J1\temp\lucene.store.TestMockDirectoryWrapper_FADEB7CB7690F317-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\test-framework\test\J1\temp\lucene.store.TestMockDirectoryWrapper_FADEB7CB7690F317-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\test-framework\test\J1\temp\lucene.store.TestMockDirectoryWrapper_FADEB7CB7690F317-001\tempDir-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\test-framework\test\J1\temp\lucene.store.TestMockDirectoryWrapper_FADEB7CB7690F317-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\test-framework\test\J1\temp\lucene.store.TestMockDirectoryWrapper_FADEB7CB7690F317-001

at 

[jira] [Updated] (SOLR-11691) v2 api for CREATEALIAS fails if given a list with more than one element

2017-12-06 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11691:
--
Attachment: SOLR-11691.patch

Untested patch; isn't this better?

> v2 api for CREATEALIAS fails if given a list with more than one element
> ---
>
> Key: SOLR-11691
> URL: https://issues.apache.org/jira/browse/SOLR-11691
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
> Attachments: SOLR-11691.patch, SOLR-11691.patch, repro.sh
>
>
> Successful, correct:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1"]
>   }
> }
> {code}
> Successful, but wrong:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1,collection2"]
>   }
> }
> {code}
> Fails, but should work based on details in _introspect:
> {code}
> {
>   "create-alias" : {
> "name": "testalias2",
> "collections":["collection1","collection2"]
>   }
> }
> {code}
> The error returned is:
> {code}
> {
> "responseHeader": {
> "status": 400,
> "QTime": 25
> },
> "Operation createalias caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Can't create collection alias for collections='[collection1, collection2]', 
> '[collection1' is not an existing collection or alias",
> "exception": {
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "rspCode": 400
> },
> "error": {
> "metadata": [
> "error-class",
> "org.apache.solr.common.SolrException",
> "root-error-class",
> "org.apache.solr.common.SolrException"
> ],
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "code": 400
> }
> }
> {code}
> whereas 
> {code}
> GET localhost:8981/api/c
> {code}
> yields
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "collections": [
> "collection2",
> "collection1"
> ]
> }
> {code}
> Introspection shows:
> {code}
>  "collections": {
>  "type": "array",
>  "description": "The list of collections to be known as this alias.",
>   "items": {
>   "type": "string"
>}
>   },
> {code}
> Basically the property is documented as an array, but parsed as a string (I 
> suspect it's parsed as a list but then the toString value of the list is 
> used, but haven't checked). We have a conflict between what is natural for 
> expressing a list in JSON (an array) and what is natural for expressing a 
> list as a parameter (comma separation). I'm unsure how best to resolve this, 
> as it's a question of making "direct translation" to v2 work vs making v2 
> more natural. I tend to favor accepting an array and therefore making v2 more 
> natural which would be more work, but want to know what others think. From a 
> back compatibility perspective, that direction also makes this clearly a bug 
> fix rather than a breaking change since it doesn't match the _introspect 
> documentation. I also haven't tried looking at old versions to find any 
> evidence as to whether the documented form worked previously... so I don't 
> know if this is a regression or if it never worked.
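The suspected failure mode described above ("parsed as a list but then the toString value of the list is used") can be reproduced in isolation. This is an editorial sketch of that hypothesis, not traced through Solr's actual code: if the JSON array is first parsed into a List whose toString() is then treated as a v1-style comma-separated parameter, splitting on commas yields exactly the mangled token seen in the error message.

```java
import java.util.Arrays;
import java.util.List;

public class AliasToStringBug {
    public static void main(String[] args) {
        List<String> collections = Arrays.asList("collection1", "collection2");
        // List.toString() produces "[collection1, collection2]"
        String param = collections.toString();
        // Splitting that string on commas, as a v1-style parameter parser would:
        String[] names = param.split(",");
        // names[0] is "[collection1" -- the exact token reported as
        // "not an existing collection or alias" in the error above.
        System.out.println(names[0]);
    }
}
```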






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 951 - Still Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/951/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

3 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
http://127.0.0.1:45189/solr/awhollynewcollection_0_shard3_replica_n4: 
ClusterState says we are the leader 
(http://127.0.0.1:45189/solr/awhollynewcollection_0_shard3_replica_n4), but 
locally we don't think so. Request came from null

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at http://127.0.0.1:45189/solr/awhollynewcollection_0_shard3_replica_n4: 
ClusterState says we are the leader 
(http://127.0.0.1:45189/solr/awhollynewcollection_0_shard3_replica_n4), but 
locally we don't think so. Request came from null
at 
__randomizedtesting.SeedInfo.seed([5941FF6AF6CC5B7A:11348BDEF0FF74EF]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:549)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1012)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:945)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCollectionsAPI(CollectionsAPIDistributedZkTest.java:459)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11691) v2 api for CREATEALIAS fails if given a list with more than one element

2017-12-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281340#comment-16281340
 ] 

David Smiley commented on SOLR-11691:
-

Thanks [~gus_heck] for the detailed bug report and thanks [~gerlowskija] for 
contributing a solution and reproducibility script!  

I guess I'm okay with the overall approach of having CreateAliasCmd do this 
JSON parsing although it feels as though it should be on the v2 side somehow 
(which I am not familiar with so can't point you at a particular class).  
[~noble.paul] do you have an opinion?

Assuming we stay the course... can we make this patch detect that it's JSON and 
if so parse it properly?  See Utils.fromJSON etc.  Your solution of stripping 
the brackets and quotes is a bit hokey.  And I'm not sure why you used 
commons-lang3 to call StringUtils when Solr has equivalents in StrUtils.
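The detection idea suggested above can be sketched as follows. This is hypothetical illustrative code, not Solr's actual CreateAliasCmd: if the incoming value looks like a JSON array, extract its string elements (Solr itself could delegate to Utils.fromJSON here rather than the hand-rolled regex used below for self-containment); otherwise treat it as a v1-style comma-separated list.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CollectionsValueParser {
    static List<String> parseCollections(String raw) {
        String s = raw.trim();
        if (s.startsWith("[") && s.endsWith("]")) {
            // JSON-array form: collect the double-quoted elements.
            List<String> out = new ArrayList<>();
            Matcher m = Pattern.compile("\"([^\"]*)\"").matcher(s);
            while (m.find()) {
                out.add(m.group(1));
            }
            return out;
        }
        // Plain comma-separated form, trimming surrounding whitespace.
        return Arrays.asList(s.split("\\s*,\\s*"));
    }

    public static void main(String[] args) {
        System.out.println(parseCollections("[\"collection1\",\"collection2\"]"));
        System.out.println(parseCollections("collection1, collection2"));
    }
}
```

Both inputs yield the same two collection names, so either request form would resolve to existing collections.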


> v2 api for CREATEALIAS fails if given a list with more than one element
> ---
>
> Key: SOLR-11691
> URL: https://issues.apache.org/jira/browse/SOLR-11691
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
> Attachments: SOLR-11691.patch, repro.sh
>
>
> Successful, correct:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1"]
>   }
> }
> {code}
> Successful, but wrong:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1,collection2"]
>   }
> }
> {code}
> Fails, but should work based on details in _introspect:
> {code}
> {
>   "create-alias" : {
> "name": "testalias2",
> "collections":["collection1","collection2"]
>   }
> }
> {code}
> The error returned is:
> {code}
> {
> "responseHeader": {
> "status": 400,
> "QTime": 25
> },
> "Operation createalias caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Can't create collection alias for collections='[collection1, collection2]', 
> '[collection1' is not an existing collection or alias",
> "exception": {
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "rspCode": 400
> },
> "error": {
> "metadata": [
> "error-class",
> "org.apache.solr.common.SolrException",
> "root-error-class",
> "org.apache.solr.common.SolrException"
> ],
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "code": 400
> }
> }
> {code}
> whereas 
> {code}
> GET localhost:8981/api/c
> {code}
> yields
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "collections": [
> "collection2",
> "collection1"
> ]
> }
> {code}
> Introspection shows:
> {code}
>  "collections": {
>  "type": "array",
>  "description": "The list of collections to be known as this alias.",
>   "items": {
>   "type": "string"
>}
>   },
> {code}
> Basically the property is documented as an array, but parsed as a string (I 
> suspect it's parsed as a list but then the toString value of the list is 
> used, but haven't checked). We have a conflict between what is natural for 
> expressing a list in JSON (an array) and what is natural for expressing a 
> list as a parameter (comma separation). I'm unsure how best to resolve this, 
> as it's a question of making "direct translation" to v2 work vs making v2 
> more natural. I tend to favor accepting an array and therefore making v2 more 
> natural which would be more work, but want to know what others think. From a 
> back compatibility perspective, that direction also makes this clearly a bug 
> fix rather than a breaking change since it doesn't match the _introspect 
> documentation. I also haven't tried looking at old versions to find any 
> evidence as to whether the documented form worked previously... so I don't 
> know if this is a regression or if it never worked.






[jira] [Updated] (LUCENE-8080) GeoExactCircle improvement

2017-12-06 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8080:
-
Attachment: LUCENE-8080.patch

Sorry about that; I regenerated the patch.

> GeoExactCircle improvement
> --
>
> Key: LUCENE-8080
> URL: https://issues.apache.org/jira/browse/LUCENE-8080
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-8080-test.patch, LUCENE-8080.patch
>
>
> Hi [~daddywri],
> The current implementation of GeoExactCircle seems to work well for planet 
> models with low flattening (~|0.025|). When flattening increases, shapes start 
> becoming invalid because of the cutting angle of the circle plane, which 
> results in the center of the circle ending up on the wrong side of the plane. 
> I propose a new version of GeoExactCircle that tries to overcome this problem 
> by creating a new plane for a circle sector in such cases. The new plane is 
> built for each sector when needed, using two points from the circle edge and 
> the center of the world. The plane is built as close as possible to the 
> circle plane of the sector; points from the circle plane must not be within 
> the new plane, and the center of the circle must be within the plane.
> This approach seems to work well for planets with flattening up to around 
> ~|0.1|. I think beyond that the cutting angles of circle planes can be so 
> thin that the approach is not valid.
> Therefore I propose to add this new approach and limit the creation of such 
> circles to planet models with flattening lower than |0.1|. That limitation 
> probably does not affect most realistic cases.
> In addition, this new version forces a minimum of 4 sectors in a circle. The 
> issue in LUCENE-8071 came up again for circles of any radius, so we should 
> enforce it for all circles.
> Thanks!
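As an editorial aside (not part of the patch or the original message), the flattening values quoted above can be read against the standard definition of flattening for an oblate spheroid:

```latex
% Flattening of a spheroid with equatorial radius a and polar radius b:
f = \frac{a - b}{a}
% For Earth (WGS84): f \approx 1/298.257 \approx 0.00335,
% comfortably inside the ~|0.025| regime the current implementation handles;
% the proposed |0.1| limit covers far more extreme planet models.
```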






[jira] [Updated] (LUCENE-8080) GeoExactCircle improvement

2017-12-06 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera updated LUCENE-8080:
-
Attachment: (was: LUCENE-8080.patch)

> GeoExactCircle improvement
> --
>
> Key: LUCENE-8080
> URL: https://issues.apache.org/jira/browse/LUCENE-8080
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-8080-test.patch, LUCENE-8080.patch
>
>
> Hi [~daddywri],
> The current implementation of GeoExactCircle seems to work well for planet 
> models with low flattening (~|0.025|). When flattening increases, shapes start 
> becoming invalid because of the cutting angle of the circle plane, which 
> results in the center of the circle ending up on the wrong side of the plane. 
> I propose a new version of GeoExactCircle that tries to overcome this problem 
> by creating a new plane for a circle sector in such cases. The new plane is 
> built for each sector when needed, using two points from the circle edge and 
> the center of the world. The plane is built as close as possible to the 
> circle plane of the sector; points from the circle plane must not be within 
> the new plane, and the center of the circle must be within the plane.
> This approach seems to work well for planets with flattening up to around 
> ~|0.1|. I think beyond that the cutting angles of circle planes can be so 
> thin that the approach is not valid.
> Therefore I propose to add this new approach and limit the creation of such 
> circles to planet models with flattening lower than |0.1|. That limitation 
> probably does not affect most realistic cases.
> In addition, this new version forces a minimum of 4 sectors in a circle. The 
> issue in LUCENE-8071 came up again for circles of any radius, so we should 
> enforce it for all circles.
> Thanks!






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1434 - Still unstable

2017-12-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1434/

6 tests failed.
FAILED:  org.apache.lucene.spatial3d.TestGeo3DPoint.testRandomBig

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([280F7BA770A07602]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.spatial3d.TestGeo3DPoint

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([280F7BA770A07602]:0)


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=8519, name=Thread-1698, 
state=RUNNABLE, group=TGRP-FullSolrCloudDistribCmdsTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=8519, name=Thread-1698, state=RUNNABLE, 
group=TGRP-FullSolrCloudDistribCmdsTest]
Caused by: java.lang.RuntimeException: 
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:38282/bt/y/collection2_shard2_replica_n6
at __randomizedtesting.SeedInfo.seed([41CB6886B48C3537]:0)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:636)
Caused by: org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: 
http://127.0.0.1:38282/bt/y/collection2_shard2_replica_n6
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:565)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1012)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest$1IndexThread.run(FullSolrCloudDistribCmdsTest.java:633)
Caused by: org.apache.solr.client.solrj.SolrServerException: Timeout occured 
while waiting response from server at: 
http://127.0.0.1:38282/bt/y/collection2_shard2_replica_n6
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:559)
... 6 more
Caused by: java.net.SocketTimeoutException: Read timed out
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:171)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at 

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_144) - Build # 7043 - Still Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7043/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_4B85842DFE00486E-001\4.7.1-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_4B85842DFE00486E-001\4.7.1-cfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_4B85842DFE00486E-001\4.7.1-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J1\temp\lucene.index.TestBackwardsCompatibility_4B85842DFE00486E-001\4.7.1-cfs-001

at __randomizedtesting.SeedInfo.seed([4B85842DFE00486E]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.FacetPivotSmallTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.FacetPivotSmallTest_2A6D2C8D76307B9A-001\init-core-data-001\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.FacetPivotSmallTest_2A6D2C8D76307B9A-001\init-core-data-001\tlog

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.FacetPivotSmallTest_2A6D2C8D76307B9A-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.FacetPivotSmallTest_2A6D2C8D76307B9A-001\init-core-data-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.FacetPivotSmallTest_2A6D2C8D76307B9A-001\init-core-data-001\tlog:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.FacetPivotSmallTest_2A6D2C8D76307B9A-001\init-core-data-001\tlog
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.FacetPivotSmallTest_2A6D2C8D76307B9A-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.FacetPivotSmallTest_2A6D2C8D76307B9A-001\init-core-data-001

at __randomizedtesting.SeedInfo.seed([2A6D2C8D76307B9A]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Assigned] (SOLR-11691) v2 api for CREATEALIAS fails if given a list with more than one element

2017-12-06 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-11691:
---

Assignee: David Smiley

> v2 api for CREATEALIAS fails if given a list with more than one element
> ---
>
> Key: SOLR-11691
> URL: https://issues.apache.org/jira/browse/SOLR-11691
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Affects Versions: master (8.0)
>Reporter: Gus Heck
>Assignee: David Smiley
> Attachments: SOLR-11691.patch, repro.sh
>
>
> Successful, correct:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1"]
>   }
> }
> {code}
> Successful, but wrong:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1,collection2"]
>   }
> }
> {code}
> Fails, but should work based on details in _introspect:
> {code}
> {
>   "create-alias" : {
> "name": "testalias2",
> "collections":["collection1","collection2"]
>   }
> }
> {code}
> The error returned is:
> {code}
> {
> "responseHeader": {
> "status": 400,
> "QTime": 25
> },
> "Operation createalias caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Can't create collection alias for collections='[collection1, collection2]', 
> '[collection1' is not an existing collection or alias",
> "exception": {
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "rspCode": 400
> },
> "error": {
> "metadata": [
> "error-class",
> "org.apache.solr.common.SolrException",
> "root-error-class",
> "org.apache.solr.common.SolrException"
> ],
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "code": 400
> }
> }
> {code}
> whereas 
> {code}
> GET localhost:8981/api/c
> {code}
> yields
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "collections": [
> "collection2",
> "collection1"
> ]
> }
> {code}
> Introspection shows:
> {code}
>  "collections": {
>  "type": "array",
>  "description": "The list of collections to be known as this alias.",
>   "items": {
>   "type": "string"
>}
>   },
> {code}
> Basically the property is documented as an array, but parsed as a string (I 
> suspect it's parsed as a list but then the toString value of the list is 
> used, but haven't checked). We have a conflict between what is natural for 
> expressing a list in JSON (an array) and what is natural for expressing a 
> list as a parameter (comma separation). I'm unsure how best to resolve this, 
> as it's a question of making "direct translation" to v2 work vs making v2 
> more natural. I tend to favor accepting an array and therefore making v2 more 
> natural which would be more work, but want to know what others think. From a 
> back compatibility perspective, that direction also makes this clearly a bug 
> fix rather than a breaking change since it doesn't match the _introspect 
> documentation. I also haven't tried looking at old versions to find any 
> evidence as to whether the documented form worked previously... so I don't 
> know if this is a regression or if it never worked.
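The mismatch described above (an array documented in _introspect but effectively consumed as a single string) can be sketched as follows. This is a hypothetical toy model, not Solr's actual parsing code; the method names are illustrative. It reproduces the error message's tell-tale `[collection1` token, which is what a list's `toString()` would yield if handed to a comma-splitting code path.

```java
import java.util.Arrays;
import java.util.List;

public class AliasParamSketch {
    // Suspected bug: the JSON array is stringified via List.toString(), so a
    // comma-splitting v1-style code path sees "[collection1, collection2]"
    // and treats "[collection1" as a (nonexistent) collection name.
    static String[] buggy(List<String> collections) {
        String joined = collections.toString();  // "[collection1, collection2]"
        return joined.split(",\\s*");            // ["[collection1", "collection2]"]
    }

    // What the v2 handler presumably needs to do instead: join the array
    // elements with commas before reusing the comma-separated code path.
    static String[] fixed(List<String> collections) {
        String joined = String.join(",", collections); // "collection1,collection2"
        return joined.split(",");
    }

    public static void main(String[] args) {
        List<String> cols = Arrays.asList("collection1", "collection2");
        System.out.println(Arrays.toString(buggy(cols)));
        System.out.println(Arrays.toString(fixed(cols)));
    }
}
```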



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-10-ea+32) - Build # 21045 - Failure!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21045/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.DistribJoinFromCollectionTest

Error Message:
Error from server at http://127.0.0.1:39179/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:39179/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([7E5006EEDD43AF61]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.DistribJoinFromCollectionTest.setupCluster(DistribJoinFromCollectionTest.java:88)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 12218 lines...]
   [junit4] Suite: org.apache.solr.cloud.DistribJoinFromCollectionTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.DistribJoinFromCollectionTest_7E5006EEDD43AF61-001/init-core-data-001
   [junit4]   2> 1094230 WARN  
(SUITE-DistribJoinFromCollectionTest-seed#[7E5006EEDD43AF61]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=15 numCloses=15
   [junit4]   2> 1094230 INFO  
(SUITE-DistribJoinFromCollectionTest-seed#[7E5006EEDD43AF61]-worker) [] 
o.a.s.SolrTestCaseJ4 Using TrieFields (NUMERIC_POINTS_SYSPROP=false) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 1094231 INFO  
(SUITE-DistribJoinFromCollectionTest-seed#[7E5006EEDD43AF61]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 

[JENKINS] Lucene-Solr-Tests-master - Build # 2209 - Still unstable

2017-12-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2209/

8 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testSearchRate

Error Message:
{srt=[TestEvent{timestamp=151260766395300, config={   
"trigger":"search_rate_trigger",   "afterAction":[],   "stage":[ "FAILED",  
   "SUCCEEDED"],   
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
   "beforeAction":[]}, stage=SUCCEEDED, actionName='null', event={   
"id":"14fddca67856f000T7tbbg5gq58duo66x9cjskvxge",   
"source":"search_rate_trigger",   "eventTime":151260765740800,   
"eventType":"SEARCHRATE",   "properties":{ "node":{   
"127.0.0.1:34133_solr":1.6470850586359402,   
"127.0.0.1:50633_solr":1.4392005366721807}, "replica":[   
"{\"core_node4\":{\n\"base_url\":\"https://127.0.0.1:34133/solr\",\n
\"node_name\":\"127.0.0.1:34133_solr\",\n
\"core\":\"collection1_shard1_replica_n3\",\n\"state\":\"active\",\n
\"type\":\"NRT\",\n\"rate\":1.6470850586359402}}",   
"{\"core_node2\":{\n\"core\":\"collection1_shard1_replica_n1\",\n
\"leader\":\"true\",\n\"rate\":1.4392005366721807,\n
\"base_url\":\"https://127.0.0.1:50633/solr\",\n
\"node_name\":\"127.0.0.1:50633_solr\",\n\"state\":\"active\",\n
\"type\":\"NRT\"}}"], "collection":{"collection1":3.086285595308121}, 
"shard":{"collection1":{"shard1":3.086285595308121}}, 
"_enqueue_time_":151260766355900}}, message='null'}, 
TestEvent{timestamp=151260766938600, config={   
"trigger":"search_rate_trigger",   "afterAction":[],   "stage":[ "FAILED",  
   "SUCCEEDED"],   
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
   "beforeAction":[]}, stage=SUCCEEDED, actionName='null', event={   
"id":"14fddca7e2330bc0T7tbbg5gq58duo66x9cjskvxgi",   
"source":"search_rate_trigger",   "eventTime":151260766347900,   
"eventType":"SEARCHRATE",   "properties":{ "node":{   
"127.0.0.1:34133_solr":3.930050086811845,   
"127.0.0.1:50633_solr":3.8187426788618777}, "replica":[   
"{\"core_node4\":{\n\"base_url\":\"https://127.0.0.1:34133/solr\",\n
\"node_name\":\"127.0.0.1:34133_solr\",\n
\"core\":\"collection1_shard1_replica_n3\",\n\"state\":\"active\",\n
\"type\":\"NRT\",\n\"rate\":3.930050086811845}}",   
"{\"core_node2\":{\n\"core\":\"collection1_shard1_replica_n1\",\n
\"leader\":\"true\",\n\"rate\":3.8187426788618777,\n
\"base_url\":\"https://127.0.0.1:50633/solr\",\n
\"node_name\":\"127.0.0.1:50633_solr\",\n\"state\":\"active\",\n
\"type\":\"NRT\"}}"], "collection":{"collection1":7.7487927656737225}, 
"shard":{"collection1":{"shard1":7.7487927656737225}}, 
"_enqueue_time_":151260766938000}}, message='null'}]} expected:<1> but 
was:<2>

Stack Trace:
java.lang.AssertionError: {srt=[TestEvent{timestamp=151260766395300, 
config={
  "trigger":"search_rate_trigger",
  "afterAction":[],
  "stage":[
"FAILED",
"SUCCEEDED"],
  
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
  "beforeAction":[]}, stage=SUCCEEDED, actionName='null', event={
  "id":"14fddca67856f000T7tbbg5gq58duo66x9cjskvxge",
  "source":"search_rate_trigger",
  "eventTime":151260765740800,
  "eventType":"SEARCHRATE",
  "properties":{
"node":{
  "127.0.0.1:34133_solr":1.6470850586359402,
  "127.0.0.1:50633_solr":1.4392005366721807},
"replica":[
  "{\"core_node4\":{\n\"base_url\":\"https://127.0.0.1:34133/solr\",\n  
  \"node_name\":\"127.0.0.1:34133_solr\",\n
\"core\":\"collection1_shard1_replica_n3\",\n\"state\":\"active\",\n
\"type\":\"NRT\",\n\"rate\":1.6470850586359402}}",
  "{\"core_node2\":{\n\"core\":\"collection1_shard1_replica_n1\",\n
\"leader\":\"true\",\n\"rate\":1.4392005366721807,\n
\"base_url\":\"https://127.0.0.1:50633/solr\",\n
\"node_name\":\"127.0.0.1:50633_solr\",\n\"state\":\"active\",\n
\"type\":\"NRT\"}}"],
"collection":{"collection1":3.086285595308121},
"shard":{"collection1":{"shard1":3.086285595308121}},
"_enqueue_time_":151260766355900}}, message='null'}, 
TestEvent{timestamp=151260766938600, config={
  "trigger":"search_rate_trigger",
  "afterAction":[],
  "stage":[
"FAILED",
"SUCCEEDED"],
  
"class":"org.apache.solr.cloud.autoscaling.TriggerIntegrationTest$TestTriggerListener",
  "beforeAction":[]}, stage=SUCCEEDED, actionName='null', event={
  "id":"14fddca7e2330bc0T7tbbg5gq58duo66x9cjskvxgi",
  "source":"search_rate_trigger",
  "eventTime":151260766347900,
  "eventType":"SEARCHRATE",
  "properties":{
"node":{
  "127.0.0.1:34133_solr":3.930050086811845,
  "127.0.0.1:50633_solr":3.8187426788618777},
"replica":[
  "{\"core_node4\":{\n\"base_url\":\"https://127.0.0.1:34133/solr\",\n  
  \"node_name\":\"127.0.0.1:34133_solr\",\n

[jira] [Assigned] (SOLR-11734) Add ones, zeroes and natural Stream Evaluators

2017-12-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein reassigned SOLR-11734:
-

Assignee: Joel Bernstein

> Add ones, zeroes and natural Stream Evaluators
> --
>
> Key: SOLR-11734
> URL: https://issues.apache.org/jira/browse/SOLR-11734
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> The ones and zeros functions return arrays of a given length populated with 
> ones or zeros. The natural function returns a natural number sequence.






[jira] [Updated] (SOLR-11734) Add ones, zeroes and natural Stream Evaluators

2017-12-06 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-11734:
--
Fix Version/s: 7.3

> Add ones, zeroes and natural Stream Evaluators
> --
>
> Key: SOLR-11734
> URL: https://issues.apache.org/jira/browse/SOLR-11734
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.3
>
>
> The ones and zeros functions return arrays of a given length populated with 
> ones or zeros. The natural function returns a natural number sequence.






[jira] [Created] (SOLR-11734) Add ones, zeroes and natural Stream Evaluators

2017-12-06 Thread Joel Bernstein (JIRA)
Joel Bernstein created SOLR-11734:
-

 Summary: Add ones, zeroes and natural Stream Evaluators
 Key: SOLR-11734
 URL: https://issues.apache.org/jira/browse/SOLR-11734
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Joel Bernstein


The ones and zeros functions return arrays of a given length populated with ones 
or zeros. The natural function returns a natural number sequence.
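A rough sketch of what such evaluators would compute. The method names and the 0-based start of the natural sequence are assumptions for illustration, not taken from Solr's streaming-expression implementation:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class SequenceEvaluatorsSketch {
    // ones(n): array of length n filled with 1.0
    static double[] ones(int n) {
        double[] a = new double[n];
        Arrays.fill(a, 1.0);
        return a;
    }

    // zeros(n): array of length n filled with 0.0 (Java default)
    static double[] zeros(int n) {
        return new double[n];
    }

    // natural(n): the first n natural numbers (0-based here, by assumption)
    static double[] natural(int n) {
        return IntStream.range(0, n).asDoubleStream().toArray();
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(ones(3)));    // [1.0, 1.0, 1.0]
        System.out.println(Arrays.toString(zeros(3)));   // [0.0, 0.0, 0.0]
        System.out.println(Arrays.toString(natural(3))); // [0.0, 1.0, 2.0]
    }
}
```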






[jira] [Commented] (SOLR-11733) json.facet refinement fails to bubble up some long tail (overrequested) terms?

2017-12-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281258#comment-16281258
 ] 

Yonik Seeley commented on SOLR-11733:
-

I mentioned in SOLR-11729 the refinement algorithm being different (and for a 
single-level facet field, simpler).
It can be explained as:
1) find buckets to return as if you weren't doing refinement
2) for those buckets, make sure all shards have contributed to the statistics

I started with the simplest for obvious reasons... to get something out.  From 
a correctness POV, smarter faceting is equivalent to increasing the overrequest 
amount... we still can't make guarantees.
We could easily implement a mode for some field facets that does the "could 
this possibly be in the top N" logic to consider more buckets in the first 
phase... but only if it's not a sub-facet of another partial facet (a facet 
with something like a limit).
If a partial facet is a sub-facet of another partial-facet, the logic of what 
one can exclude seems to get harder, and then sub-facets need to add new 
candidate buckets to parent facets (I think? need to think about it more... but 
I guess that's part of my point ;-).  Good ideas perhaps, but definitely more 
difficult to implement.

Other refinement implementations could range all the way to "exact"... 
guarantee that no buckets are missed, and there's more than one way to go about 
that too.
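The two-phase algorithm described above can be sketched as a toy model (not Solr's actual code; the data structures are simplified). It also shows how a long-tail bucket can be missed when the candidate set is fixed after phase 1: below, term "b" has the second-highest true total but never makes any shard's top-2, so it can never be refined in.

```java
import java.util.*;
import java.util.stream.Collectors;

public class SimpleRefineSketch {
    /** Phase-1 shard response: each shard reports only its own top-K terms. */
    static Map<String, Long> shardTopK(Map<String, Long> shard, int k) {
        return shard.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .limit(k)
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }

    /** Two-phase "simple" refinement: candidates are fixed after phase 1. */
    public static Map<String, Long> refine(List<Map<String, Long>> shards,
                                           int perShardLimit, int topN) {
        // Phase 1: merge each shard's partial (top-K) response, pick top-N.
        Map<String, Long> merged = new HashMap<>();
        for (Map<String, Long> shard : shards)
            shardTopK(shard, perShardLimit)
                .forEach((t, c) -> merged.merge(t, c, Long::sum));
        List<String> candidates = merged.entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .limit(topN).map(Map.Entry::getKey).collect(Collectors.toList());
        // Phase 2: for the chosen buckets only, collect contributions from
        // every shard (simulated by a direct lookup), making counts exact.
        Map<String, Long> result = new LinkedHashMap<>();
        for (String term : candidates) {
            long total = 0;
            for (Map<String, Long> shard : shards)
                total += shard.getOrDefault(term, 0L);
            result.put(term, total);
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Long> s1 = Map.of("a", 5L, "b", 4L, "c", 1L);
        Map<String, Long> s2 = Map.of("c", 5L, "a", 4L, "b", 3L);
        // "b" totals 7 (second highest) but is outside both shards' top-2,
        // so refinement cannot recover it -- the long-tail effect above.
        System.out.println(refine(List.of(s1, s2), 2, 2));
    }
}
```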



> json.facet refinement fails to bubble up some long tail (overrequested) terms?
> --
>
> Key: SOLR-11733
> URL: https://issues.apache.org/jira/browse/SOLR-11733
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Something wonky is happening with {{json.facet}} refinement.
> "Long Tail" terms that may not be in the "top n" on every shard, but are in 
> the "top n + overrequest" for at least 1 shard aren't getting refined and 
> included in the aggragated response in some cases.
> I don't understand the code enough to explain this, but I have some steps to 
> reproduce that i'll post in a comment shortly






[jira] [Commented] (SOLR-11729) Increase default overrequest ratio/count in json.facet to match existing defaults for facet.overrequest.ratio & facet.overrequest.count ?

2017-12-06 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281243#comment-16281243
 ] 

Yonik Seeley commented on SOLR-11729:
-

bq. Yonik Seeley: do you remember if there was an explicit reason you 
chose those lower constants in the json.facet code?

Both sets of numbers were rather shots in the dark.  Two thoughts I had while 
lowering them:
- I had seen people suffering performance problems with really big limits... a 
50% overrequest can be overkill
- The cost of overrequest can now be much greater (due to nested facets & stats)

bq. It may seem like a small thing to worry about, but it can/will cause odd 
inconsistencies when people try to migrate

Aligning over-request limits still wouldn't prevent some differences... the 
refinement algorithm is currently different.
It seems like there are many logical ways to refine results - I originally 
thought about using refine:simple because I imagined we would have other 
implementations in the future.
Anyway, this one is the simplest one to think about and implement: the top 
buckets to return for all facets are determined in the first phase.  The second 
phase *only* gets contributions from other shards for those buckets.

Anyway... back to what the default amount of overrequest should be: I don't 
really have a strong opinion.
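For reference, the two default formulas diverge noticeably as the limit grows. A small sketch, with constants taken from the code quoted in this thread (FacetComponent's ratio=1.5/count=10 vs json.facet's 1.1/+4):

```java
public class OverrequestMath {
    // FacetComponent's historical formula (defaults: ratio=1.5, count=10);
    // it never drops below the original limit.
    static int facetComponentLimit(int limit, double ratio, int count) {
        int adjustedLimit = (int) (limit * ratio) + count;
        return Math.max(limit, adjustedLimit);
    }

    // json.facet's default effective shard limit when overrequest == -1
    // (constants 1.1 and 4, per the code quoted in SOLR-11729).
    static long jsonFacetLimit(long effectiveLimit) {
        return (long) (effectiveLimit * 1.1 + 4);
    }

    public static void main(String[] args) {
        // e.g. limit=10 -> 25 vs 15; limit=100 -> 160 vs 114
        for (int limit : new int[] {10, 100, 1000}) {
            System.out.printf("limit=%d facet.field=%d json.facet=%d%n",
                limit, facetComponentLimit(limit, 1.5, 10), jsonFacetLimit(limit));
        }
    }
}
```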




> Increase default overrequest ratio/count in json.facet to match existing 
> defaults for facet.overrequest.ratio & facet.overrequest.count ?
> -
>
> Key: SOLR-11729
> URL: https://issues.apache.org/jira/browse/SOLR-11729
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> When FacetComponent first got support for distributed search, the default 
> "effective shard limit" done on shards followed the formula...
> {code}
> limit = (int)(dff.initialLimit * 1.5) + 10;
> {code}
> ...over time, this became configurable with the introduction of some expert 
> level tuning options: {{facet.overrequest.ratio}} & 
> {{facet.overrequest.count}} -- but the defaults (and basic formula) remain 
> the same to this day...
> {code}
>   this.overrequestRatio
> = params.getFieldDouble(field, FacetParams.FACET_OVERREQUEST_RATIO, 
> 1.5);
>   this.overrequestCount 
> = params.getFieldInt(field, FacetParams.FACET_OVERREQUEST_COUNT, 10);
> ...
>   private int doOverRequestMath(int limit, double ratio, int count) {
> // NOTE: normally, "1.0F < ratio"
> //
> // if the user chooses a ratio < 1, we allow it and don't "bottom out" at
> // the original limit until *after* we've also added the count.
> int adjustedLimit = (int) (limit * ratio) + count;
> return Math.max(limit, adjustedLimit);
>   }
> {code}
> However...
> When {{json.facet}} multi-shard refinement was added, the code was written 
> slightly diff:
> * there is an explicit {{overrequest:N}} (count) option
> * if {{-1 == overrequest}} (which is the default) then an "effective shard 
> limit" is computed using the same basic formula as in FacetComponet -- _*but 
> the constants are different*_...
> ** {{effectiveLimit = (long) (effectiveLimit * 1.1 + 4);}}
> * For any (non "-1") user specified {{overrequest}} value, it's added 
> verbatim to the {{limit}} (which may have been user specified, or may just be 
> the default)
> ** {{effectiveLimit += freq.overrequest;}}
> Given the design of the {{json.facet}} syntax, I can understand why the code 
> path for an "advanced" user specified {{overrequest:N}} option avoids using 
> any (implicit) ratio calculation and just does the straight addition of 
> {{limit += overrequest}}.
> What I'm not clear on is the choice of the constants {{1.1}} and {{4}} in the 
> common (default) case, and why those differ from the historically used 
> {{1.5}} and {{10}}.
> 
> It may seem like a small thing to worry about, but it can/will cause odd 
> inconsistencies when people try to migrate simple {{facet.field=foo}} (or 
> {{facet.pivot=foo,bar}}) queries to {{json.facet}} -- I have also seen it 
> give people attempting these types of migrations the (mistaken) impression 
> that discrepancies they are seeing are because {{refine:true}} is not 
> working.
> For this reason, I propose we change the (default) {{overrequest:-1}} 
> behavior to use the same constants as the equivalent FacetComponent code...
> {code}
> if (fcontext.isShard()) {
>   if (freq.overrequest == -1) {
> // add over-request if this is a shard request and if we have a small 
> offset (large offsets will already be gathering many more buckets than needed)
> if (freq.offset < 10) {
>   effectiveLimit = (long) (effectiveLimit * 1.5 + 10);

Re: Plans for release of current svn?

2017-12-06 Thread Andi Vajda
 Hi Petrus,

> On Dec 6, 2017, at 06:45, Petrus Hyvönen  wrote:
> 
> Hi Andi,
> 
> I'm thinking about packaging a beta version of JCC "next version" on 
> conda-forge as I would need some of the recent bug fixes in some software I'm 
> updating (Orekit). 
> 
> Do you know what version number the next JCC version will have, v3.1 or 3.0.1? 
> (I would prefer not to use a higher number for the beta release)

Sorry this has fallen behind. The JCC part should be version 3.1 and is more or 
less ready for release. It’s being held up by PyLucene whose tests got broken 
with the Lucene 7.0 release.

Andi..

> 
> Many Thanks and Hope all well,
> Best Regards
> /Petrus
> 
> 
> 
> 
> 
>> On Tue, Sep 5, 2017 at 6:19 AM, Andi Vajda  wrote:
>> 
>>  Hi Petrus,
>> 
>>> On Mon, 4 Sep 2017, Petrus Hyvönen wrote:
>>> 
>>> Are there plans for releasing a new version soon with the bugs that are
>>> fixed in current svn?
>> 
>> Yes, once Lucene 7.0 is released, I intend to do a PyLucene 7.0 release 
>> with the latest in SVN. The Lucene 7.0 release process is ongoing...
>> 
>> Andi..
>> 
>> 
>>> 
>>> The reason for asking is that i'm finalizing some automated build versions
>>> for anaconda python distribution and would prefer to have these bugs fixed
>>> and not release beta versions.
>>> 
>>> Currenly got 3.0 release working:
>>> https://github.com/conda-forge/staged-recipes/pull/3770
>>> 
>>> Best Regards & Many Thanks,
>>> /Petrus
>>> 
>>> 
>>> -- 
>>> _
>>> Petrus Hyvönen, Uppsala, Sweden
>>> Mobile Phone/SMS:+46 73 803 19 00
> 
> 
> 
> -- 
> _
> Petrus Hyvönen, Uppsala, Sweden
> Mobile Phone/SMS:+46 73 803 19 00


[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1556 - Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1556/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.util.TestSolrCLIRunExample

Error Message:
ObjectTracker found 4 object(s) that were not released!!! [TransactionLog, 
NRTCachingDirectory, NRTCachingDirectory, NRTCachingDirectory] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.update.TransactionLog  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.update.TransactionLog.(TransactionLog.java:190)  at 
org.apache.solr.update.UpdateLog.newTransactionLog(UpdateLog.java:448)  at 
org.apache.solr.update.UpdateLog.ensureLog(UpdateLog.java:1261)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:534)  at 
org.apache.solr.update.UpdateLog.add(UpdateLog.java:519)  at 
org.apache.solr.update.DirectUpdateHandler2.doNormalUpdate(DirectUpdateHandler2.java:352)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc0(DirectUpdateHandler2.java:271)
  at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:221)
  at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:910)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1121)
  at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:616)
  at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
  at 
org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:475)
  at 
org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)  
at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readOuterMostDocIterator(JavaBinUpdateRequestCodec.java:188)
  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readIterator(JavaBinUpdateRequestCodec.java:144)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:311) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec$1.readNamedList(JavaBinUpdateRequestCodec.java:130)
  at org.apache.solr.common.util.JavaBinCodec.readObject(JavaBinCodec.java:276) 
 at org.apache.solr.common.util.JavaBinCodec.readVal(JavaBinCodec.java:256)  at 
org.apache.solr.common.util.JavaBinCodec.unmarshal(JavaBinCodec.java:178)  at 
org.apache.solr.client.solrj.request.JavaBinUpdateRequestCodec.unmarshal(JavaBinUpdateRequestCodec.java:195)
  at 
org.apache.solr.handler.loader.JavabinLoader.parseAndLoadDocs(JavabinLoader.java:108)
  at org.apache.solr.handler.loader.JavabinLoader.load(JavabinLoader.java:55)  
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
  at 
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:68)
  at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)  at 
org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)  at 
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
  at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
  at 
org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:139)
  at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) 
 at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:224)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)  
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
  at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)  
at 
org.eclipse.jetty.server.handler.gzip.GzipHandler.handle(GzipHandler.java:426)  
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) 
 at org.eclipse.jetty.server.Server.handle(Server.java:534)  at 

[jira] [Commented] (SOLR-11733) json.facet refinement fails to bubble up some long tail (overrequested) terms?

2017-12-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281188#comment-16281188
 ] 

Hoss Man commented on SOLR-11733:
-


Steps to reproduce..

*Build Collection & Index Some Data*

{noformat}
# start up a small solr cluster
$ bin/solr -e cloud -noprompt
...

# NOTE: we're ignoring the getting started collection that was created
# we'll make our own using the implicit router with one shard per node

$ curl 
'http://localhost:8983/solr/admin/collections?action=CREATE=test=implicit=2=shardX,shardY'
...

# Index 5 docs to *each* shard with:
# - the same "top 5" terms in all 5 docs on both shards
# - a common "tail" term in 2 docs on *both* shards
#   - w/a total of 4 docs, 
#   - some shard-specific "distracting" terms that each appear in only 3 docs, and 
always on a single shard
#   - On the 1st shard: there are 5 of these terms, such that 'tail' will be 
the #11 ranked term (on this shard)
#   - On the 2nd shard: 'tail' will be the #7 ranked term (on this shard)

$ curl -H 'Content-Type: application/json' 
'http://localhost:8983/solr/test/update?commit=true' --data-binary '[
{ "id": "1_1", "foo_t": "a1 a2 a3 a4 a5   x1 x2 x3 x4 x5" },
{ "id": "1_2", "foo_t": "a1 a2 a3 a4 a5   x1 x2 x3 x4 x5" },
{ "id": "1_3", "foo_t": "a1 a2 a3 a4 a5   x1 x2 x3 x4 x5" },
{ "id": "1_4", "foo_t": "a1 a2 a3 a4 a5   tail" },
{ "id": "1_5", "foo_t": "a1 a2 a3 a4 a5   tail" },
]'
...
$ curl -H 'Content-Type: application/json' 
'http://localhost:7574/solr/test/update?commit=true' --data-binary '[
{ "id": "2_1", "foo_t": "a1 a2 a3 a4 a5   yyy" },
{ "id": "2_2", "foo_t": "a1 a2 a3 a4 a5   yyy" },
{ "id": "2_3", "foo_t": "a1 a2 a3 a4 a5   yyy" },
{ "id": "2_4", "foo_t": "a1 a2 a3 a4 a5tail" },
{ "id": "2_5", "foo_t": "a1 a2 a3 a4 a5tail" },
]'
...
{noformat}
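As a sanity check on the rankings claimed in the comments below, the per-shard document frequencies implied by this data can be tallied with a quick sketch (plain Python, not Solr code) confirming that 'tail' lands at rank #11 on the first shard and #7 on the second:

```python
from collections import Counter

shard1 = ["a1 a2 a3 a4 a5 x1 x2 x3 x4 x5"] * 3 + ["a1 a2 a3 a4 a5 tail"] * 2
shard2 = ["a1 a2 a3 a4 a5 yyy"] * 3 + ["a1 a2 a3 a4 a5 tail"] * 2

def ranking(docs):
    # Document frequency per term (each term occurs at most once per doc here),
    # sorted by count descending, ties broken alphabetically.
    df = Counter(term for doc in docs for term in doc.split())
    return sorted(df, key=lambda t: (-df[t], t))

print(ranking(shard1).index("tail") + 1)  # rank of 'tail' on shard 1 -> 11
print(ranking(shard2).index("tail") + 1)  # rank of 'tail' on shard 2 -> 7
```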


*Sanity Check Queries*

With an excessive 'limit' or 'overrequest' we can verify that 'tail' is the #6 
ranked term overall (even with refinement explicitly disabled)

{noformat}
$ curl http://localhost:7574/solr/test/select -d 
'q=*:*=json=0={foo:{type:terms,field:foo_t,limit:7,overrequest:100,refine:false}}'
...
  "response":{"numFound":10,"start":0,"maxScore":1.0,"docs":[]
  },
  "facets":{
"count":10,
"foo":{
  "buckets":[{
  "val":"a1",
  "count":10},
{
  "val":"a2",
  "count":10},
{
  "val":"a3",
  "count":10},
{
  "val":"a4",
  "count":10},
{
  "val":"a5",
  "count":10},
{
  "val":"tail",
  "count":4},
{
  "val":"x1",
  "count":3}]}}}

$ curl http://localhost:7574/solr/test/select -d 
'q=*:*=json=0={foo:{type:terms,field:foo_t,limit:100,overrequest:0,refine:false}}'
...
  "facets":{
"count":10,
"foo":{
  "buckets":[{
  "val":"a1",
  "count":10},
{
  "val":"a2",
  "count":10},
{
  "val":"a3",
  "count":10},
{
  "val":"a4",
  "count":10},
{
  "val":"a5",
  "count":10},
{
  "val":"tail",
  "count":4},
{
  "val":"x1",
  "count":3},
...
{noformat}

Likewise, if we query each shard individually (w/ {{distrib=false}} ) we confirm 
that the "tail" term shows up in its expected ranking...

{noformat}
$ curl http://localhost:8983/solr/test/select -d 
'distrib=false=*:*=json=0={foo:{type:terms,field:foo_t,limit:11}}'
...
  "buckets":[{
  "val":"a1",
  "count":5},
{
  "val":"a2",
  "count":5},
{
  "val":"a3",
  "count":5},
{
  "val":"a4",
  "count":5},
{
  "val":"a5",
  "count":5},
{
  "val":"x1",
  "count":3},
{
  "val":"x2",
  "count":3},
{
  "val":"x3",
  "count":3},
{
  "val":"x4",
  "count":3},
{
  "val":"x5",
  "count":3},
{
  "val":"tail",
  "count":2}]}}}

$ curl http://localhost:7574/solr/test/select -d 
'distrib=false=*:*=json=0={foo:{type:terms,field:foo_t,limit:7}}'
...
  "buckets":[{
  "val":"a1",
  "count":5},
{
  "val":"a2",
  "count":5},
{
  "val":"a3",
  "count":5},
{
  "val":"a4",
  "count":5},
{
  "val":"a5",
  "count":5},
{
  "val":"yyy",
  "count":3},
{
  "val":"tail",
  "count":2}]}}}
{noformat}
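Those per-shard rankings are enough to model why refinement matters here. The sketch below (plain Python, an illustration of an unrefined merge only, not the actual JsonFacet code) has each shard return its top `limit + overrequest` terms and the coordinator simply sum whatever it received; 'tail' survives a generous overrequest but drops out whenever shard 1's contribution is cut off above rank 11:

```python
from collections import Counter

# Per-shard document frequencies implied by the indexed data above.
shards = [
    Counter({"a1": 5, "a2": 5, "a3": 5, "a4": 5, "a5": 5,
             "x1": 3, "x2": 3, "x3": 3, "x4": 3, "x5": 3, "tail": 2}),
    Counter({"a1": 5, "a2": 5, "a3": 5, "a4": 5, "a5": 5,
             "yyy": 3, "tail": 2}),
]

def merged_top(limit, overrequest):
    # Each shard contributes its top (limit + overrequest) terms; the
    # coordinator sums the partial counts with no refinement pass.
    merged = Counter()
    for shard in shards:
        top = sorted(shard, key=lambda t: (-shard[t], t))[:limit + overrequest]
        merged.update({t: shard[t] for t in top})
    return sorted(merged, key=lambda t: (-merged[t], t))[:limit]

print("tail" in merged_top(7, 100))  # True: overrequest reaches shard 1's 'tail'
print("tail" in merged_top(6, 0))    # False: shard 1 never reports 'tail'
```

With refinement, the coordinator would instead go back to shard 1 and ask for 'tail' explicitly, recovering the full count of 4.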



*Queries that Fail*

w/refinement, a limit of 6 (plus the implicit default overrequest) should be 
enough to find 'tail' -- but it's not included in the response from this 
query...

{noformat}
$ curl http://localhost:7574/solr/test/select -d 

[jira] [Created] (SOLR-11733) json.facet refinement fails to bubble up some long tail (overrequested) terms?

2017-12-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-11733:
---

 Summary: json.facet refinement fails to bubble up some long tail 
(overrequested) terms?
 Key: SOLR-11733
 URL: https://issues.apache.org/jira/browse/SOLR-11733
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man



Something wonky is happening with {{json.facet}} refinement.

"Long Tail" terms that may not be in the "top n" on every shard, but are in the 
"top n + overrequest" for at least 1 shard aren't getting refined and included 
in the aggregated response in some cases.

I don't understand the code enough to explain this, but I have some steps to 
reproduce that I'll post in a comment shortly.





--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11691) v2 api for CREATEALIAS fails if given a list with more than one element

2017-12-06 Thread Jason Gerlowski (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gerlowski updated SOLR-11691:
---
Attachment: SOLR-11691.patch
repro.sh

I've attached a patch to correct this behavior.  With this patch, CREATEALIAS 
now supports a proper JSON array {{["a", "b"]}}, as well as the previously 
accepted formats (comma-delimited values, and comma-delimited values inside a 
JSON array).

Also attached is a bash script reproducing the problem (and exhibiting the 
correct behavior when the patch has been applied).



> v2 api for CREATEALIAS fails if given a list with more than one element
> ---
>
> Key: SOLR-11691
> URL: https://issues.apache.org/jira/browse/SOLR-11691
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Affects Versions: master (8.0)
>Reporter: Gus Heck
> Attachments: SOLR-11691.patch, repro.sh
>
>
> Successful, correct:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1"]
>   }
> }
> {code}
> Successful, but wrong:
> {code}
> {
>   "create-alias" : {
> "name": "testalias1",
> "collections":["collection1,collection2"]
>   }
> }
> {code}
> Fails, but should work based on details in _introspect:
> {code}
> {
>   "create-alias" : {
> "name": "testalias2",
> "collections":["collection1","collection2"]
>   }
> }
> {code}
> The error returned is:
> {code}
> {
> "responseHeader": {
> "status": 400,
> "QTime": 25
> },
> "Operation createalias caused exception:": 
> "org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: 
> Can't create collection alias for collections='[collection1, collection2]', 
> '[collection1' is not an existing collection or alias",
> "exception": {
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "rspCode": 400
> },
> "error": {
> "metadata": [
> "error-class",
> "org.apache.solr.common.SolrException",
> "root-error-class",
> "org.apache.solr.common.SolrException"
> ],
> "msg": "Can't create collection alias for collections='[collection1, 
> collection2]', '[collection1' is not an existing collection or alias",
> "code": 400
> }
> }
> {code}
> whereas 
> {code}
> GET localhost:8981/api/c
> {code}
> yields
> {code}
> {
> "responseHeader": {
> "status": 0,
> "QTime": 0
> },
> "collections": [
> "collection2",
> "collection1"
> ]
> }
> {code}
> Introspection shows:
> {code}
>  "collections": {
>  "type": "array",
>  "description": "The list of collections to be known as this alias.",
>   "items": {
>   "type": "string"
>}
>   },
> {code}
> Basically the property is documented as an array, but parsed as a string (I 
> suspect it's parsed as a list but then the toString value of the list is 
> used, but haven't checked). We have a conflict between what is natural for 
> expressing a list in JSON (an array) and what is natural for expressing a 
> list as a parameter (comma separation). I'm unsure how best to resolve this, 
> as it's a question of making "direct translation" to v2 work vs making v2 
> more natural. I tend to favor accepting an array and therefore making v2 more 
> natural which would be more work, but want to know what others think. From a 
> back compatibility perspective, that direction also makes this clearly a bug 
> fix rather than a breaking change since it doesn't match the _introspect 
> documentation. I also haven't tried looking at old versions to find any 
> evidence as to whether the documented form worked previously... so I don't 
> know if this is a regression or if it never worked.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 950 - Still Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/950/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.TestCollectionAPI.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:42635/mr_mh/u

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:42635/mr_mh/u
at 
__randomizedtesting.SeedInfo.seed([E3487A53CB3A80ED:6B1C458965C6ED15]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (LUCENE-8080) GeoExactCircle improvement

2017-12-06 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281176#comment-16281176
 ] 

Karl Wright commented on LUCENE-8080:
-

[~ivera], now I'm getting:

{code}
fatal: corrupt patch at line 90
{code}

> GeoExactCircle improvement
> --
>
> Key: LUCENE-8080
> URL: https://issues.apache.org/jira/browse/LUCENE-8080
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-8080-test.patch, LUCENE-8080.patch
>
>
> Hi [~daddywri],
> Current implementation of GeoExactCircle seems to work well for planet models 
> with low flattening (~|0.025|). When flattening increases, shapes start 
> becoming invalid because of the cutting angle of the circle plane which 
> results on the center of the circle ending up on the wrong side of the plane. 
> I propose a new version of GeoExactCircle that tries to overcome this problem 
> by creating a new plane for a circle sector in such cases. The new plane is 
> built for each sector when needed by using two points from the circle 
> edge and the center of the world. The plane is such that it is built as close 
> as possible to the circle plane of the sector. Points from the circle plane 
> must not be within the new plane and the center of the circle must be within 
> the plane.
> This approach seems to work well up to planets with flattening up to around 
> ~|0.1|. I think after that the cutting angles of circle planes can be so thin 
> that the approach is not valid. 
> Therefore I propose to add this new approach and limit the creation of such 
> circles to planet models with flattening lower than |0.1|. Probably a 
> limitation that does not affect most of the realistic cases.
> In addition this new version forces a minimum of 4 sectors in a circle. The 
> issue on LUCENE-8071 came up again for circles of any radius so we should 
> enforce it for all circles.
> Thanks!






[jira] [Updated] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2017-12-06 Thread Evan Fagerberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Fagerberg updated SOLR-11732:
--
Description: 
When running the getting started example of solr 5.5 and 6.6 I noticed a 
peculiar behavior that occurs for single character queries under the /spell 
requestHandler

When searching for a single term the response from solr does not have a spell 
check section compared to a two character query which does.

This seems to be independent from things like minPrefix (1 by default) and 
minQueryLength (4 by default).

I would expect that any number of characters would return a spellcheck section.

I first came across this when trying to upgrade a solr plugin that had tests 
using single queries to assert suggestion counts.

I have included screenshots of the response.



  was:
When running the getting started example of solr 5.5 and 6.6 I noticed a 
peculiar behavior that occurs for single character queries under the /spell 
requestHandler

When searcher for a single term the response from solr does not have a spell 
check section compared to a two character query which does.

This seems to be independent from things like minPrefix (1 by default) and 
minQueryLength 4 by default).

I would expect that any number of character would return a spellcheck section.

I first came across this when trying to upgrade a solr plugin that had tests 
using single queries to assert suggestion counts.

I have included screenshots of the response.




> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png, Screen Shot 
> 2017-12-06 at 7.09.33 PM.png
>
>
> When running the getting started example of solr 5.5 and 6.6 I noticed a 
> peculiar behavior that occurs for single character queries under the /spell 
> requestHandler
> When searching for a single term the response from solr does not have a spell 
> check section compared to a two character query which does.
> This seems to be independent from things like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect that any number of characters would return a spellcheck section.
> I first came across this when trying to upgrade a solr plugin that had tests 
> using single queries to assert suggestion counts.
> I have included screenshots of the response.






[jira] [Updated] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2017-12-06 Thread Evan Fagerberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Fagerberg updated SOLR-11732:
--
Description: 
When running the getting started example of solr 5.5 and 6.6 I noticed a 
peculiar behavior that occurs for single term queries under the /spell 
requestHandler

When searching for a single term the response from solr does not have a spell 
check section compared to a two character query which does.

This seems to be independent from things like minPrefix (1 by default) and 
minQueryLength (4 by default).

I would expect that any number of characters would return a spellcheck section.

I first came across this when trying to upgrade a solr plugin that had tests 
using single queries to assert suggestion counts.

I have included screenshots of the response.



  was:
When running the getting started example of solr 5.5 and 6.6 I noticed a 
peculiar behavior that occurs for single term queries under the /spell 
requestHandler

When searcher for a single term the response from solr does not have a spell 
check section compared to a two character query which does.

This seems to be independent from things like minPrefix (1 by default) and 
minQueryLength 4 by default).

I would expect that any number of character would return a spellcheck section.

I first came across this when trying to upgrade a solr plugin that had tests 
using single queries to assert suggestion counts.




> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png, Screen Shot 
> 2017-12-06 at 7.09.33 PM.png
>
>
> When running the getting started example of solr 5.5 and 6.6 I noticed a 
> peculiar behavior that occurs for single term queries under the /spell 
> requestHandler
> When searching for a single term the response from solr does not have a spell 
> check section compared to a two character query which does.
> This seems to be independent from things like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect that any number of characters would return a spellcheck section.
> I first came across this when trying to upgrade a solr plugin that had tests 
> using single queries to assert suggestion counts.
> I have included screenshots of the response.






[jira] [Updated] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2017-12-06 Thread Evan Fagerberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Fagerberg updated SOLR-11732:
--
Description: 
When running the getting started example of solr 5.5 and 6.6 I noticed a 
peculiar behavior that occurs for single character queries under the /spell 
requestHandler

When searching for a single term the response from solr does not have a spell 
check section compared to a two character query which does.

This seems to be independent from things like minPrefix (1 by default) and 
minQueryLength (4 by default).

I would expect that any number of characters would return a spellcheck section.

I first came across this when trying to upgrade a solr plugin that had tests 
using single queries to assert suggestion counts.

I have included screenshots of the response.



  was:
When running the getting started example of solr 5.5 and 6.6 I noticed a 
peculiar behavior that occurs for single term queries under the /spell 
requestHandler

When searcher for a single term the response from solr does not have a spell 
check section compared to a two character query which does.

This seems to be independent from things like minPrefix (1 by default) and 
minQueryLength 4 by default).

I would expect that any number of character would return a spellcheck section.

I first came across this when trying to upgrade a solr plugin that had tests 
using single queries to assert suggestion counts.

I have included screenshots of the response.




> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png, Screen Shot 
> 2017-12-06 at 7.09.33 PM.png
>
>
> When running the getting started example of solr 5.5 and 6.6 I noticed a 
> peculiar behavior that occurs for single character queries under the /spell 
> requestHandler
> When searching for a single term the response from solr does not have a spell 
> check section compared to a two character query which does.
> This seems to be independent from things like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect that any number of characters would return a spellcheck section.
> I first came across this when trying to upgrade a solr plugin that had tests 
> using single queries to assert suggestion counts.
> I have included screenshots of the response.






[jira] [Issue Comment Deleted] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2017-12-06 Thread Evan Fagerberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Fagerberg updated SOLR-11732:
--
Comment: was deleted

(was: One character spellcheck response)

> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png, Screen Shot 
> 2017-12-06 at 7.09.33 PM.png
>
>
> When running the getting started example of solr 5.5 and 6.6 I noticed a 
> peculiar behavior that occurs for single term queries under the /spell 
> requestHandler
> When searching for a single term the response from solr does not have a spell 
> check section compared to a two character query which does.
> This seems to be independent from things like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect that any number of characters would return a spellcheck section.
> I first came across this when trying to upgrade a solr plugin that had tests 
> using single queries to assert suggestion counts.






[jira] [Updated] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2017-12-06 Thread Evan Fagerberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Fagerberg updated SOLR-11732:
--
Attachment: Screen Shot 2017-12-06 at 7.09.33 PM.png

> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png, Screen Shot 
> 2017-12-06 at 7.09.33 PM.png
>
>
> When running the getting started example of solr 5.5 and 6.6 I noticed a 
> peculiar behavior that occurs for single term queries under the /spell 
> requestHandler
> When searcher for a single term the response from solr does not have a spell 
> check section compared to a two character query which does.
> This seems to be independent from things like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect that any number of characters would return a spellcheck section.
> I first came across this when trying to upgrade a solr plugin that had tests 
> using single queries to assert suggestion counts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2017-12-06 Thread Evan Fagerberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Fagerberg updated SOLR-11732:
--
Attachment: Screen Shot 2017-12-06 at 7.03.24 PM.png

One character spellcheck response

> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png
>
>
> When running the getting started example of Solr 5.5 and 6.6, I noticed a 
> peculiar behavior that occurs for single-term queries under the /spell 
> requestHandler.
> When searching for a single term, the response from Solr does not have a 
> spellcheck section, whereas a two-character query does.
> This seems to be independent of settings like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect that any number of characters would return a spellcheck section.
> I first came across this when trying to upgrade a Solr plugin that had tests 
> using single-term queries to assert suggestion counts.
> one character:
> two characters:



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2017-12-06 Thread Evan Fagerberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Fagerberg updated SOLR-11732:
--
Description: 
When running the getting started example of Solr 5.5 and 6.6, I noticed a 
peculiar behavior that occurs for single-term queries under the /spell 
requestHandler.

When searching for a single term, the response from Solr does not have a 
spellcheck section, whereas a two-character query does.

This seems to be independent of settings like minPrefix (1 by default) and 
minQueryLength (4 by default).

I would expect that any number of characters would return a spellcheck section.

I first came across this when trying to upgrade a Solr plugin that had tests 
using single-term queries to assert suggestion counts.



  was:
When running the getting started example of solr 5.5 and 6.6 I noticed a 
peculiar behavior that occurs for single term queries under the /spell 
requestHandler

When searcher for a single term the response from solr does not have a spell 
check section compared to a two character query which does.

This seems to be independent from things like minPrefix (1 by default) and 
minQueryLength 4 by default).

I would expect that any number of character would return a spellcheck section.

I first came across this when trying to upgrade a solr plugin that had tests 
using single queries to assert suggestion counts.

one character:

two characters:




> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png
>
>
> When running the getting started example of Solr 5.5 and 6.6, I noticed a 
> peculiar behavior that occurs for single-term queries under the /spell 
> requestHandler.
> When searching for a single term, the response from Solr does not have a 
> spellcheck section, whereas a two-character query does.
> This seems to be independent of settings like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect that any number of characters would return a spellcheck section.
> I first came across this when trying to upgrade a Solr plugin that had tests 
> using single-term queries to assert suggestion counts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2017-12-06 Thread Evan Fagerberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Fagerberg updated SOLR-11732:
--
Description: 
When running the getting started example of Solr 5.5 and 6.6, I noticed a 
peculiar behavior that occurs for single-term queries under the /spell 
requestHandler.

When searching for a single term, the response from Solr does not have a 
spellcheck section, whereas a two-character query does.

This seems to be independent of settings like minPrefix (1 by default) and 
minQueryLength (4 by default).

I would expect that any number of characters would return a spellcheck section.

I first came across this when trying to upgrade a Solr plugin that had tests 
using single-term queries to assert suggestion counts.

one character:

two characters:



  was:
When running the getting started example of solr 5.5 and 6.6 I noticed a 
peculiar behavior that occurs for single term queries under the /spell 
requestHandler

When searcher for a single term the response from solr does not have a spell 
check section compared to a two character query which does.

This seems to be independent from things like minPrefix (1 by default) and 
minQueryLength 4 by default).

I would expect that any number of character would return a spellcheck section.

I first came across this when trying to upgrade a solr plugin that had tests 
using single queries to assert suggestion counts.





> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6
>Reporter: Evan Fagerberg
>Priority: Minor
> Attachments: Screen Shot 2017-12-06 at 7.03.24 PM.png
>
>
> When running the getting started example of Solr 5.5 and 6.6, I noticed a 
> peculiar behavior that occurs for single-term queries under the /spell 
> requestHandler.
> When searching for a single term, the response from Solr does not have a 
> spellcheck section, whereas a two-character query does.
> This seems to be independent of settings like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect that any number of characters would return a spellcheck section.
> I first came across this when trying to upgrade a Solr plugin that had tests 
> using single-term queries to assert suggestion counts.
> one character:
> two characters:



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single character query cases

2017-12-06 Thread Evan Fagerberg (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Evan Fagerberg updated SOLR-11732:
--
Summary: Solr 5.5,6.6 spellchecker does not return the same response in 
single character query cases  (was: Solr 5.5,6.6 spellchecker does not return 
the same response in single query term cases)

> Solr 5.5,6.6 spellchecker does not return the same response in single 
> character query cases
> ---
>
> Key: SOLR-11732
> URL: https://issues.apache.org/jira/browse/SOLR-11732
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Affects Versions: 5.5, 6.6
>Reporter: Evan Fagerberg
>Priority: Minor
>
> When running the getting started example of Solr 5.5 and 6.6, I noticed a 
> peculiar behavior that occurs for single-term queries under the /spell 
> requestHandler.
> When searching for a single term, the response from Solr does not have a 
> spellcheck section, whereas a two-character query does.
> This seems to be independent of settings like minPrefix (1 by default) and 
> minQueryLength (4 by default).
> I would expect that any number of characters would return a spellcheck section.
> I first came across this when trying to upgrade a Solr plugin that had tests 
> using single-term queries to assert suggestion counts.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11732) Solr 5.5,6.6 spellchecker does not return the same response in single query term cases

2017-12-06 Thread Evan Fagerberg (JIRA)
Evan Fagerberg created SOLR-11732:
-

 Summary: Solr 5.5,6.6 spellchecker does not return the same 
response in single query term cases
 Key: SOLR-11732
 URL: https://issues.apache.org/jira/browse/SOLR-11732
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: spellchecker
Affects Versions: 6.6, 5.5
Reporter: Evan Fagerberg
Priority: Minor


When running the getting started example of Solr 5.5 and 6.6, I noticed a 
peculiar behavior that occurs for single-term queries under the /spell 
requestHandler.

When searching for a single term, the response from Solr does not have a 
spellcheck section, whereas a two-character query does.

This seems to be independent of settings like minPrefix (1 by default) and 
minQueryLength (4 by default).

I would expect that any number of characters would return a spellcheck section.

I first came across this when trying to upgrade a Solr plugin that had tests 
using single-term queries to assert suggestion counts.
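
For reference, the parameters mentioned above live in the spellcheck component and /spell handler configuration. The following is a minimal sketch based on the stock sample_techproducts_configs solrconfig.xml; the component, dictionary, and field names ("spellcheck", "default", "text") are taken from that example and may differ in other configs:

```xml
<!-- Sketch of a DirectSolrSpellChecker wired to a /spell handler.
     Names follow the techproducts example config; adjust to taste. -->
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="field">text</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <!-- minPrefix: leading characters that must match (default 1) -->
    <int name="minPrefix">1</int>
    <!-- minQueryLength: shortest term the checker considers (default 4) -->
    <int name="minQueryLength">4</int>
  </lst>
</searchComponent>

<requestHandler name="/spell" class="solr.SearchHandler" startup="lazy">
  <lst name="defaults">
    <str name="spellcheck">true</str>
    <str name="spellcheck.dictionary">default</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

If the defaults above apply, both one- and two-character terms fall below minQueryLength, which is why the missing spellcheck section for one-character queries alone looks inconsistent.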






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-06 Thread Karthik Ramachandran (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Ramachandran updated SOLR-11331:

Attachment: SOLR-11331.patch

> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
> Attachments: SOLR-11331.patch
>
>
> Ability to Debug Solr With Eclipse IDE



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281128#comment-16281128
 ] 

ASF GitHub Bot commented on SOLR-11331:
---

Github user mrkarthik commented on the issue:

https://github.com/apache/lucene-solr/pull/287
  
@uschindler I have managed to copy all required folders to eclipse-build; 
can you review the changes?


> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
>
> Ability to Debug Solr With Eclipse IDE



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #287: SOLR-11331: Ability to Debug Solr With Eclipse IDE

2017-12-06 Thread mrkarthik
Github user mrkarthik commented on the issue:

https://github.com/apache/lucene-solr/pull/287
  
@uschindler I have managed to copy all required folders to eclipse-build; 
can you review the changes?


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281122#comment-16281122
 ] 

ASF GitHub Bot commented on SOLR-11331:
---

Github user mrkarthik commented on the issue:

https://github.com/apache/lucene-solr/pull/245
  
Cannot merge new changes, created PR 
https://github.com/apache/lucene-solr/pull/287


> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
>
> Ability to Debug Solr With Eclipse IDE



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281123#comment-16281123
 ] 

ASF GitHub Bot commented on SOLR-11331:
---

Github user mrkarthik closed the pull request at:

https://github.com/apache/lucene-solr/pull/245


> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
>
> Ability to Debug Solr With Eclipse IDE



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #245: SOLR-11331: Ability to Debug Solr With Eclips...

2017-12-06 Thread mrkarthik
Github user mrkarthik closed the pull request at:

https://github.com/apache/lucene-solr/pull/245


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #245: SOLR-11331: Ability to Debug Solr With Eclipse IDE

2017-12-06 Thread mrkarthik
Github user mrkarthik commented on the issue:

https://github.com/apache/lucene-solr/pull/245
  
Cannot merge new changes, created PR 
https://github.com/apache/lucene-solr/pull/287


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11331) Ability to Debug Solr With Eclipse IDE

2017-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16281120#comment-16281120
 ] 

ASF GitHub Bot commented on SOLR-11331:
---

GitHub user mrkarthik opened a pull request:

https://github.com/apache/lucene-solr/pull/287

SOLR-11331: Ability to Debug Solr With Eclipse IDE



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mrkarthik/lucene-solr jira/SOLR-11331

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/287.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #287






> Ability to Debug Solr With Eclipse IDE
> --
>
> Key: SOLR-11331
> URL: https://issues.apache.org/jira/browse/SOLR-11331
> Project: Solr
>  Issue Type: Improvement
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
>Priority: Minor
>
> Ability to Debug Solr With Eclipse IDE



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #287: SOLR-11331: Ability to Debug Solr With Eclips...

2017-12-06 Thread mrkarthik
GitHub user mrkarthik opened a pull request:

https://github.com/apache/lucene-solr/pull/287

SOLR-11331: Ability to Debug Solr With Eclipse IDE



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mrkarthik/lucene-solr jira/SOLR-11331

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/287.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #287






---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.2 - Build # 1 - Failure

2017-12-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.2/1/

1 tests failed.
FAILED:  org.apache.lucene.codecs.TestCodecLoadingDeadlock.testDeadlock

Error Message:
Process died abnormally expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: Process died abnormally expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([BC66C23D304C6070:B10D23293616CDA6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.codecs.TestCodecLoadingDeadlock.testDeadlock(TestCodecLoadingDeadlock.java:72)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2.evaluate(ThreadLeakControl.java:404)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSuite(RandomizedRunner.java:705)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.access$200(RandomizedRunner.java:139)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$2.run(RandomizedRunner.java:626)




Build Log:
[...truncated 14 lines...]
ERROR: Error cloning remote repo 'origin'
hudson.plugins.git.GitException: Command "git fetch --tags --progress 
git://git.apache.org/lucene-solr.git +refs/heads/*:refs/remotes/origin/*" 
returned status code 128:
stdout: 
stderr: remote: Counting objects: 140508   
remote: Counting objects: 297748   
remote: Counting objects: 418079   
remote: Counting objects: 535334   
remote: Counting objects: 672926   
remote: Counting objects: 816070   
remote: Counting objects: 931490, done.
remote: Compressing objects:   0% (1/184496)   
[...truncated 25 lines...]
remote: Compressing objects:  26% 

[jira] [Updated] (SOLR-11729) Increase default overrequest ratio/count in json.facet to match existing defaults for facet.overrequest.ratio & facet.overrequest.count ?

2017-12-06 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-11729:

Description: 
When FacetComponent first got support for distributed search, the default 
"effective shard limit" done on shards followed the formula...

{code}
limit = (int)(dff.initialLimit * 1.5) + 10;
{code}

...over time, this became configurable with the introduction of some expert 
level tuning options: {{facet.overrequest.ratio}} & {{facet.overrequest.count}} 
-- but the defaults (and basic formula) remain the same to this day...

{code}
  this.overrequestRatio
= params.getFieldDouble(field, FacetParams.FACET_OVERREQUEST_RATIO, 
1.5);
  this.overrequestCount 
= params.getFieldInt(field, FacetParams.FACET_OVERREQUEST_COUNT, 10);
...
  private int doOverRequestMath(int limit, double ratio, int count) {
// NOTE: normally, "1.0F < ratio"
//
// if the user chooses a ratio < 1, we allow it and don't "bottom out" at
// the original limit until *after* we've also added the count.
int adjustedLimit = (int) (limit * ratio) + count;
return Math.max(limit, adjustedLimit);
  }
{code}

However...


When {{json.facet}} multi-shard refinement was added, the code was written 
slightly differently:

* there is an explicit {{overrequest:N}} (count) option
* if {{-1 == overrequest}} (which is the default) then an "effective shard 
limit" is computed using the same basic formula as in FacetComponent -- _*but 
the constants are different*_...
** {{effectiveLimit = (long) (effectiveLimit * 1.1 + 4);}}
* For any (non "-1") user specified {{overrequest}} value, it's added verbatim 
to the {{limit}} (which may have been user specified, or may just be the 
default)
** {{effectiveLimit += freq.overrequest;}}


Given the design of the {{json.facet}} syntax, I can understand why the code 
path for an "advanced" user specified {{overrequest:N}} option avoids using any 
(implicit) ratio calculation and just does the straight addition of {{limit += 
overrequest}}.

What I'm not clear on is the choice of the constants {{1.1}} and {{4}} in the 
common (default) case, and why those differ from the historically used {{1.5}} 
and {{10}}.
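
To make the difference concrete, here is a small standalone arithmetic sketch (not Solr code; the method names are mine, the constants come from the snippets quoted above) applying both default formulas to the default facet limit of 10:

```java
// Standalone sketch comparing the two default over-request formulas
// for a shard-level facet limit. Method names are hypothetical.
public class OverrequestMath {

    // FacetComponent default: ratio 1.5, count 10
    static int facetComponentLimit(int limit) {
        int adjustedLimit = (int) (limit * 1.5) + 10;
        return Math.max(limit, adjustedLimit);
    }

    // json.facet default (overrequest == -1): ratio 1.1, count 4
    static long jsonFacetLimit(long effectiveLimit) {
        return (long) (effectiveLimit * 1.1 + 4);
    }

    public static void main(String[] args) {
        // With the default limit of 10, the legacy path asks each shard
        // for 25 buckets, while json.facet asks for only 15.
        System.out.println("FacetComponent: " + facetComponentLimit(10));
        System.out.println("json.facet:     " + jsonFacetLimit(10));
    }
}
```

So with identical requests, json.facet gathers noticeably fewer candidate buckets per shard, which is enough to produce the top-N discrepancies described above.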



It may seem like a small thing to worry about, but it can/will cause odd 
inconsistencies when people try to migrate simple {{facet.field=foo}} (or 
{{facet.pivot=foo,bar}}) queries to {{json.facet}} -- I have also seen it give 
people attempting these types of migrations the (mistaken) impression that 
discrepancies they are seeing are because {{refine:true}} is not working.

For this reason, I propose we change the (default) {{overrequest:-1}} behavior 
to use the same constants as the equivalent FacetComponent code...

{code}
if (fcontext.isShard()) {
  if (freq.overrequest == -1) {
// add over-request if this is a shard request and if we have a small 
offset (large offsets will already be gathering many more buckets than needed)
if (freq.offset < 10) {
  effectiveLimit = (long) (effectiveLimit * 1.5 + 10);
}
...
{code}


  was:
When FacetComponent first got support for distributed search, the default 
"effective shard limit" done on shards followed the formula...

{code}
limit = (int)(dff.initialLimit * 1.5) + 10;
{code}

...over time, this became configurable with the introduction of some expert 
level tuning options: {{facet.overrequest.ratio}} & {{facet.overrequest.count}} 
-- but the defaults (and basic formula) remain the same to this day...

{code}
  this.overrequestRatio
= params.getFieldDouble(field, FacetParams.FACET_OVERREQUEST_RATIO, 
1.5);
  this.overrequestCount 
= params.getFieldInt(field, FacetParams.FACET_OVERREQUEST_COUNT, 10);
...
  private int doOverRequestMath(int limit, double ratio, int count) {
// NOTE: normally, "1.0F < ratio"
//
// if the user chooses a ratio < 1, we allow it and don't "bottom out" at
// the original limit until *after* we've also added the count.
int adjustedLimit = (int) (limit * ratio) + count;
return Math.max(limit, adjustedLimit);
  }
{code}

However...


When {{json.facet}} multi-shard refinement was added, the code was written 
slightly diff:

* there is an explicit {{overrequest:N}} (count) option
* if {{-1 == overrequest}} (which is the default) then an "effective shard 
limit" is computed using the same basic formula as in FacetComponet -- _*but 
the constants are different*_...
** {{effectiveLimit = (long) (effectiveLimit * 1.1 + 4);}}
* For any (non "-1") user specified {{overrequest}} value, it's added verbatim 
to the {{limit}} (which may have been user specified, or may just be the 
default)
** {{effectiveLimit += freq.overrequest;}}


Given the design of the {{json.facet}} syntax, I can understand why the code 
path for an "advanced" user specified {{overrequest:N}} option avoids using any 
(implicit) ratio calculation and just does the 

[jira] [Commented] (SOLR-11304) Exception while returning document if LatLonPointSpatialField field is not stored

2017-12-06 Thread Karthik Ramachandran (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280898#comment-16280898
 ] 

Karthik Ramachandran commented on SOLR-11304:
-

Sure, will submit it sometime this week

> Exception while returning document if LatLonPointSpatialField field is not 
> stored
> -
>
> Key: SOLR-11304
> URL: https://issues.apache.org/jira/browse/SOLR-11304
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6, 7.0
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
> Fix For: 7.2
>
> Attachments: SOLR-11304.patch
>
>
> NullPointerException while retrieving the document if LatLonPointSpatialField 
> is not stored
> {code:xml}
>  docValues="true"/>
>  stored="false"/>
> {code}
> {code}
> 2017-08-31 12:18:23.368 ERROR (qtp1866850137-23) [   x:latlon] 
> o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
>   at 
> org.apache.solr.search.SolrDocumentFetcher.decorateDocValueFields(SolrDocumentFetcher.java:510)
>   at org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:160)
>   at org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:1)
>   at 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)
>   at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)
>   at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
>   at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
>   at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
>   at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:806)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:535)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> 

[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 902 - Still Failing

2017-12-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/902/

No tests ran.

Build Log:
[...truncated 28007 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.07 sec (3.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 29.8 MB in 1.07 sec (27.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 70.9 MB in 1.63 sec (43.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 81.3 MB in 1.94 sec (42.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6184 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6184 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] 
   [smoker] command "export JAVA_HOME="/home/jenkins/tools/java/latest1.8" 
PATH="/home/jenkins/tools/java/latest1.8/bin:$PATH" 
JAVACMD="/home/jenkins/tools/java/latest1.8/bin/java"; ant validate" failed:
   [smoker] Buildfile: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build.xml
   [smoker] 
   [smoker] common.compile-tools:
   [smoker] 
   [smoker] -check-git-state:
   [smoker] 
   [smoker] -git-cleanroot:
   [smoker] 
   [smoker] -copy-git-state:
   [smoker] 
   [smoker] git-autoclean:
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its 
length is 0.
   [smoker] 
   [smoker] -ivy-fail-disallowed-ivy-version:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: Apache Ivy 2.4.0 - 20141213170938 :: 
http://ant.apache.org/ivy/ ::
   [smoker] [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/top-level-ivy-settings.xml
   [smoker] 
   [smoker] resolve:
   [smoker] 
   [smoker] init:
   [smoker] 
   [smoker] compile-core:
   [smoker] [mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build/tools/classes/java
   [smoker] [javac] Compiling 7 source files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build/tools/classes/java
   [smoker]  [copy] Copying 1 file to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/build/tools/classes/java
   [smoker] 
   [smoker] compile-tools:
   [smoker] 
   [smoker] compile-tools:
   [smoker] 
   [smoker] common.compile-tools:
   [smoker] 
   [smoker] -check-git-state:
   [smoker] 
   [smoker] -git-cleanroot:
   [smoker] 
   [smoker] -copy-git-state:
   [smoker] 
   [smoker] git-autoclean:
   [smoker] 
   [smoker] ivy-availability-check:
   [smoker] [loadresource] Do not set property disallowed.ivy.jars.list as its 
length is 0.
   [smoker] 
   [smoker] -ivy-fail-disallowed-ivy-version:
   [smoker] 
   [smoker] ivy-fail:
   [smoker] 
   [smoker] ivy-configure:
   [smoker] [ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/lucene-8.0.0/top-level-ivy-settings.xml
   [smoker] 
   [smoker] resolve:
   [smoker] 
   [smoker] init:
   [smoker] 
   [smoker] compile-core:
   [smoker] 
   [smoker] compile-tools:
   [smoker] [mkdir] Created dir: 

[JENKINS-EA] Lucene-Solr-7.2-Linux (64bit/jdk-10-ea+32) - Build # 21 - Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Linux/21/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZk2Test.test

Error Message:
Timeout occured while waiting response from server at: http://127.0.0.1:41643

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:41643
at 
__randomizedtesting.SeedInfo.seed([E0D59D6BFD9E7742:6881A2B153621ABA]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:314)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (SOLR-11731) LatLonPointSpatialField could be improved to return the lat/lon from docValues

2017-12-06 Thread David Smiley (JIRA)
David Smiley created SOLR-11731:
---

 Summary: LatLonPointSpatialField could be improved to return the 
lat/lon from docValues
 Key: SOLR-11731
 URL: https://issues.apache.org/jira/browse/SOLR-11731
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: spatial
Reporter: David Smiley
Priority: Minor


You can only return the lat & lon from a LatLonPointSpatialField if you set 
stored=true.  But we could allow this (albeit at a small loss in precision) if 
stored=false and docValues=true.
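
A rough sketch of the schema shape this improvement targets, with hypothetical field and type names (they are not from the issue, just for illustration):

```xml
<!-- Hypothetical schema fragment: with docValues="true", lat/lon could be
     returned at query time (slightly lossy) even though stored="false".
     The names "location" and "home_ll" are invented for this example. -->
<fieldType name="location" class="solr.LatLonPointSpatialField" docValues="true"/>
<field name="home_ll" type="location" indexed="true" stored="false"/>
```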



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11304) Exception while returning document if LatLonPointSpatialField field is not stored

2017-12-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280855#comment-16280855
 ] 

David Smiley commented on SOLR-11304:
-

Can you please provide a patch updated for master (or alternatively a PR) to 
SOLR-11731 which I just filed?

> Exception while returning document if LatLonPointSpatialField field is not 
> stored
> -
>
> Key: SOLR-11304
> URL: https://issues.apache.org/jira/browse/SOLR-11304
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>Affects Versions: 6.6, 7.0
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
> Fix For: 7.2
>
> Attachments: SOLR-11304.patch
>
>
> NullPointerException while retrieving the document if LatLonPointSpatialField 
> is not stored
> {code:xml}
>  docValues="true"/>
>  stored="false"/>
> {code}
> {code}
> 2017-08-31 12:18:23.368 ERROR (qtp1866850137-23) [   x:latlon] 
> o.a.s.s.HttpSolrCall null:java.lang.NullPointerException
>   at 
> org.apache.solr.search.SolrDocumentFetcher.decorateDocValueFields(SolrDocumentFetcher.java:510)
>   at org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:160)
>   at org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:1)
>   at 
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)
>   at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)
>   at 
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
>   at 
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
>   at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
>   at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:806)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:535)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 273 - Still Failing

2017-12-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/273/

All tests passed

Build Log:
[...truncated 1065 lines...]
   [junit4] JVM J1: stdout was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build/core/test/temp/junit4-J1-20171206_192701_4044992831442732580301.sysout
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] #
   [junit4] # There is insufficient memory for the Java Runtime Environment to 
continue.
   [junit4] # Native memory allocation (mmap) failed to map 1572864 bytes for 
committing reserved memory.
   [junit4] # An error report file with more information is saved as:
   [junit4] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build/core/test/J1/hs_err_pid7234.log
   [junit4] <<< JVM J1: EOF 

   [junit4] JVM J1: stderr was not empty, see: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build/core/test/temp/junit4-J1-20171206_192701_4044736043498099302961.syserr
   [junit4] >>> JVM J1 emitted unexpected output (verbatim) 
   [junit4] Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0xffd0, 1572864, 0) failed; error='Cannot 
allocate memory' (errno=12)
   [junit4] <<< JVM J1: EOF 

[...truncated 705 lines...]
   [junit4] ERROR: JVM J1 ended with an exception, command line: 
/usr/local/asfpackages/java/jdk1.8.0_144/jre/bin/java 
-XX:+HeapDumpOnOutOfMemoryError 
-XX:HeapDumpPath=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/heapdumps
 -ea -esa -Dtests.prefix=tests -Dtests.seed=5AF8949AA6AFADAF -Xmx512M 
-Dtests.iters= -Dtests.verbose=false -Dtests.infostream=false 
-Dtests.codec=random -Dtests.postingsformat=random 
-Dtests.docvaluesformat=random -Dtests.locale=random -Dtests.timezone=random 
-Dtests.directory=random -Dtests.linedocsfile=europarl.lines.txt.gz 
-Dtests.luceneMatchVersion=7.3.0 -Dtests.cleanthreads=perMethod 
-Djava.util.logging.config.file=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/tools/junit4/logging.properties
 -Dtests.nightly=false -Dtests.weekly=false -Dtests.monster=false 
-Dtests.slow=true -Dtests.asserts=true -Dtests.multiplier=2 -DtempDir=./temp 
-Djava.io.tmpdir=./temp 
-Djunit4.tempDir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build/core/test/temp
 -Dcommon.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene 
-Dclover.db.dir=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build/clover/db
 
-Djava.security.policy=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/tools/junit4/tests.policy
 -Dtests.LUCENE_VERSION=7.3.0 -Djetty.testMode=1 -Djetty.insecurerandom=1 
-Dsolr.directoryFactory=org.apache.solr.core.MockDirectoryFactory 
-Djava.awt.headless=true -Djdk.map.althashing.threshold=0 
-Dtests.src.home=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x 
-Djava.security.egd=file:/dev/./urandom 
-Djunit4.childvm.cwd=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/lucene/build/core/test/J1
 -Djunit4.childvm.id=1 -Djunit4.childvm.count=3 -Dtests.leaveTemporary=false 
-Dtests.filterstacks=true 
-Djava.security.manager=org.apache.lucene.util.TestSecurityManager 
-Dfile.encoding=ISO-8859-1 -classpath 

Re: Lucene/Solr 7.2

2017-12-06 Thread Andrzej Białecki

> On 6 Dec 2017, at 18:45, Andrzej Białecki wrote:
> 
> I attached the patch to SOLR-11714, which disables the ‘searchRate’ trigger - 
> if there are no objections I’ll commit it shortly to branch_7_2.


This has been committed now to branch_7_2 and I don’t have any other open 
issues for 7.2. Thanks!


> 
>> On 6 Dec 2017, at 15:51, Andrzej Białecki wrote:
>> 
>> 
>>> On 6 Dec 2017, at 15:35, Andrzej Białecki wrote:
>>> 
>>> SOLR-11458 is committed and resolved - thanks for the patience.
>> 
>> 
>> Actually, one more thing … ;) SOLR-11714 is a more serious bug in a new 
>> feature (searchRate autoscaling trigger). It’s probably best to disable this 
>> feature in 7.2 rather than releasing a broken version, so I’d like to commit 
>> a patch that disables it (plus a note in CHANGES.txt).
>> 
>> 
>>> 
>>> 
>>> 
 On 6 Dec 2017, at 14:02, Adrien Grand wrote:
 
 Thanks for the heads up, Anshum.
 
 This leaves us with only SOLR-11458 to wait for before building a RC 
 (which might be ready but just not marked as resolved).
>>> 
 
 
 On Wed, 6 Dec 2017 at 13:47, Ishan Chattopadhyaya wrote:
 Hi Adrien,
 I'm planning to skip SOLR-11624 for this release (as per my last comment 
 https://issues.apache.org/jira/browse/SOLR-11624?focusedCommentId=16280121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16280121
  
 ).
  If someone has an objection, please let me know; otherwise, please feel 
 free to proceed with the release.
 I'll continue working on it anyway, and shall try to have it ready for the 
 next release.
 Thanks,
 Ishan
 
 On Wed, Dec 6, 2017 at 2:41 PM, Adrien Grand wrote:
 FYI I created the new branch for 7.2, so you will have to backport to this 
 branch. No hurry though, I mostly created the branch so that it's fine to 
 cherry-pick changes that may wait for 7.3 to be released.
 
 On Wed, 6 Dec 2017 at 08:53, Adrien Grand wrote:
 Sorry to hear that Ishan, I hope you are doing better now. +1 to get 
 SOLR-11624 in.
 
 On Wed, 6 Dec 2017 at 07:57, Ishan Chattopadhyaya wrote:
 I was a bit unwell over the weekend and yesterday; I'm working on a very 
 targeted fix for SOLR-11624 right now; I expect it to take another 5-6 
 hours.
 Is that fine with you, Adrien? If not, please go ahead with the release, 
 and I'll volunteer later for a bugfix release for this after 7.2 is out.
 
 On Wed, Dec 6, 2017 at 3:25 AM, Adrien Grand wrote:
 Fine with me.
 
 
 On Tue, 5 Dec 2017 at 22:34, Varun Thacker wrote:
 Hi Adrien,
 
 I'd like to commit SOLR-11590. The issue had a patch a couple of weeks ago 
 and has been reviewed but never got committed. I've run all the tests 
 twice as well to verify.
 
 On Tue, Dec 5, 2017 at 9:08 AM, Andrzej Białecki wrote:
 
> On 5 Dec 2017, at 18:05, Adrien Grand wrote:
> 
> Andrzej, ok to merge since it is a bug fix. Since we're close to the RC 
> build, maybe try to get someone familiar with the code to review it to 
> make sure it doesn't have unexpected side-effects?
 
 Sure I’ll do this - thanks!
 
> 
> On Tue, 5 Dec 2017 at 17:57, Andrzej Białecki wrote:
> Adrien,
> 
> If it’s ok I would also like to merge SOLR-11458, this significantly 
> reduces the chance of accidental data loss when using MoveReplicaCmd.
> 
>> On 5 Dec 2017, at 14:44, Adrien Grand wrote:
>> 
>> Quick update:
>> 
>> LUCENE-8043, SOLR-9137, SOLR-11662 and SOLR-11687 have been merged, they 
>> will be in 7.2.
>> 
>> LUCENE-8048 and SOLR-11624 are still open. 
>> 
>> LUCENE-8048 looks like it could make things better in some cases but I 
>> don't think it is required for 7.2, so I don't plan to hold the release 
>> on it.
>> 
>> SOLR-11624 

[jira] [Commented] (SOLR-11714) AddReplicaSuggester endless loop

2017-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280831#comment-16280831
 ] 

ASF subversion and git services commented on SOLR-11714:


Commit 034daf424c13b467c16629823bc3b94395c738f3 in lucene-solr's branch 
refs/heads/branch_7_2 from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=034daf4 ]

SOLR-11714: Disable searchRate trigger due to a suggester bug.


> AddReplicaSuggester endless loop
> 
>
> Key: SOLR-11714
> URL: https://issues.apache.org/jira/browse/SOLR-11714
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: AutoScaling
>Affects Versions: 7.2, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Noble Paul
> Attachments: 7.2-disable-search-rate-trigger.diff, SOLR-11714.diff
>
>
> {{SearchRateTrigger}} events are processed by {{ComputePlanAction}} and 
> depending on the condition either a MoveReplicaSuggester or 
> AddReplicaSuggester is selected.
> When {{AddReplicaSuggester}} is selected there's currently a bug in master, 
> due to an API change (Hint.COLL_SHARD should be used instead of Hint.COLL). 
> However, after fixing that bug {{ComputePlanAction}} goes into an endless 
> loop because the suggester endlessly keeps creating new operations.
> Please see the patch that fixes the Hint.COLL_SHARD issue and modifies the 
> unit test to illustrate this failure.






Re: Lucene/Solr 7.2

2017-12-06 Thread Christine Poerschke (BLOOMBERG/ LONDON)
No worries :)

At the risk of causing further confusion, I've added another note to SOLR-9743 
but am not sure how one would most easily move forward with the problem; 
perhaps there is no easy way?

From: dev@lucene.apache.org At: 12/06/17 19:42:30 To: dev@lucene.apache.org
Subject: Re: Lucene/Solr 7.2

Woah, twice the same mistake with different people. Sorry about that!

On Wed, 6 Dec 2017 at 20:28, Anshum Gupta wrote:

You meant Ishan here, right ? :) 

-Anshum


On Dec 6, 2017, at 5:02 AM, Adrien Grand  wrote:
Thanks for the heads up, Anshum.

This leaves us with only SOLR-11458 to wait for before building a RC (which 
might be ready but just not marked as resolved).


On Wed, 6 Dec 2017 at 13:47, Ishan Chattopadhyaya wrote:

Hi Adrien,
I'm planning to skip SOLR-11624 for this release (as per my last comment 
https://issues.apache.org/jira/browse/SOLR-11624?focusedCommentId=16280121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16280121).
 If someone has an objection, please let me know; otherwise, please feel free 
to proceed with the release.
I'll continue working on it anyway, and shall try to have it ready for the next 
release.
Thanks,
Ishan

On Wed, Dec 6, 2017 at 2:41 PM, Adrien Grand  wrote:

FYI I created the new branch for 7.2, so you will have to backport to this 
branch. No hurry though, I mostly created the branch so that it's fine to 
cherry-pick changes that may wait for 7.3 to be released.

On Wed, 6 Dec 2017 at 08:53, Adrien Grand wrote:

Sorry to hear that Ishan, I hope you are doing better now. +1 to get SOLR-11624 
in.

On Wed, 6 Dec 2017 at 07:57, Ishan Chattopadhyaya wrote:

I was a bit unwell over the weekend and yesterday; I'm working on a very 
targeted fix for SOLR-11624 right now; I expect it to take another 5-6 hours.
Is that fine with you, Adrien? If not, please go ahead with the release, and 
I'll volunteer later for a bugfix release for this after 7.2 is out.

On Wed, Dec 6, 2017 at 3:25 AM, Adrien Grand  wrote:


Fine with me. 

On Tue, 5 Dec 2017 at 22:34, Varun Thacker wrote:

Hi Adrien,

I'd like to commit SOLR-11590. The issue had a patch a couple of weeks ago and 
has been reviewed but never got committed. I've run all the tests twice as well 
to verify.

On Tue, Dec 5, 2017 at 9:08 AM, Andrzej Białecki 
 wrote:


On 5 Dec 2017, at 18:05, Adrien Grand  wrote:
Andrzej, ok to merge since it is a bug fix. Since we're close to the RC build, 
maybe try to get someone familiar with the code to review it to make sure it 
doesn't have unexpected side-effects?


Sure I’ll do this - thanks!


On Tue, 5 Dec 2017 at 17:57, Andrzej Białecki wrote:

Adrien,

If it’s ok I would also like to merge SOLR-11458, this significantly reduces 
the chance of accidental data loss when using MoveReplicaCmd.


On 5 Dec 2017, at 14:44, Adrien Grand  wrote:
Quick update:

LUCENE-8043, SOLR-9137, SOLR-11662 and SOLR-11687 have been merged, they will 
be in 7.2.

LUCENE-8048 and SOLR-11624 are still open. 

LUCENE-8048 looks like it could make things better in some cases but I don't 
think it is required for 7.2, so I don't plan to hold the release on it.

SOLR-11624 looks bad, I'll wait for it.

On Tue, 5 Dec 2017 at 07:45, Noble Paul wrote:

+1 this is over due

On Dec 4, 2017 16:38, "Varun Thacker"  wrote:

+1 Adrien ! Thanks for taking it up

On Fri, Dec 1, 2017 at 9:05 AM, Christine Poerschke (BLOOMBERG/ QUEEN VIC) 
 wrote:

I'd like to see https://issues.apache.org/jira/browse/SOLR-9137 included in the 
release. Hoping to commit it on/by Tuesday.

Christine

From: dev@lucene.apache.org At: 12/01/17 10:11:59 To: dev@lucene.apache.org
Subject: Lucene/Solr 7.2

Hello,

It's been more than 6 weeks since we released 7.1 and we accumulated a good set 
of changes, so I think we should release Lucene/Solr 7.2.0.

There is one change that I would like to have before building a RC: 
LUCENE-8043[1], which looks like it is almost ready to be merged. Please let me 
know if there are any other changes that should make it to the release.

I volunteer to be the release manager. I'm currently thinking of building the 
first release candidate next Wednesday, December 6th.

[1] https://issues.apache.org/jira/browse/LUCENE-8043




[jira] [Commented] (SOLR-11078) Solr query performance degradation since Solr 6.4.2

2017-12-06 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280813#comment-16280813
 ] 

Markus Jelsma commented on SOLR-11078:
--

I agree with Yonik: Strings are no substitute for numbers in some or many 
cases. We have numeric fields that we use for exact queries, ranges, and in some 
cases arithmetic in boosts.

It would be bad if in the future, with Trie* definitively gone, we would have 
to index the same value as Point, and as String.

We know about this; novice users don't. 
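
To illustrate the concern, a hedged sketch of the dual indexing users might be forced into (field names are invented for this example):

```xml
<!-- Hypothetical: indexing the same value both as a Point (for ranges and
     arithmetic in boosts) and as a String (for fast exact/term lookups),
     with a copyField keeping the two in sync. -->
<field name="price" type="pdouble" indexed="true" docValues="true"/>
<field name="price_str" type="string" indexed="true" stored="false"/>
<copyField source="price" dest="price_str"/>
```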

> Solr query performance degradation since Solr 6.4.2
> ---
>
> Key: SOLR-11078
> URL: https://issues.apache.org/jira/browse/SOLR-11078
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: search, Server
>Affects Versions: 6.6, 7.1
> Environment: * CentOS 7.3 (Linux zasolrm03 3.10.0-514.26.2.el7.x86_64 
> #1 SMP Tue Jul 4 15:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux)
> * Java HotSpot(TM) 64-Bit Server VM (build 25.121-b13, mixed mode)
> * 4 CPU, 10GB RAM
> Running Solr 6.6.0 with the following JVM settings:
> java -server -Xms4G -Xmx4G -XX:NewRatio=3 -XX:SurvivorRatio=4 
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8 -XX:+UseConcMarkSweepGC 
> -XX:+UseParNewGC -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 
> -XX:+CMSScavengeBeforeRemark -XX:PretenureSizeThreshold=64m 
> -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=50 
> -XX:CMSMaxAbortablePrecleanTime=6000 -XX:+CMSParallelRemarkEnabled 
> -XX:+ParallelRefProcEnabled -verbose:gc -XX:+PrintHeapAtGC 
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps 
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime 
> -Xloggc:/home/prodza/solrserver/../logs/solr_gc.log -XX:+UseGCLogFileRotation 
> -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M 
> -Dsolr.log.dir=/home/prodza/solrserver/../logs -Djetty.port=8983 
> -DSTOP.PORT=7983 -DSTOP.KEY=solrrocks -Duser.timezone=SAST 
> -Djetty.home=/home/prodza/solrserver/server 
> -Dsolr.solr.home=/home/prodza/solrserver/../solr 
> -Dsolr.install.dir=/home/prodza/solrserver 
> -Dlog4j.configuration=file:/home/prodza/solrserver/../config/log4j.properties 
> -Xss256k -Xss256k -Dsolr.log.muteconsole 
> -XX:OnOutOfMemoryError=/home/prodza/solrserver/bin/oom_solr.sh 8983 
> /home/prodza/solrserver/../logs -jar start.jar --module=http
>Reporter: bidorbuy
> Attachments: compare-6.4.2-6.6.0.png, core-admin-tradesearch.png, 
> jvm-stats.png, schema.xml, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, solr-6-4-2-schema.xml, solr-6-4-2-solrconfig.xml, 
> solr-7-1-0-managed-schema, solr-7-1-0-solrconfig.xml, solr-71-vs-64.png, 
> solr-sample-warning-log.txt, solr.in.sh, solrconfig.xml
>
>
> We are currently running 2 separate Solr servers - refer to screenshots:
> * zasolrm02 is running on Solr 6.4.2
> * zasolrm03 is running on Solr 6.6.0
> Both servers have the same OS / JVM configuration and are using their own 
> indexes. We round-robin load-balance through our Tomcats and notice that 
> since Solr 6.4.2 performance has dropped. We have two indices per server, 
> "searchsuggestions" and "tradesearch". There is a noticeable drop in 
> performance since Solr 6.4.2.
> I am not sure if this is perhaps related to metrics collection or other 
> underlying changes, or whether other high-transaction users have noticed 
> similar issues.
> *1) zasolrm03 (6.6.0) is almost twice as slow on the tradesearch index:*
> !compare-6.4.2-6.6.0.png!
> *2) This is also visible in the searchsuggestion index:*
> !screenshot-1.png!
> *3) The Tradesearch index shows the biggest difference:*
> !screenshot-2.png!



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size

2017-12-06 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280799#comment-16280799
 ] 

Anshum Gupta commented on SOLR-11277:
-

[~rupsshankar] One thing that would be good to add here as a test: send 
redundant delete-by-ID requests for the same doc, say 5k times, and verify 
that the auto-commit is triggered.

> Add auto hard commit setting based on tlog size
> ---
>
> Key: SOLR-11277
> URL: https://issues.apache.org/jira/browse/SOLR-11277
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Rupa Shankar
>Assignee: Anshum Gupta
> Attachments: max_size_auto_commit.patch
>
>
> When indexing documents of variable sizes and at variable schedules, it can 
> be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. 
> We’ve had some occurrences of really huge tlogs, resulting in serious issues, 
> so in an attempt to avoid this, it would be great to have a “maxSize” setting 
> based on the tlog size on disk. 
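A size-based trigger like the one requested could sit alongside the existing maxDocs/maxTime checks. As a rough illustration only (the class and method names below are invented for this sketch and are not Solr's actual API), the decision reduces to:

```java
/**
 * Illustrative sketch only: a hard-commit policy that also honors a
 * "maxSize" threshold on the tlog. Names are assumptions, not Solr's
 * actual implementation.
 */
public class TlogCommitPolicy {
    private final long maxSizeBytes; // hypothetical new "maxSize" setting; <= 0 disables
    private final int maxDocs;       // existing maxDocs-style setting; <= 0 disables
    private final long maxTimeMs;    // existing maxTime-style setting; <= 0 disables

    public TlogCommitPolicy(long maxSizeBytes, int maxDocs, long maxTimeMs) {
        this.maxSizeBytes = maxSizeBytes;
        this.maxDocs = maxDocs;
        this.maxTimeMs = maxTimeMs;
    }

    /** True if any enabled threshold has been crossed since the last hard commit. */
    public boolean shouldCommit(long tlogSizeBytes, int uncommittedDocs, long msSinceLastCommit) {
        return (maxSizeBytes > 0 && tlogSizeBytes >= maxSizeBytes)
            || (maxDocs > 0 && uncommittedDocs >= maxDocs)
            || (maxTimeMs > 0 && msSinceLastCommit >= maxTimeMs);
    }
}
```

With only the size threshold enabled, a burst of unusually large documents would trigger a hard commit as soon as the tlog on disk crosses the limit, regardless of how few documents were indexed.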



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9743) An UTILIZENODE command

2017-12-06 Thread Christine Poerschke (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280797#comment-16280797
 ] 

Christine Poerschke commented on SOLR-9743:
---

Thanks Adrien for removing the 8.0 changes from the 7.x change log. I hadn't 
even seen the 8.0 changes :-) but was referring to these two diff edits:
* 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blobdiff;f=solr/CHANGES.txt;h=38ed4ba5c9ade014ab4db33d2850a28f49acc98e;hp=d5b953dad5359e3bdfc285bbd3ebf07d10553ee1;hb=c62d538;hpb=c51e34905037a44347530304d2be5b23e7095348
 added the (now removed) 8.0 section, but it also removed a bunch of other 
tickets in the 7.2.0 section (and it even seems to edit the 7.1.0 section, 
which I thought would never change post-release?).
* 
https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;a=blobdiff;f=solr/CHANGES.txt;h=19dc7c5dac0056084367f171bb39c855b35755fc;hp=38ed4ba5c9ade014ab4db33d2850a28f49acc98e;hb=093f6c2;hpb=c62d5384d2393d861b0ae498b094de80eb0caee6
 reinstates only three of the removed tickets.


> An UTILIZENODE command
> --
>
> Key: SOLR-9743
> URL: https://issues.apache.org/jira/browse/SOLR-9743
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.2
>
>
> The command would accept one or more nodes and create appropriate replicas 
> based on some strategy.
> The params are:
> * node: (required && multi-valued) The nodes to be deployed.
> * collection: (optional) The collection to which the node should be added. 
> If this parameter is not passed, try to assign to all collections.
> example:
> {code}
> action=UTILIZENODE&node=gettingstarted
> {code}
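Since UTILIZENODE is just another Collections API action, the call above boils down to a query string with a repeatable node parameter. A minimal client-side sketch (the helper class below is invented for illustration and is not part of Solr):

```java
/**
 * Illustrative helper (not part of Solr) that builds the UTILIZENODE
 * query string with one or more node params and an optional collection.
 */
public class UtilizeNodeRequest {
    static String queryString(String collection, String... nodes) {
        StringBuilder sb = new StringBuilder("/admin/collections?action=UTILIZENODE");
        for (String node : nodes) {
            sb.append("&node=").append(node); // "node" may be passed multiple times
        }
        if (collection != null) {
            sb.append("&collection=").append(collection); // optional: target one collection
        }
        return sb.toString();
    }
}
```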



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11126) Node-level health check handler

2017-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280784#comment-16280784
 ] 

ASF subversion and git services commented on SOLR-11126:


Commit 59df6026ad0fdbaef186739fcd827b302bfed9bb in lucene-solr's branch 
refs/heads/branch_7_2 from [~anshumg]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=59df602 ]

SOLR-11126: Remove wrong change log entry from 7.1 section


> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.2

2017-12-06 Thread Adrien Grand
Whoa, the same mistake twice with different people. Sorry about that!

On Wed, Dec 6, 2017 at 20:28, Anshum Gupta wrote:

> You meant Ishan here, right? :)
>
> -Anshum
>
>
>
> On Dec 6, 2017, at 5:02 AM, Adrien Grand  wrote:
>
> Thanks for the heads up, Anshum.
>
> This leaves us with only SOLR-11458 to wait for before building a RC
> (which might be ready but just not marked as resolved).
>
>
> On Wed, Dec 6, 2017 at 13:47, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> Hi Adrien,
>> I'm planning to skip SOLR-11624 for this release (as per my last comment
>> https://issues.apache.org/jira/browse/SOLR-11624?focusedCommentId=16280121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16280121).
>> If someone has an objection, please let me know; otherwise, please feel
>> free to proceed with the release.
>> I'll continue working on it anyway, and shall try to have it ready for
>> the next release.
>> Thanks,
>> Ishan
>>
>> On Wed, Dec 6, 2017 at 2:41 PM, Adrien Grand  wrote:
>>
>>> FYI I created the new branch for 7.2, so you will have to backport to
>>> this branch. No hurry though, I mostly created the branch so that it's fine
>>> to cherry-pick changes that may wait for 7.3 to be released.
>>>
>>> On Wed, Dec 6, 2017 at 08:53, Adrien Grand wrote:
>>>
 Sorry to hear that Ishan, I hope you are doing better now. +1 to get
 SOLR-11624 in.

 On Wed, Dec 6, 2017 at 07:57, Ishan Chattopadhyaya <
 ichattopadhy...@gmail.com> wrote:

> I was a bit unwell over the weekend and yesterday; I'm working on a
> very targeted fix for SOLR-11624 right now; I expect it to take another 
> 5-6
> hours.
> Is that fine with you, Adrien? If not, please go ahead with the
> release, and I'll volunteer later for a bugfix release for this after 7.2
> is out.
>
> On Wed, Dec 6, 2017 at 3:25 AM, Adrien Grand 
> wrote:
>
>> Fine with me.
>>
>> On Tue, Dec 5, 2017 at 22:34, Varun Thacker wrote:
>>
>>> Hi Adrien,
>>>
>>> I'd like to commit SOLR-11590. The issue had a patch a couple of weeks 
>>> ago and has been reviewed but never got committed. I've run all the 
>>> tests twice as well to verify.
>>>
>>> On Tue, Dec 5, 2017 at 9:08 AM, Andrzej Białecki <
>>> andrzej.biale...@lucidworks.com> wrote:
>>>

 On 5 Dec 2017, at 18:05, Adrien Grand  wrote:

 Andrzej, ok to merge since it is a bug fix. Since we're close to
 the RC build, maybe try to get someone familiar with the code to 
 review it
 to make sure it doesn't have unexpected side-effects?


 Sure I’ll do this - thanks!


 On Tue, Dec 5, 2017 at 17:57, Andrzej Białecki <
 andrzej.biale...@lucidworks.com> wrote:

> Adrien,
>
> If it’s ok I would also like to merge SOLR-11458, this
> significantly reduces the chance of accidental data loss when using
> MoveReplicaCmd.
>
> On 5 Dec 2017, at 14:44, Adrien Grand  wrote:
>
> Quick update:
>
> LUCENE-8043, SOLR-9137, SOLR-11662 and SOLR-11687 have been
> merged, they will be in 7.2.
>
> LUCENE-8048 and SOLR-11624 are still open.
>
> LUCENE-8048 looks like it could make things better in some cases
> but I don't think it is required for 7.2, so I don't plan to hold the
> release on it.
>
> SOLR-11624 looks bad, I'll wait for it.
>
> On Tue, Dec 5, 2017 at 07:45, Noble Paul wrote:
>
>> +1 this is over due
>>
>> On Dec 4, 2017 16:38, "Varun Thacker"  wrote:
>>
>> +1 Adrien ! Thanks for taking it up
>>
>> On Fri, Dec 1, 2017 at 9:05 AM, Christine Poerschke (BLOOMBERG/
>> QUEEN VIC)  wrote:
>>
>>> I'd like to see https://issues.apache.org/jira/browse/SOLR-9137
>>> included in the release. Hoping to commit it on/by Tuesday.
>>>
>>> Christine
>>>
>>> From: dev@lucene.apache.org At: 12/01/17 10:11:59
>>> To: dev@lucene.apache.org
>>> Subject: Lucene/Solr 7.2
>>>
>>> Hello,
>>>
>>> It's been more than 6 weeks since we released 7.1 and we
>>> accumulated a good set of changes, so I think we should release 
>>> Lucene/Solr
>>> 7.2.0.
>>>
>>> There is one change that I would like to have before building a
>>> RC: LUCENE-8043[1], which looks like it is almost ready to be 
>>> merged.
>>> Please let me know if 

Re: branch_7_2 created

2017-12-06 Thread Anshum Gupta
I’m going to go ahead and remove the wrong change log entry from the 7.1 
section for SOLR-11126. That commit never made it to a release and I shouldn’t 
have added it there.
I’ve already cleaned up master and branch_7x but I’ll also do that for the 7.2 
branch.

-Anshum



> On Dec 6, 2017, at 8:44 AM, Adrien Grand  wrote:
> 
> Thank you for the note Christine. It looks like it was added by mistake when 
> backporting SOLR-9743. I will remove it.
> 
> On Wed, Dec 6, 2017 at 17:12, Cassandra Targett wrote:
> I just noticed that 'solr/CHANGES.txt' on branch_7_2 still has a section for 
> 8.0.0 (but 'lucene/CHANGES.txt' is fine) - did one of the 
> "make-a-release-branch" steps not finish correctly perhaps?
> 
> On Wed, Dec 6, 2017 at 9:18 AM, Steve Rowe  > wrote:
> Hi Adrien,
> 
> I set up 7.2 jobs on ASF Jenkins.
> 
> --
> Steve
> www.lucidworks.com 
> 
> > On Dec 6, 2017, at 5:53 AM, Uwe Schindler  > > wrote:
> >
> > Hi,
> >
> > Will do Policeman in the next meeting break!
> >
> > Uwe
> >
> > -
> > Uwe Schindler
> > Achterdiek 19, D-28357 Bremen
> > http://www.thetaphi.de 
> > eMail: u...@thetaphi.de 
> >
> > From: Adrien Grand [mailto:jpou...@gmail.com ]
> > Sent: Wednesday, December 6, 2017 10:02 AM
> > To: Lucene Dev >
> > Subject: Re: branch_7_2 created
> >
> > Uwe, Steve, could you help me set up Jenkins jobs for the 7.2 branch?
> >
> > On Wed, Dec 6, 2017 at 10:08, Adrien Grand wrote:
> >> Hi all,
> >>
> >> I just created the 7.2 branch. You may now merge again against branch_7x 
> >> for changes that can wait for 7.3 to be released. If you are working on a 
> >> 7.2 bugfix, do not forget to backport to this new branch.
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
> 
> 
> 



signature.asc
Description: Message signed with OpenPGP


[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 330 - Still Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/330/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestMultiMMap

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\tempDir-006

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\testImplementations-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\testImplementations-006

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\testSeekZero-007:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\testSeekZero-007
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\tempDir-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\tempDir-006
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\testImplementations-006:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\testImplementations-006
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\testSeekZero-007:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.store.TestMultiMMap_D4B5C86CD2D0FF53-001\testSeekZero-007

at __randomizedtesting.SeedInfo.seed([D4B5C86CD2D0FF53]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.analytics.value.CastingBooleanValueTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.value.CastingBooleanValueTest_CEA754D47C3F832F-001\init-core-data-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.value.CastingBooleanValueTest_CEA754D47C3F832F-001\init-core-data-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.value.CastingBooleanValueTest_CEA754D47C3F832F-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.value.CastingBooleanValueTest_CEA754D47C3F832F-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-analytics\test\J0\temp\solr.analytics.value.CastingBooleanValueTest_CEA754D47C3F832F-001\init-core-data-001:
 

[jira] [Commented] (SOLR-11126) Node-level health check handler

2017-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280772#comment-16280772
 ] 

ASF subversion and git services commented on SOLR-11126:


Commit b4d5ea33ad41c30b6684c9adbf4bdd919d46cd7b in lucene-solr's branch 
refs/heads/branch_7x from [~anshumg]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b4d5ea3 ]

SOLR-11126: Remove wrong change log entry from 7.1 section


> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8081) Allow IndexWriter to opt out of flushing on indexing threads

2017-12-06 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8081:

Attachment: LUCENE-8081.patch

new patch with updated javadocs 

> Allow IndexWriter to opt out of flushing on indexing threads
> 
>
> Key: LUCENE-8081
> URL: https://issues.apache.org/jira/browse/LUCENE-8081
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
> Attachments: LUCENE-8081.patch, LUCENE-8081.patch
>
>
> Today indexing / updating threads always help out flushing. Experts might 
> want indexing threads to only help flushing if flushes are falling behind. 
> Maybe we can allow an expert flag in IWC to opt out of this behavior.
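The proposed opt-out can be pictured as a per-indexing-thread decision: always help flush (today's behavior), or help only once the flush backlog exceeds some bound. A hedged sketch (the flag and threshold names below are assumptions, not Lucene's IndexWriterConfig API):

```java
/** Illustrative sketch only; not Lucene's actual flushing logic. */
public class FlushHelpPolicy {
    private final boolean flushOnIndexingThreads; // hypothetical expert flag in IWC
    private final int maxQueuedFlushes;           // "falling behind" threshold (assumption)

    public FlushHelpPolicy(boolean flushOnIndexingThreads, int maxQueuedFlushes) {
        this.flushOnIndexingThreads = flushOnIndexingThreads;
        this.maxQueuedFlushes = maxQueuedFlushes;
    }

    /** Should an indexing thread pick up a pending flush right now? */
    public boolean indexingThreadShouldFlush(int queuedFlushes) {
        if (flushOnIndexingThreads) {
            return true; // current behavior: indexing threads always help out
        }
        // opted out: only help once flushing is falling behind
        return queuedFlushes > maxQueuedFlushes;
    }
}
```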



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11126) Node-level health check handler

2017-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280760#comment-16280760
 ] 

ASF subversion and git services commented on SOLR-11126:


Commit cd30dabe37e1a9dce06ada6d4c46ad137add2b6c in lucene-solr's branch 
refs/heads/master from [~anshumg]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cd30dab ]

SOLR-11126: Remove wrong change log entry from 7.1 section


> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 7.2

2017-12-06 Thread Anshum Gupta
You meant Ishan here, right? :)

-Anshum



> On Dec 6, 2017, at 5:02 AM, Adrien Grand  wrote:
> 
> Thanks for the heads up, Anshum.
> 
> This leaves us with only SOLR-11458 to wait for before building a RC (which 
> might be ready but just not marked as resolved).
> 
> 
> On Wed, Dec 6, 2017 at 13:47, Ishan Chattopadhyaya wrote:
> Hi Adrien,
> I'm planning to skip SOLR-11624 for this release (as per my last comment 
> https://issues.apache.org/jira/browse/SOLR-11624?focusedCommentId=16280121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16280121
>  
> ).
>  If someone has an objection, please let me know; otherwise, please feel free 
> to proceed with the release.
> I'll continue working on it anyway, and shall try to have it ready for the 
> next release.
> Thanks,
> Ishan
> 
> On Wed, Dec 6, 2017 at 2:41 PM, Adrien Grand  > wrote:
> FYI I created the new branch for 7.2, so you will have to backport to this 
> branch. No hurry though, I mostly created the branch so that it's fine to 
> cherry-pick changes that may wait for 7.3 to be released.
> 
> On Wed, Dec 6, 2017 at 08:53, Adrien Grand wrote:
> Sorry to hear that Ishan, I hope you are doing better now. +1 to get 
> SOLR-11624 in.
> 
> On Wed, Dec 6, 2017 at 07:57, Ishan Chattopadhyaya wrote:
> I was a bit unwell over the weekend and yesterday; I'm working on a very 
> targeted fix for SOLR-11624 right now; I expect it to take another 5-6 hours.
> Is that fine with you, Adrien? If not, please go ahead with the release, and 
> I'll volunteer later for a bugfix release for this after 7.2 is out.
> 
> On Wed, Dec 6, 2017 at 3:25 AM, Adrien Grand  > wrote:
> Fine with me.
> 
> 
> On Tue, Dec 5, 2017 at 22:34, Varun Thacker wrote:
> Hi Adrien,
> 
> I'd like to commit SOLR-11590. The issue had a patch a couple of weeks ago 
> and has been reviewed but never got committed. I've run all the tests twice 
> as well to verify.
> 
> On Tue, Dec 5, 2017 at 9:08 AM, Andrzej Białecki 
> > 
> wrote:
> 
>> On 5 Dec 2017, at 18:05, Adrien Grand > > wrote:
>> 
>> Andrzej, ok to merge since it is a bug fix. Since we're close to the RC 
>> build, maybe try to get someone familiar with the code to review it to make 
>> sure it doesn't have unexpected side-effects?
> 
> Sure I’ll do this - thanks!
> 
>> 
>> On Tue, Dec 5, 2017 at 17:57, Andrzej Białecki wrote:
>> Adrien,
>> 
>> If it’s ok I would also like to merge SOLR-11458, this significantly reduces 
>> the chance of accidental data loss when using MoveReplicaCmd.
>> 
>>> On 5 Dec 2017, at 14:44, Adrien Grand >> > wrote:
>>> 
>>> Quick update:
>>> 
>>> LUCENE-8043, SOLR-9137, SOLR-11662 and SOLR-11687 have been merged, they 
>>> will be in 7.2.
>>> 
>>> LUCENE-8048 and SOLR-11624 are still open.
>>> 
>>> LUCENE-8048 looks like it could make things better in some cases but I 
>>> don't think it is required for 7.2, so I don't plan to hold the release on 
>>> it.
>>> 
>>> SOLR-11624 looks bad, I'll wait for it.
>>> 
>>> On Tue, Dec 5, 2017 at 07:45, Noble Paul wrote:
>>> +1 this is over due
>>> 
>>> On Dec 4, 2017 16:38, "Varun Thacker" >> > wrote:
>>> +1 Adrien ! Thanks for taking it up
>>> 
>>> On Fri, Dec 1, 2017 at 9:05 AM, Christine Poerschke (BLOOMBERG/ QUEEN VIC) 
>>> > wrote:
>>> I'd like to see https://issues.apache.org/jira/browse/SOLR-9137 
>>>  included in the release. 
>>> Hoping to commit it on/by Tuesday.
>>> 
>>> Christine
>>> 
>>> From: dev@lucene.apache.org  At: 12/01/17 
>>> 10:11:59
>>> To:  dev@lucene.apache.org 
>>> Subject: Lucene/Solr 7.2
>>> Hello,
>>> 
>>> It's been more than 6 weeks since we released 7.1 and we accumulated a good 
>>> set of changes, so I think we should release Lucene/Solr 7.2.0.
>>> 
>>> There is one change that I would like to have before building a RC: 
>>> LUCENE-8043[1], which looks like it is almost ready to be merged. Please 
>>> let me know if there are any other changes that should make it to the 
>>> release.
>>> 
>>> I 

Re: branch_7_2 created

2017-12-06 Thread Anshum Gupta
Adrien, you meant “Cassandra”, right? :)

-Anshum



> On Dec 6, 2017, at 8:44 AM, Adrien Grand  wrote:
> 
> Thank you for the note Christine. It looks like it was added by mistake when 
> backporting SOLR-9743. I will remove it.
> 
> On Wed, Dec 6, 2017 at 17:12, Cassandra Targett wrote:
> I just noticed that 'solr/CHANGES.txt' on branch_7_2 still has a section for 
> 8.0.0 (but 'lucene/CHANGES.txt' is fine) - did one of the 
> "make-a-release-branch" steps not finish correctly perhaps?
> 
> On Wed, Dec 6, 2017 at 9:18 AM, Steve Rowe  > wrote:
> Hi Adrien,
> 
> I set up 7.2 jobs on ASF Jenkins.
> 
> --
> Steve
> www.lucidworks.com 
> 
> > On Dec 6, 2017, at 5:53 AM, Uwe Schindler  > > wrote:
> >
> > Hi,
> >
> > Will do Policeman in the next meeting break!
> >
> > Uwe
> >
> > -
> > Uwe Schindler
> > Achterdiek 19, D-28357 Bremen
> > http://www.thetaphi.de 
> > eMail: u...@thetaphi.de 
> >
> > From: Adrien Grand [mailto:jpou...@gmail.com ]
> > Sent: Wednesday, December 6, 2017 10:02 AM
> > To: Lucene Dev >
> > Subject: Re: branch_7_2 created
> >
> > Uwe, Steve, could you help me set up Jenkins jobs for the 7.2 branch?
> >
> > On Wed, Dec 6, 2017 at 10:08, Adrien Grand wrote:
> >> Hi all,
> >>
> >> I just created the 7.2 branch. You may now merge again against branch_7x 
> >> for changes that can wait for 7.3 to be released. If you are working on a 
> >> 7.2 bugfix, do not forget to backport to this new branch.
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org 
> 
> For additional commands, e-mail: dev-h...@lucene.apache.org 
> 
> 
> 



signature.asc
Description: Message signed with OpenPGP


[jira] [Comment Edited] (SOLR-11126) Node-level health check handler

2017-12-06 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280672#comment-16280672
 ] 

Anshum Gupta edited comment on SOLR-11126 at 12/6/17 7:21 PM:
--

[~ctargett] sorry about missing the comments here. Yes, it shouldn't have been 
in 7.1 as this isn't back ported yet. I'll work on it this week and wrap it up.

I'm not sure what's a good way to fix it now that 7.1 has been released. I'm 
inclined to remove it from the change log on master so it doesn't make it into 
the 7.2 log. Suggestions?


was (Author: anshumg):
[~ctargett] sorry about missing the comments here. Yes, it shouldn't have been 
in 7.1 as this isn't back ported yet. I'll work on it this week and wrap it up.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #:

2017-12-06 Thread arjenm
Github user arjenm commented on the pull request:


https://github.com/apache/lucene-solr/commit/962313b83ba9c69379e1f84dffc881a361713ce9#commitcomment-26095567
  
In solr/core/src/java/org/apache/solr/util/SolrPluginUtils.java:
In solr/core/src/java/org/apache/solr/util/SolrPluginUtils.java on line 731:
Unfortunately, since I don't have the lucene-solr project checked out and set 
up, submitting a proper patch is a bit too large an effort. Luckily the 
change I made was really small, so hopefully you can do it? :)

I can live with missing out on the honor of submitting a patch to Lucene ;)

In my first comment I mentioned the fix, but here it is in a bit more detail:
Replace this line in 
SolrPluginUtils#flattenBooleanQuery(BooleanQuery.Builder, BooleanQuery, float)

https://github.com/apache/lucene-solr/blob/187849f9b67ba6b7e6c2d06cc25359bf53b2ce9f/solr/core/src/java/org/apache/solr/util/SolrPluginUtils.java#L747

With the properly formatted version of this:
```java
if (boost != 1f) {
  to.add(new BoostQuery(cq, boost), clause.getOccur());
} else {
  to.add(clause);
}
```



---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+32) - Build # 949 - Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/949/
Java: 64bit/jdk-10-ea+32 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamingTest

Error Message:
Error from server at http://127.0.0.1:39689/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:39689/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([3680B28B37020435]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.io.stream.StreamingTest.configureCluster(StreamingTest.java:102)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.AddReplicaTest.test

Error Message:
core_node6:{"core":"addreplicatest_coll_shard1_replica_n5","base_url":"http://127.0.0.1:44493/solr","node_name":"127.0.0.1:44493_solr","state":"active","type":"NRT"}

Stack Trace:
java.lang.AssertionError: 
core_node6:{"core":"addreplicatest_coll_shard1_replica_n5","base_url":"http://127.0.0.1:44493/solr","node_name":"127.0.0.1:44493_solr","state":"active","type":"NRT"}
at 
__randomizedtesting.SeedInfo.seed([3B6D0AB9EA4AA53:8BE2EF713058C7AB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.AddReplicaTest.test(AddReplicaTest.java:84)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

[jira] [Commented] (SOLR-11126) Node-level health check handler

2017-12-06 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280672#comment-16280672
 ] 

Anshum Gupta commented on SOLR-11126:
-

[~ctargett], sorry about missing the comments here. Yes, it shouldn't have been 
in 7.1 as this isn't backported yet. I'll work on it this week and wrap it up.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11711) Improve mincount & limit usage in pivot & field facets

2017-12-06 Thread Houston Putman (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280669#comment-16280669
 ] 

Houston Putman commented on SOLR-11711:
---

Hey [~hossman], any thoughts on this patch? Particularly the field facet part. 
You mentioned in [this 
comment|https://issues.apache.org/jira/browse/SOLR-8988?focusedCommentId=15241993=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15241993]
 why you would be wary of changing the behavior due to the comment in the code. 
[~k317h] and I believe that fixing the {{last = Math.max(0, initialMincount 
- 1);}} line will address the performance degradation people were seeing with 
the {{facet.distrib.mco}} option enabled. And we can't find another reason why 
additional refinement would be needed.

> Improve mincount & limit usage in pivot & field facets
> --
>
> Key: SOLR-11711
> URL: https://issues.apache.org/jira/browse/SOLR-11711
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: faceting
>Affects Versions: master (8.0)
>Reporter: Houston Putman
>  Labels: pull-request-available
> Fix For: 5.6, 6.7, 7.2
>
>
> Currently while sending pivot facet requests to each shard, the 
> {{facet.pivot.mincount}} is set to {{0}} if the facet is sorted by count with 
> a specified limit > 0. However with a mincount of 0, the pivot facet will use 
> exponentially more wasted memory for every pivot field added. This is because 
> there will be a total of {{limit^(# of pivots)}} pivot values created in 
> memory, even though the vast majority of them will have counts of 0, and are 
> therefore useless.
> Imagine the scenario of a pivot facet with 3 levels, and 
> {{facet.limit=1000}}. There will be a billion pivot values created, and there 
> will almost definitely be nowhere near a billion pivot values with counts > 0.
> This is likely due to the reasoning mentioned in [this comment in the original 
> distributed pivot facet 
> ticket|https://issues.apache.org/jira/browse/SOLR-2894?focusedCommentId=13979898].
>  Basically it was thought that the refinement code would need to know that a 
> count was 0 for a shard so that a refinement request wasn't sent to that 
> shard. However this is checked in the code, [in this part of the refinement 
> candidate 
> checking|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.1.0/solr/core/src/java/org/apache/solr/handler/component/PivotFacetField.java#L275].
>  Therefore if the {{pivot.mincount}} was set to 1, the non-existent values 
> would either:
> * Not be known, because the {{facet.limit}} was smaller than the number of 
> facet values with positive counts. This isn't an issue, because they wouldn't 
> have been returned with {{pivot.mincount}} set to 0.
> * Would be known, because the {{facet.limit}} would be larger than the number 
> of facet values returned; therefore, this conditional would return false 
> (since we are only talking about pivot facets sorted by count).
> The solution is to use the same pivot mincount as would be used if no limit 
> was specified. 
> This also relates to a similar problem in field faceting that was "fixed" in 
> [SOLR-8988|https://issues.apache.org/jira/browse/SOLR-8988#13324]. The 
> solution was to add a flag, {{facet.distrib.mco}}, which would enable not 
> choosing a mincount of 0 when unnecessary. Since this flag can only increase 
> performance and doesn't break any queries, I have removed it as an option and 
> changed the code to always use the feature. 
> There was one code change necessary to fix the MCO option, since the 
> refinement candidate selection logic had a bug. The bug only occurred with a 
> minCount > 0 and limit > 0 specified. When a shard replied with less than the 
> limit requested, it would assume the next maximum count on that shard was the 
> {{mincount}}, where it would actually be the {{mincount-1}} (because a facet 
> value with a count of mincount would have been returned). Therefore the MCO 
> didn't cause any errors, but with a mincount of 1 the refinement logic always 
> assumed that the shard had more values with a count of 1.
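To make the memory growth concrete, here is a standalone sketch (illustrative code, not part of Solr; the class and method names are made up) of the worst-case number of pivot values held in memory when a mincount of 0 forces every shard to return {{limit}} values at every pivot level:

```java
// Standalone illustration (not Solr code): worst-case count of in-memory
// pivot values when mincount=0 forces `limit` values at every pivot level.
public class PivotValueGrowth {

    // Sum of limit^1 + limit^2 + ... + limit^depth: with mincount=0,
    // values are materialized at every level regardless of real counts.
    static long worstCaseValues(int limit, int depth) {
        long total = 0;
        long perLevel = 1;
        for (int i = 0; i < depth; i++) {
            perLevel *= limit; // limit^(i+1) values at this depth
            total += perLevel;
        }
        return total;
    }

    public static void main(String[] args) {
        // 3 pivot levels with facet.limit=1000 -> 1,001,001,000 values,
        // matching the "billion pivot values" scenario described above.
        System.out.println(worstCaseValues(1000, 3));
    }
}
```

With {{pivot.mincount}} set to 1, only values that actually occur are materialized, so the realistic count is bounded by the data rather than by {{limit^depth}}.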






[JENKINS-MAVEN] Lucene-Solr-Maven-master #2142: POMs out of sync

2017-12-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2142/

No tests ran.

Build Log:
[...truncated 17146 lines...]
  [mvn] # There is insufficient memory for the Java Runtime Environment to 
continue.
  [mvn] # Native memory allocation (mmap) failed to map 297795584 bytes for 
committing reserved memory.
  [mvn] # An error report file with more information is saved as:
  [mvn] # 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/hs_err_pid2972.log

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:860: The 
following error occurred while executing this line:
: Java returned: 1

Total time: 23 minutes 40 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any


[jira] [Created] (SOLR-11730) Test NodeLost / NodeAdded dynamics

2017-12-06 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-11730:


 Summary: Test NodeLost / NodeAdded dynamics
 Key: SOLR-11730
 URL: https://issues.apache.org/jira/browse/SOLR-11730
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Andrzej Bialecki 


Let's consider a "flaky node" scenario.

A node is going up and down at short intervals (eg. due to a flaky network 
cable). If the frequency of these events coincides with the {{waitFor}} interval 
in the {{nodeLost}} trigger configuration, the node may never be reported to the 
autoscaling framework as lost. Similarly, it may never be reported as added back 
if it's lost again within the {{waitFor}} period of the {{nodeAdded}} trigger.

Other scenarios are possible here too, depending on timing:
* node being constantly reported as lost
* node being constantly reported as added

One possible solution for the autoscaling triggers is that the framework should 
keep a short-term ({{waitFor * 2}} long?) memory of a node state that the 
trigger is tracking in order to eliminate flaky nodes (ie. those that 
transitioned between states more than once within the period).

A situation like this is detrimental to SolrCloud behavior regardless of 
autoscaling actions, so it should probably also be addressed at the node level, 
eg. by shutting down the Solr node after the number of disconnects in a time 
window reaches a certain threshold.
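The node-level mitigation could be sketched as a simple sliding-window disconnect counter. This is a hypothetical illustration (the class name, window, and threshold are made up, not an existing Solr API):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: flag a node as flaky once it sees `threshold`
// disconnects within a sliding time window. Not an existing Solr class.
public class FlakyNodeDetector {
    private final Deque<Long> disconnectTimes = new ArrayDeque<>();
    private final long windowMs;
    private final int threshold;

    public FlakyNodeDetector(long windowMs, int threshold) {
        this.windowMs = windowMs;
        this.threshold = threshold;
    }

    /** Record a disconnect; returns true when the node should be treated as flaky. */
    public boolean onDisconnect(long nowMs) {
        disconnectTimes.addLast(nowMs);
        // Drop events that have fallen out of the sliding window.
        while (!disconnectTimes.isEmpty()
                && nowMs - disconnectTimes.peekFirst() > windowMs) {
            disconnectTimes.removeFirst();
        }
        return disconnectTimes.size() >= threshold;
    }
}
```

A node could consult such a counter from its connection-loss handler and shut itself down (or refuse to rejoin) once the threshold is crossed, which would also stop the nodeLost/nodeAdded trigger oscillation described above.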






[JENKINS] Lucene-Solr-Tests-master - Build # 2208 - Still Failing

2017-12-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2208/

1 tests failed.
FAILED:  org.apache.solr.handler.component.DistributedTermsComponentTest.test

Error Message:
Error from server at https://127.0.0.1:42100//collection1: Expected mime type 
application/octet-stream but got text/html.Error 
500HTTP ERROR: 500 Problem accessing 
/collection1/terms. Reason: java.lang.OutOfMemoryError: unable to 
create new native thread http://eclipse.org/jetty;>Powered by Jetty:// 9.3.20.v20170531 
  

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:42100//collection1: Expected mime type 
application/octet-stream but got text/html. 


Error 500 


HTTP ERROR: 500
Problem accessing /collection1/terms. Reason:
java.lang.OutOfMemoryError: unable to create new native 
thread
http://eclipse.org/jetty;>Powered by Jetty:// 
9.3.20.v20170531



at 
__randomizedtesting.SeedInfo.seed([A7D47EB2A02B4B1:8229783184FED949]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase.queryServer(BaseDistributedSearchTestCase.java:557)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:605)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:587)
at 
org.apache.solr.BaseDistributedSearchTestCase.query(BaseDistributedSearchTestCase.java:566)
at 
org.apache.solr.handler.component.DistributedTermsComponentTest.test(DistributedTermsComponentTest.java:52)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[jira] [Created] (SOLR-11729) Increase default overrequest ratio/count in json.facet to match existing defaults for facet.overrequest.ratio & facet.overrequest.count ?

2017-12-06 Thread Hoss Man (JIRA)
Hoss Man created SOLR-11729:
---

 Summary: Increase default overrequest ratio/count in json.facet to 
match existing defaults for facet.overrequest.ratio & facet.overrequest.count ?
 Key: SOLR-11729
 URL: https://issues.apache.org/jira/browse/SOLR-11729
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hoss Man


When FacetComponent first got support for distributed search, the default 
"effective shard limit" done on shards followed the formula...

{code}
limit = (int)(dff.initialLimit * 1.5) + 10;
{code}

...over time, this became configurable with the introduction of some expert 
level tuning options: {{facet.overrequest.ratio}} & {{facet.overrequest.count}} 
-- but the defaults (and basic formula) remain the same to this day...

{code}
  this.overrequestRatio
= params.getFieldDouble(field, FacetParams.FACET_OVERREQUEST_RATIO, 
1.5);
  this.overrequestCount 
= params.getFieldInt(field, FacetParams.FACET_OVERREQUEST_COUNT, 10);
...
  private int doOverRequestMath(int limit, double ratio, int count) {
// NOTE: normally, "1.0F < ratio"
//
// if the user chooses a ratio < 1, we allow it and don't "bottom out" at
// the original limit until *after* we've also added the count.
int adjustedLimit = (int) (limit * ratio) + count;
return Math.max(limit, adjustedLimit);
  }
{code}

However...


When {{json.facet}} multi-shard refinement was added, the code was written 
slightly differently:

* there is an explicit {{overrequest:N}} (count) option
* if {{-1 == overrequest}} (which is the default) then an "effective shard 
limit" is computed using the same basic formula as in FacetComponent -- _*but 
the constants are different*_...
** {{effectiveLimit = (long) (effectiveLimit * 1.1 + 4);}}
* For any (non "-1") user specified {{overrequest}} value, it's added verbatim 
to the {{limit}} (which may have been user specified, or may just be the 
default)
** {{effectiveLimit += freq.overrequest;}}


Given the design of the {{json.facet}} syntax, I can understand why the code 
path for an "advanced" user specified {{overrequest:N}} option avoids using any 
(implicit) ratio calculation and just does the straight addition of {{limit += 
overrequest}}.

What I'm not clear on is the choice of the constants {{1.1}} and {{4}} in the 
common (default) case, and why those differ from the historically used {{1.5}} 
and {{6}}.



It may seem like a small thing to worry about, but it can/will cause odd 
inconsistencies when people try to migrate simple {{facet.field=foo}} (or 
{{facet.pivot=foo,bar}}) queries to {{json.facet}} -- I have also seen it give 
people attempting these types of migrations the (mistaken) impression that 
discrepancies they are seeing are because {{refine:true}} is not working.

For this reason, I propose we change the (default) {{overrequest:-1}} behavior 
to use the same constants as the equivalent FacetComponent code...

{code}
if (fcontext.isShard()) {
  if (freq.overrequest == -1) {
// add over-request if this is a shard request and if we have a small 
offset (large offsets will already be gathering many more buckets than needed)
if (freq.offset < 10) {
  effectiveLimit = (long) (effectiveLimit * 1.5 + 6);
}
...
{code}
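For comparison, here is a standalone sketch (not Solr code; the constants are copied from the two code paths quoted above) showing how far the two default effective shard limits diverge for typical {{facet.limit}} values:

```java
// Standalone comparison of the two default over-request formulas quoted above.
public class OverrequestDemo {

    // FacetComponent defaults: ratio = 1.5, count = 10
    static int facetComponentLimit(int limit) {
        int adjusted = (int) (limit * 1.5) + 10;
        return Math.max(limit, adjusted);
    }

    // json.facet default (overrequest == -1): effectiveLimit * 1.1 + 4
    static long jsonFacetLimit(long limit) {
        return (long) (limit * 1.1 + 4);
    }

    public static void main(String[] args) {
        for (int limit : new int[] {10, 100, 1000}) {
            System.out.println("limit=" + limit
                + " facet.field asks shards for " + facetComponentLimit(limit)
                + ", json.facet asks for " + jsonFacetLimit(limit));
        }
    }
}
```

With {{limit=100}}, each shard returns 160 terms under FacetComponent but only 114 under {{json.facet}}, which is enough of a gap to produce visibly different top-N results on skewed shard distributions.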







[jira] [Commented] (SOLR-11729) Increase default overrequest ratio/count in json.facet to match existing defaults for facet.overrequest.ratio & facet.overrequest.count ?

2017-12-06 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11729?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280631#comment-16280631
 ] 

Hoss Man commented on SOLR-11729:
-

[~yo...@apache.org]: do you remember if there was an explicit reason you 
chose those lower constants in the json.facet code?

> Increase default overrequest ratio/count in json.facet to match existing 
> defaults for facet.overrequest.ratio & facet.overrequest.count ?
> -
>
> Key: SOLR-11729
> URL: https://issues.apache.org/jira/browse/SOLR-11729
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> When FacetComponent first got support for distributed search, the default 
> "effective shard limit" done on shards followed the formula...
> {code}
> limit = (int)(dff.initialLimit * 1.5) + 10;
> {code}
> ...over time, this became configurable with the introduction of some expert 
> level tuning options: {{facet.overrequest.ratio}} & 
> {{facet.overrequest.count}} -- but the defaults (and basic formula) remain 
> the same to this day...
> {code}
>   this.overrequestRatio
> = params.getFieldDouble(field, FacetParams.FACET_OVERREQUEST_RATIO, 
> 1.5);
>   this.overrequestCount 
> = params.getFieldInt(field, FacetParams.FACET_OVERREQUEST_COUNT, 10);
> ...
>   private int doOverRequestMath(int limit, double ratio, int count) {
> // NOTE: normally, "1.0F < ratio"
> //
> // if the user chooses a ratio < 1, we allow it and don't "bottom out" at
> // the original limit until *after* we've also added the count.
> int adjustedLimit = (int) (limit * ratio) + count;
> return Math.max(limit, adjustedLimit);
>   }
> {code}
> However...
> When {{json.facet}} multi-shard refinement was added, the code was written 
> slightly differently:
> * there is an explicit {{overrequest:N}} (count) option
> * if {{-1 == overrequest}} (which is the default) then an "effective shard 
> limit" is computed using the same basic formula as in FacetComponent -- _*but 
> the constants are different*_...
> ** {{effectiveLimit = (long) (effectiveLimit * 1.1 + 4);}}
> * For any (non "-1") user specified {{overrequest}} value, it's added 
> verbatim to the {{limit}} (which may have been user specified, or may just be 
> the default)
> ** {{effectiveLimit += freq.overrequest;}}
> Given the design of the {{json.facet}} syntax, I can understand why the code 
> path for an "advanced" user specified {{overrequest:N}} option avoids using 
> any (implicit) ratio calculation and just does the straight addition of 
> {{limit += overrequest}}.
> What I'm not clear on is the choice of the constants {{1.1}} and {{4}} in the 
> common (default) case, and why those differ from the historically used 
> {{1.5}} and {{6}}.
> 
> It may seem like a small thing to worry about, but it can/will cause odd 
> inconsistencies when people try to migrate simple {{facet.field=foo}} (or 
> {{facet.pivot=foo,bar}}) queries to {{json.facet}} -- I have also seen it 
> give people attempting these types of migrations the (mistaken) impression 
> that discrepancies they are seeing are because {{refine:true}} is not 
> working.
> For this reason, I propose we change the (default) {{overrequest:-1}} 
> behavior to use the same constants as the equivalent FacetComponent code...
> {code}
> if (fcontext.isShard()) {
>   if (freq.overrequest == -1) {
> // add over-request if this is a shard request and if we have a small 
> offset (large offsets will already be gathering many more buckets than needed)
> if (freq.offset < 10) {
>   effectiveLimit = (long) (effectiveLimit * 1.5 + 6);
> }
> ...
> {code}






[GitHub] lucene-solr pull request #:

2017-12-06 Thread jpountz
Github user jpountz commented on the pull request:


https://github.com/apache/lucene-solr/commit/962313b83ba9c69379e1f84dffc881a361713ce9#commitcomment-26094073
  
In solr/core/src/java/org/apache/solr/util/SolrPluginUtils.java:
In solr/core/src/java/org/apache/solr/util/SolrPluginUtils.java on line 731:
Oh I see now, good catch. Would you like to submit a patch?


---




[jira] [Commented] (LUCENE-8015) TestBasicModelIne.testRandomScoring failure

2017-12-06 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280612#comment-16280612
 ] 

Adrien Grand commented on LUCENE-8015:
--

Done, I combined both patches and beasting didn't find any failures, so I 
merged. Thank you!

> TestBasicModelIne.testRandomScoring failure
> ---
>
> Key: LUCENE-8015
> URL: https://issues.apache.org/jira/browse/LUCENE-8015
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Fix For: master (8.0)
>
> Attachments: LUCENE-8015-test.patch, LUCENE-8015.patch, 
> LUCENE-8015_test_fangs.patch
>
>
> reproduce with: ant test  -Dtestcase=TestBasicModelIne 
> -Dtests.method=testRandomScoring -Dtests.seed=86E85958B1183E93 
> -Dtests.slow=true -Dtests.locale=vi-VN -Dtests.timezone=Pacific/Tongatapu 
> -Dtests.asserts=true -Dtests.file.encoding=UTF8






[jira] [Resolved] (LUCENE-8015) TestBasicModelIne.testRandomScoring failure

2017-12-06 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-8015.
--
   Resolution: Fixed
Fix Version/s: master (8.0)

> TestBasicModelIne.testRandomScoring failure
> ---
>
> Key: LUCENE-8015
> URL: https://issues.apache.org/jira/browse/LUCENE-8015
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Fix For: master (8.0)
>
> Attachments: LUCENE-8015-test.patch, LUCENE-8015.patch, 
> LUCENE-8015_test_fangs.patch
>
>
> reproduce with: ant test  -Dtestcase=TestBasicModelIne 
> -Dtests.method=testRandomScoring -Dtests.seed=86E85958B1183E93 
> -Dtests.slow=true -Dtests.locale=vi-VN -Dtests.timezone=Pacific/Tongatapu 
> -Dtests.asserts=true -Dtests.file.encoding=UTF8






[jira] [Commented] (LUCENE-7996) Should we require positive scores?

2017-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280611#comment-16280611
 ] 

ASF subversion and git services commented on LUCENE-7996:
-

Commit 187849f9b67ba6b7e6c2d06cc25359bf53b2ce9f in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=187849f ]

LUCENE-7996: PayloadScoreQuery must produce positive scores.


> Should we require positive scores?
> --
>
> Key: LUCENE-7996
> URL: https://issues.apache.org/jira/browse/LUCENE-7996
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-7996.patch, LUCENE-7996.patch, LUCENE-7996.patch
>
>
> Having worked on MAXSCORE recently, things would be simpler if we required 
> that scores are positive. Practically, this would mean 
>  - forbidding/fixing similarities that may produce negative scores (we have 
> some of them)
>  - forbidding things like negative boosts
> So I'd be curious to have opinions whether this would be a sane requirement 
> or whether we need to be able to cope with negative scores eg. because some 
> similarities that we want to support produce negative scores by design.






[jira] [Commented] (LUCENE-8015) TestBasicModelIne.testRandomScoring failure

2017-12-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280590#comment-16280590
 ] 

ASF subversion and git services commented on LUCENE-8015:
-

Commit 63b63c573487fe6b054afb6073c057a88a15288f in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=63b63c5 ]

LUCENE-8015: Fixed DFR similarities' scores to not decrease when tfn increases.


> TestBasicModelIne.testRandomScoring failure
> ---
>
> Key: LUCENE-8015
> URL: https://issues.apache.org/jira/browse/LUCENE-8015
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Attachments: LUCENE-8015-test.patch, LUCENE-8015.patch, 
> LUCENE-8015_test_fangs.patch
>
>
> reproduce with: ant test  -Dtestcase=TestBasicModelIne 
> -Dtests.method=testRandomScoring -Dtests.seed=86E85958B1183E93 
> -Dtests.slow=true -Dtests.locale=vi-VN -Dtests.timezone=Pacific/Tongatapu 
> -Dtests.asserts=true -Dtests.file.encoding=UTF8






[jira] [Commented] (SOLR-11508) Make coreRootDirectory configurable via an environment variable (SOLR_CORE_HOME)

2017-12-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280584#comment-16280584
 ] 

Shawn Heisey commented on SOLR-11508:
-

bq. What benefit exactly would a "rename" of solr.data.home to solr.index.home 
give? 

The idea would be to clear up confusion.  Based on how this issue started and 
progressed, it seems that there's some confusion about what "data" means.  The 
initial expectation seems to have been that it would cover ALL of Solr's data, 
including the conf directory, but in fact it only deals with the *index* data, 
so solr.index.home seems like a better name for the property.

That confusion is also the reason that I mentioned the possibility of replacing 
solr.solr.home with solr.data.home.  Although the idea passes a sniff test, it 
might cause confusion of a different kind for veterans, so it wouldn't be my 
first preference.

Currently we have three things that can be configured, in chronological order:  
solr.solr.home, coreRootDirectory, and solr.data.home.  All of these have uses, 
but I think the end result is particularly confusing for novices.

The reason I think we should kill coreRootDirectory: When I take a step back 
and think about everything, I find little value in separating what's in the 
solr home (solr.xml and configsets) from the rest of the configuration data.

I do find value in separating the config from the index data.  That makes it a 
lot easier to keep configurations in source control, and if you find yourself 
in a place where you want to delete all index data but leave all the cores 
intact, it's REALLY easy.

If I think about what the best option would be if we could start over, I come 
up with the notion of having two configurations -- one for everything that's 
not read-only (the solr home), and one for index data (currently 
solr.data.home).

Accommodating an empty data volume for the solr home location is the last 
wrinkle, and is solved by not *requiring* solr.xml.  SolrCloud can already 
handle an empty solr home; standalone should too.


> Make coreRootDirectory configurable via an environment variable 
> (SOLR_CORE_HOME)
> 
>
> Key: SOLR-11508
> URL: https://issues.apache.org/jira/browse/SOLR-11508
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Marc Morissette
>
> (Heavily edited)
> Since Solr 7, it is possible to store Solr cores in separate disk locations 
> using solr.data.home (see SOLR-6671). This is very useful when running Solr 
> in Docker where data must be stored in a directory which is independent from 
> the rest of the container.
> While this works well in standalone mode, it doesn't in Cloud mode as the 
> core.properties automatically created by Solr are still stored in 
> coreRootDirectory and cores created that way disappear when the Solr Docker 
> container is redeployed.
> The solution is to configure coreRootDirectory to an empty directory that can 
> be mounted outside the Docker container.
> The incoming patch makes this easier to do by allowing coreRootDirectory to 
> be configured via a solr.core.home system property and SOLR_CORE_HOME 
> environment variable.






[JENKINS] Lucene-Solr-7.2-Windows (64bit/jdk-9.0.1) - Build # 3 - Still Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.2-Windows/3/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

7 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_67459F905E2F12AD-001\4.8.1-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_67459F905E2F12AD-001\4.8.1-cfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_67459F905E2F12AD-001\4.8.1-cfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_67459F905E2F12AD-001\4.8.1-cfs-001

at __randomizedtesting.SeedInfo.seed([67459F905E2F12AD]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.lucene.replicator.IndexReplicationClientTest.testNoUpdateThread

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\replicator\test\J1\temp\lucene.replicator.IndexReplicationClientTest_3AC093E90D8E2816-001\replicationClientTest-004\1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\replicator\test\J1\temp\lucene.replicator.IndexReplicationClientTest_3AC093E90D8E2816-001\replicationClientTest-004\1
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\replicator\test\J1\temp\lucene.replicator.IndexReplicationClientTest_3AC093E90D8E2816-001\replicationClientTest-004\1:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.2-Windows\lucene\build\replicator\test\J1\temp\lucene.replicator.IndexReplicationClientTest_3AC093E90D8E2816-001\replicationClientTest-004\1

at 
__randomizedtesting.SeedInfo.seed([3AC093E90D8E2816:5FD950842E0E71D0]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.replicator.PerSessionDirectoryFactory.cleanupSession(PerSessionDirectoryFactory.java:58)
at 
org.apache.lucene.replicator.ReplicationClient.doUpdate(ReplicationClient.java:259)
at 
org.apache.lucene.replicator.ReplicationClient.updateNow(ReplicationClient.java:401)
at 
org.apache.lucene.replicator.IndexReplicationClientTest.testNoUpdateThread(IndexReplicationClientTest.java:163)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 

[jira] [Commented] (LUCENE-8081) Allow IndexWriter to opt out of flushing on indexing threads

2017-12-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280579#comment-16280579
 ] 

Michael McCandless commented on LUCENE-8081:


+1

Maybe expand the javadoc a bit to state that this means the thread calling 
{{refresh}} will be the only thread writing new segments to disk, unless 
flushing falls behind?

> Allow IndexWriter to opt out of flushing on indexing threads
> 
>
> Key: LUCENE-8081
> URL: https://issues.apache.org/jira/browse/LUCENE-8081
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
> Attachments: LUCENE-8081.patch
>
>
> Today indexing / updating threads always help out flushing. Experts might 
> want indexing threads to only help flushing if flushes are falling behind. 
> Maybe we can allow an expert flag in IWC to opt out of this behavior.
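The proposed flag's semantics can be modeled with a toy sketch (all names are assumptions; this is not Lucene's IndexWriter code): with the opt-out, an indexing thread helps flush only once the flush backlog exceeds a threshold.

```java
public class FlushBacklogModel {
    private final boolean flushOnIndexingThreads; // assumed name for the expert IWC flag
    private final int backlogThreshold;

    FlushBacklogModel(boolean flushOnIndexingThreads, int backlogThreshold) {
        this.flushOnIndexingThreads = flushOnIndexingThreads;
        this.backlogThreshold = backlogThreshold;
    }

    boolean indexingThreadShouldFlush(int pendingFlushes) {
        if (flushOnIndexingThreads) {
            return pendingFlushes > 0;            // today's behavior: always help out
        }
        return pendingFlushes > backlogThreshold; // opt-out: only when falling behind
    }

    public static void main(String[] args) {
        FlushBacklogModel optOut = new FlushBacklogModel(false, 4);
        System.out.println(optOut.indexingThreadShouldFlush(2));  // false
        System.out.println(optOut.indexingThreadShouldFlush(10)); // true
    }
}
```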






[GitHub] lucene-solr pull request #:

2017-12-06 Thread arjenm
Github user arjenm commented on the pull request:


https://github.com/apache/lucene-solr/commit/962313b83ba9c69379e1f84dffc881a361713ce9#commitcomment-26093062
  
In solr/core/src/java/org/apache/solr/util/SolrPluginUtils.java:
In solr/core/src/java/org/apache/solr/util/SolrPluginUtils.java on line 731:
@jpountz What I tried to say, but apparently failed to convey, is: the boost 
float is collected and adjusted via the recursion, but its value is never used.

So what was an "increase boost with parent's boost" is now a "collect parent 
boost, ignore it". In other words, the result of this method is exactly the 
same when the boost float is removed from it.

I'd expect this method to re-introduce that boost value somewhere in the new 
flattened query.
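One way to avoid dropping the collected boost is to multiply it back into each leaf while flattening, sketched here on a toy query tree (not Lucene's classes; all names are assumptions for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class BoostFlattenDemo {
    // Toy query node: either an inner node (term == null) or a leaf.
    static class Node {
        final float boost;
        final String term;
        final List<Node> children = new ArrayList<>();
        Node(float boost, String term) { this.boost = boost; this.term = term; }
    }

    // Multiply the accumulated parent boost back into each leaf instead of
    // collecting it via the recursion and then ignoring it.
    static void flatten(Node n, float boost, List<String> out) {
        float b = boost * n.boost;
        if (n.term != null) {
            out.add(n.term + "^" + b);
            return;
        }
        for (Node child : n.children) {
            flatten(child, b, out);
        }
    }

    public static void main(String[] args) {
        Node root = new Node(2f, null);
        root.children.add(new Node(3f, "title:foo"));
        root.children.add(new Node(1f, "body:bar"));
        List<String> flat = new ArrayList<>();
        flatten(root, 1f, flat);
        System.out.println(flat); // [title:foo^6.0, body:bar^2.0]
    }
}
```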


---




Re: Lucene/Solr 7.2

2017-12-06 Thread Andrzej Białecki
I attached the patch to SOLR-11714, which disables the ‘searchRate’ trigger - 
if there are no objections I’ll commit it shortly to branch_7.2.

> On 6 Dec 2017, at 15:51, Andrzej Białecki wrote:
> 
> 
>> On 6 Dec 2017, at 15:35, Andrzej Białecki wrote:
>> 
>> SOLR-11458 is committed and resolved - thanks for the patience.
> 
> 
> Actually, one more thing … ;) SOLR-11714 is a more serious bug in a new 
> feature (searchRate autoscaling trigger). It’s probably best to disable this 
> feature in 7.2 rather than releasing a broken version, so I’d like to commit 
> a patch that disables it (plus a note in CHANGES.txt).
> 
> 
>> 
>> 
>> 
>>> On 6 Dec 2017, at 14:02, Adrien Grand wrote:
>>> 
>>> Thanks for the heads up, Anshum.
>>> 
>>> This leaves us with only SOLR-11458 to wait for before building a RC (which 
>>> might be ready but just not marked as resolved).
>> 
>>> 
>>> 
>>> On Wed, 6 Dec 2017 at 13:47, Ishan Chattopadhyaya wrote:
>>> Hi Adrien,
>>> I'm planning to skip SOLR-11624 for this release (as per my last comment 
>>> https://issues.apache.org/jira/browse/SOLR-11624?focusedCommentId=16280121=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16280121
>>>  
>>> ).
>>>  If someone has an objection, please let me know; otherwise, please feel 
>>> free to proceed with the release.
>>> I'll continue working on it anyway, and shall try to have it ready for the 
>>> next release.
>>> Thanks,
>>> Ishan
>>> 
>>> On Wed, Dec 6, 2017 at 2:41 PM, Adrien Grand wrote:
>>> FYI I created the new branch for 7.2, so you will have to backport to this 
>>> branch. No hurry though, I mostly created the branch so that it's fine to 
>>> cherry-pick changes that may wait for 7.3 to be released.
>>> 
>>> On Wed, 6 Dec 2017 at 08:53, Adrien Grand wrote:
>>> Sorry to hear that Ishan, I hope you are doing better now. +1 to get 
>>> SOLR-11624 in.
>>> 
>>> On Wed, 6 Dec 2017 at 07:57, Ishan Chattopadhyaya wrote:
>>> I was a bit unwell over the weekend and yesterday; I'm working on a very 
>>> targeted fix for SOLR-11624 right now; I expect it to take another 5-6 
>>> hours.
>>> Is that fine with you, Adrien? If not, please go ahead with the release, 
>>> and I'll volunteer later for a bugfix release for this after 7.2 is out.
>>> 
>>> On Wed, Dec 6, 2017 at 3:25 AM, Adrien Grand wrote:
>>> Fine with me.
>>> 
>>> 
>>> On Tue, 5 Dec 2017 at 22:34, Varun Thacker wrote:
>>> Hi Adrien,
>>> 
>>> I'd like to commit SOLR-11590 . The issue had a patch couple of weeks ago 
>>> and has been reviewed but never got committed. I've run all the tests twice 
>>> as well to verify.
>>> 
>>> On Tue, Dec 5, 2017 at 9:08 AM, Andrzej Białecki wrote:
>>> 
 On 5 Dec 2017, at 18:05, Adrien Grand wrote:
 
 Andrzej, ok to merge since it is a bug fix. Since we're close to the RC 
 build, maybe try to get someone familiar with the code to review it to 
 make sure it doesn't have unexpected side-effects?
>>> 
>>> Sure I’ll do this - thanks!
>>> 
 
 On Tue, 5 Dec 2017 at 17:57, Andrzej Białecki wrote:
 Adrien,
 
 If it’s ok I would also like to merge SOLR-11458, this significantly 
 reduces the chance of accidental data loss when using MoveReplicaCmd.
 
> On 5 Dec 2017, at 14:44, Adrien Grand wrote:
> 
> Quick update:
> 
> LUCENE-8043, SOLR-9137, SOLR-11662 and SOLR-11687 have been merged, they 
> will be in 7.2.
> 
> LUCENE-8048 and SOLR-11624 are still open. 
> 
> LUCENE-8048 looks like it could make things better in some cases but I 
> don't think it is required for 7.2, so I don't plan to hold the release 
> on it.
> 
> SOLR-11624 looks bad, I'll wait for it.
> 
> On Tue, 5 Dec 2017 at 07:45, Noble Paul wrote:
> +1 this is over due
> 
> On Dec 4, 2017 16:38, "Varun Thacker" wrote:
> +1 Adrien ! Thanks for taking it up
> 
> On Fri, Dec 

[jira] [Updated] (SOLR-11714) AddReplicaSuggester endless loop

2017-12-06 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11714:
-
Attachment: 7.2-disable-search-rate-trigger.diff

This patch disables the {{searchRate}} trigger in branch 7.2.

> AddReplicaSuggester endless loop
> 
>
> Key: SOLR-11714
> URL: https://issues.apache.org/jira/browse/SOLR-11714
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.2, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Noble Paul
> Attachments: 7.2-disable-search-rate-trigger.diff, SOLR-11714.diff
>
>
> {{SearchRateTrigger}} events are processed by {{ComputePlanAction}} and 
> depending on the condition either a MoveReplicaSuggester or 
> AddReplicaSuggester is selected.
> When {{AddReplicaSuggester}} is selected there's currently a bug in master, 
> due to an API change (Hint.COLL_SHARD should be used instead of Hint.COLL). 
> However, after fixing that bug {{ComputePlanAction}} goes into an endless 
> loop because the suggester endlessly keeps creating new operations.
> Please see the patch that fixes the Hint.COLL_SHARD issue and modifies the 
> unit test to illustrate this failure.
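A minimal sketch of one way to guard against such an endless loop, shown on a toy suggester (an assumption for illustration, not ComputePlanAction's actual fix): stop once a suggestion repeats, so a misbehaving suggester cannot keep producing the same operation forever.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class SuggestionLoopGuard {
    // Toy stand-in for a suggester that may keep proposing the same
    // operation, like the AddReplica case described above.
    interface Suggester { String suggest(); }

    static List<String> computeOperations(Suggester s) {
        List<String> ops = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        String op;
        while ((op = s.suggest()) != null) {
            if (!seen.add(op)) {
                break; // same operation suggested again: bail out instead of looping
            }
            ops.add(op);
        }
        return ops;
    }

    public static void main(String[] args) {
        Suggester stuck = () -> "ADDREPLICA shard1"; // never returns null
        System.out.println(computeOperations(stuck)); // [ADDREPLICA shard1]
    }
}
```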






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21043 - Still Unstable!

2017-12-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21043/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.ForceLeaderTest

Error Message:
85 threads leaked from SUITE scope at org.apache.solr.cloud.ForceLeaderTest:
 1) Thread[id=8774, name=qtp1798632905-8774, state=RUNNABLE, 
group=TGRP-ForceLeaderTest] at 
java.base@9.0.1/sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
at 
java.base@9.0.1/sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:265)   
  at 
java.base@9.0.1/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:92)
 at 
java.base@9.0.1/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)   
  at java.base@9.0.1/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)   
  at java.base@9.0.1/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)  
   at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
 at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
 at 
app//org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
 at 
app//org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
app//org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
 at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)2) 
Thread[id=8775, name=qtp1798632905-8775, state=TIMED_WAITING, 
group=TGRP-ForceLeaderTest] at 
java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2104)
 at 
app//org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)3) 
Thread[id=9006, name=zkCallback-1806-thread-3, state=TIMED_WAITING, 
group=TGRP-ForceLeaderTest] at 
java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9.0.1/java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:462)
 at 
java.base@9.0.1/java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:361)
 at 
java.base@9.0.1/java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:937)
 at 
java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1091)
 at 
java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1152)
 at 
java.base@9.0.1/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
 at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)4) 
Thread[id=8809, name=qtp2102183014-8809, state=RUNNABLE, 
group=TGRP-ForceLeaderTest] at 
java.base@9.0.1/sun.nio.ch.EPollArrayWrapper.epollWait(Native Method) 
at 
java.base@9.0.1/sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:265)   
  at 
java.base@9.0.1/sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:92)
 at 
java.base@9.0.1/sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)   
  at java.base@9.0.1/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)   
  at java.base@9.0.1/sun.nio.ch.SelectorImpl.select(SelectorImpl.java:101)  
   at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.select(ManagedSelector.java:243)
 at 
app//org.eclipse.jetty.io.ManagedSelector$SelectorProducer.produce(ManagedSelector.java:191)
 at 
app//org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:249)
 at 
app//org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
 at 
app//org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
 at 

[jira] [Commented] (LUCENE-8075) Possible null pointer dereference in core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java

2017-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280546#comment-16280546
 ] 

ASF GitHub Bot commented on LUCENE-8075:


Github user imgpulak commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/286#discussion_r155304417
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java 
---
@@ -106,37 +106,37 @@ public IntersectTermsEnum(FieldReader fr, Automaton 
automaton, RunAutomaton runA
 if (fr.index == null) {
   fstReader = null;
--- End diff --

Thanks. That works for me.


> Possible null pointer dereference in 
> core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java
> -
>
> Key: LUCENE-8075
> URL: https://issues.apache.org/jira/browse/LUCENE-8075
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 7.1
>Reporter: Xiaoshan Sun
>  Labels: easyfix
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> Possible null pointer dereference in 
> core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java.
> at line 119. The fr.index may be NULL. This result is based on static 
> analysis tools and the details are shown below:
> *
> {code:java}
> 106: if (fr.index == null) {
> 107:  fstReader = null;  // fr.index is Known NULL here.
> } else {
>   fstReader = fr.index.getBytesReader();
> }
> // TODO: if the automaton is "smallish" we really
> // should use the terms index to seek at least to
> // the initial term and likely to subsequent terms
> // (or, maybe just fallback to ATE for such cases).
> // Else the seek cost of loading the frames will be
> // too costly.
> 119:final FST.Arc arc = fr.index.getFirstArc(arcs[0]); 
> //  fr.index is dereferenced here and fr.index can be NULL if 107 is arrived.
> {code}
> *
> It is not certain whether fr.index can be NULL at runtime.
> We think it is reasonable to fix this with a check for fr.index being NULL 
> plus appropriate error handling.
> --
> Please Refer to "Trusted Operating System and System Assurance Working Group, 
> TCA, Institute of Software, Chinese Academy of Sciences" in the 
> acknowledgement if applicable.
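The suggested fix, a null check plus error handling, can be sketched on a toy stand-in (hypothetical; the real fix might instead prove fr.index is non-null on this code path):

```java
public class NullGuardDemo {
    // Toy stand-in for the nullable fr.index from the report: fail fast
    // with a clear message instead of a later NullPointerException.
    static String firstArcLabel(Object index) {
        if (index == null) {
            throw new IllegalStateException("cannot intersect: terms index is null");
        }
        return "arc:" + index;
    }

    public static void main(String[] args) {
        System.out.println(firstArcLabel("fst"));
        try {
            firstArcLabel(null);
        } catch (IllegalStateException e) {
            System.out.println("guarded: " + e.getMessage());
        }
    }
}
```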






[GitHub] lucene-solr pull request #286: [LUCENE-8075] Possible null pointer dereferen...

2017-12-06 Thread imgpulak
Github user imgpulak commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/286#discussion_r155304417
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java 
---
@@ -106,37 +106,37 @@ public IntersectTermsEnum(FieldReader fr, Automaton 
automaton, RunAutomaton runA
 if (fr.index == null) {
   fstReader = null;
--- End diff --

Thanks. That works for me.


---




[GitHub] lucene-solr pull request #286: [LUCENE-8075] Possible null pointer dereferen...

2017-12-06 Thread jpountz
Github user jpountz commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/286#discussion_r155303664
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java 
---
@@ -106,37 +106,37 @@ public IntersectTermsEnum(FieldReader fr, Automaton 
automaton, RunAutomaton runA
 if (fr.index == null) {
   fstReader = null;
--- End diff --

All documentation is in javadocs, we don't have other documentation.


---




[jira] [Commented] (LUCENE-8075) Possible null pointer dereference in core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java

2017-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280541#comment-16280541
 ] 

ASF GitHub Bot commented on LUCENE-8075:


Github user jpountz commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/286#discussion_r155303664
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java 
---
@@ -106,37 +106,37 @@ public IntersectTermsEnum(FieldReader fr, Automaton 
automaton, RunAutomaton runA
 if (fr.index == null) {
   fstReader = null;
--- End diff --

All documentation is in javadocs, we don't have other documentation.


> Possible null pointer dereference in 
> core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java
> -
>
> Key: LUCENE-8075
> URL: https://issues.apache.org/jira/browse/LUCENE-8075
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 7.1
>Reporter: Xiaoshan Sun
>  Labels: easyfix
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> Possible null pointer dereference in 
> core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java.
> at line 119. The fr.index may be NULL. This result is based on static 
> analysis tools and the details are shown below:
> *
> {code:java}
> 106: if (fr.index == null) {
> 107:  fstReader = null;  // fr.index is Known NULL here.
> } else {
>   fstReader = fr.index.getBytesReader();
> }
> // TODO: if the automaton is "smallish" we really
> // should use the terms index to seek at least to
> // the initial term and likely to subsequent terms
> // (or, maybe just fallback to ATE for such cases).
> // Else the seek cost of loading the frames will be
> // too costly.
> 119:final FST.Arc arc = fr.index.getFirstArc(arcs[0]); 
> //  fr.index is dereferenced here and fr.index can be NULL if 107 is arrived.
> {code}
> *
> It is not certain whether fr.index can be NULL at runtime.
> We think it is reasonable to fix this with a check for fr.index being NULL 
> plus appropriate error handling.
> --
> Please Refer to "Trusted Operating System and System Assurance Working Group, 
> TCA, Institute of Software, Chinese Academy of Sciences" in the 
> acknowledgement if applicable.






[jira] [Updated] (LUCENE-8081) Allow IndexWriter to opt out of flushing on indexing threads

2017-12-06 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8081:

Attachment: LUCENE-8081.patch

here is a patch to discuss this on.

> Allow IndexWriter to opt out of flushing on indexing threads
> 
>
> Key: LUCENE-8081
> URL: https://issues.apache.org/jira/browse/LUCENE-8081
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Simon Willnauer
> Attachments: LUCENE-8081.patch
>
>
> Today indexing / updating threads always help out flushing. Experts might 
> want indexing threads to only help flushing if flushes are falling behind. 
> Maybe we can allow an expert flag in IWC to opt out of this behavior.






[jira] [Created] (LUCENE-8081) Allow IndexWriter to opt out of flushing on indexing threads

2017-12-06 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-8081:
---

 Summary: Allow IndexWriter to opt out of flushing on indexing 
threads
 Key: LUCENE-8081
 URL: https://issues.apache.org/jira/browse/LUCENE-8081
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Simon Willnauer


Today indexing / updating threads always help out flushing. Experts might want 
indexing threads to only help flushing if flushes are falling behind. Maybe we 
can allow an expert flag in IWC to opt out of this behavior.








[jira] [Commented] (SOLR-10243) Fix TestExtractionDateUtil.testParseDate sporadic failures

2017-12-06 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280523#comment-16280523
 ] 

Steve Rowe commented on SOLR-10243:
---

Another recent reproducing failure, from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7039/]:

{noformat}
Checking out Revision ae9cc726a41d70c0b41b89e9fdcc11322cbe4599 
(refs/remotes/origin/master)
[...]
   [junit4] Suite: org.apache.solr.handler.extraction.TestExtractionDateUtil
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestExtractionDateUtil -Dtests.method=testParseDate 
-Dtests.seed=A656E57C0473E60B -Dtests.slow=true 
-Dtests.locale=th-TH-u-nu-thai-x-lvariant-TH 
-Dtests.timezone=America/Metlakatla -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.03s J1 | TestExtractionDateUtil.testParseDate <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Incorrect parsed 
timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 04:35:51 AKST 2008)
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([A656E57C0473E60B:EC4F9D497FDA91BE]:0)
   [junit4]>at 
org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59)
   [junit4]>at 
org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene70), 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@1e107b8),
 locale=th-TH-u-nu-thai-x-lvariant-TH, timezone=America/Metlakatla
   [junit4]   2> NOTE: Windows 10 10.0 x86/Oracle Corporation 1.8.0_144 
(32-bit)/cpus=3,threads=1,free=50406992,total=67108864
{noformat}

> Fix TestExtractionDateUtil.testParseDate sporadic failures
> --
>
> Key: SOLR-10243
> URL: https://issues.apache.org/jira/browse/SOLR-10243
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>
> Jenkins test failure:
> {{ant test  -Dtestcase=TestExtractionDateUtil -Dtests.method=testParseDate 
> -Dtests.seed=B72AC4792F31F74B -Dtests.slow=true -Dtests.locale=lv 
> -Dtests.timezone=America/Metlakatla -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8}}   It reproduces on 6x for me but not master.
> I reviewed this briefly and there seems to be a locale assumption in the test.
> 1 tests failed.
> FAILED:  
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate
> Error Message:
> Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 
> 04:35:51 AKST 2008)
> Stack Trace:
> java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != 
> 1226579751000 (Thu Nov 13 04:35:51 AKST 2008)
> at 
> __randomizedtesting.SeedInfo.seed([B72AC4792F31F74B:FD33BC4C549880FE]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at 
> org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59)
> at 
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54)
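The failure class here, a parsed epoch that shifts with the JVM's default time zone, can be reproduced with a minimal sketch (an illustration of the assumption pattern, not the actual test code):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Locale;
import java.util.TimeZone;

public class ZoneSensitiveParse {
    public static void main(String[] args) throws ParseException {
        // A zone-less date string parses to different epochs depending on
        // which time zone the formatter assumes.
        String s = "2008-11-13 04:35:51";
        SimpleDateFormat f = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.ROOT);

        f.setTimeZone(TimeZone.getTimeZone("UTC"));
        long utc = f.parse(s).getTime();

        f.setTimeZone(TimeZone.getTimeZone("America/Metlakatla"));
        long metlakatla = f.parse(s).getTime();

        // The epochs differ by the zone offset, the same mismatch pattern
        // reported by the failing assertion above.
        System.out.println(utc == metlakatla); // false
    }
}
```

Pinning the locale and time zone in the test (or in the parsing code under test) removes the dependence on the randomized JVM defaults.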






[jira] [Commented] (LUCENE-8075) Possible null pointer dereference in core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java

2017-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280522#comment-16280522
 ] 

ASF GitHub Bot commented on LUCENE-8075:


Github user imgpulak commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/286#discussion_r155300117
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java 
---
@@ -106,37 +106,37 @@ public IntersectTermsEnum(FieldReader fr, Automaton 
automaton, RunAutomaton runA
 if (fr.index == null) {
   fstReader = null;
--- End diff --

Hi @jpountz, 

May I get the design documents of Lucene?

Thanks, 
Pulak


> Possible null pointer dereference in 
> core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java
> -
>
> Key: LUCENE-8075
> URL: https://issues.apache.org/jira/browse/LUCENE-8075
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 7.1
>Reporter: Xiaoshan Sun
>  Labels: easyfix
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>
> Possible null pointer dereference in 
> core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java.
> at line 119. The fr.index may be NULL. This result is based on static 
> analysis tools and the details are shown below:
> *
> {code:java}
> 106: if (fr.index == null) {
> 107:  fstReader = null;  // fr.index is Known NULL here.
> } else {
>   fstReader = fr.index.getBytesReader();
> }
> // TODO: if the automaton is "smallish" we really
> // should use the terms index to seek at least to
> // the initial term and likely to subsequent terms
> // (or, maybe just fallback to ATE for such cases).
> // Else the seek cost of loading the frames will be
> // too costly.
> 119:final FST.Arc arc = fr.index.getFirstArc(arcs[0]); 
> //  fr.index is dereferenced here and fr.index can be NULL if 107 is arrived.
> {code}
> *
> It is not certain whether fr.index can be NULL at runtime.
> We think it is reasonable to fix this with a check for fr.index being NULL 
> plus appropriate error handling.
> --
> Please Refer to "Trusted Operating System and System Assurance Working Group, 
> TCA, Institute of Software, Chinese Academy of Sciences" in the 
> acknowledgement if applicable.






[GitHub] lucene-solr pull request #286: [LUCENE-8075] Possible null pointer dereferen...

2017-12-06 Thread imgpulak
Github user imgpulak commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/286#discussion_r155300117
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java 
---
@@ -106,37 +106,37 @@ public IntersectTermsEnum(FieldReader fr, Automaton 
automaton, RunAutomaton runA
 if (fr.index == null) {
   fstReader = null;
--- End diff --

Hi @jpountz, 

May I get the design documents of Lucene?

Thanks, 
Pulak


---




[jira] [Commented] (LUCENE-8015) TestBasicModelIne.testRandomScoring failure

2017-12-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8015?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280508#comment-16280508
 ] 

Robert Muir commented on LUCENE-8015:
-

Took a glance, I am good with this approach, thank you! I would like to combine 
your patch with my test patch (attached to this issue) though, because it makes 
the test much better for all sims not just this particular case by exercising 
the extremes explicitly.

> TestBasicModelIne.testRandomScoring failure
> ---
>
> Key: LUCENE-8015
> URL: https://issues.apache.org/jira/browse/LUCENE-8015
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Adrien Grand
> Attachments: LUCENE-8015-test.patch, LUCENE-8015.patch, 
> LUCENE-8015_test_fangs.patch
>
>
> reproduce with: ant test  -Dtestcase=TestBasicModelIne 
> -Dtests.method=testRandomScoring -Dtests.seed=86E85958B1183E93 
> -Dtests.slow=true -Dtests.locale=vi-VN -Dtests.timezone=Pacific/Tongatapu 
> -Dtests.asserts=true -Dtests.file.encoding=UTF8






[jira] [Commented] (LUCENE-8075) Possible null pointer dereference in core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java

2017-12-06 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16280502#comment-16280502
 ] 

ASF GitHub Bot commented on LUCENE-8075:


Github user imgpulak commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/286#discussion_r155296495
  
--- Diff: 
lucene/core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java 
---
@@ -106,37 +106,37 @@ public IntersectTermsEnum(FieldReader fr, Automaton 
automaton, RunAutomaton runA
 if (fr.index == null) {
   fstReader = null;
--- End diff --

Okay. Now I understand. Thanks! Let me know what we are going to do. I will 
make changes accordingly. 


> Possible null pointer dereference in 
> core/src/java/org/apache/lucene/codecs/blocktree/IntersectTermsEnum.java
> -
>
> Key: LUCENE-8075
> URL: https://issues.apache.org/jira/browse/LUCENE-8075
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/codecs
>Affects Versions: 7.1
>Reporter: Xiaoshan Sun
>  Labels: easyfix
>   Original Estimate: 10m
>  Remaining Estimate: 10m
>





