[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527189#comment-16527189
 ] 

ASF subversion and git services commented on SOLR-11985:


Commit cb7d6e9e13a4f07b7c01bd929252e80b4a56c388 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cb7d6e9 ]

SOLR-11985: added validation and test


> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in the east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in the west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}}. The 
> computed value for this attribute depends on other attributes as well.
>  Take the following 2 rules:
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> Assume we have a collection {{"A"}} with 2 shards and {{replicationFactor=3}}.
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> which means *_for each shard_* keep less than 1.02 replicas in the east 
> availability zone.
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means _*for each collection*_ keep less than 2.04 replicas in the east 
> availability zone.
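The arithmetic in the two examples above can be sketched in a few lines of Python. This is an illustrative helper only, not Solr code; the function name and signature are made up for this sketch:

```python
# Sketch of the "computed value" arithmetic described above.
# With "shard": "#EACH" the bound applies per shard (example 1);
# without it, it applies across the whole collection (example 2).
def computed_replica_limit(percent, replication_factor, num_shards=None):
    """Turn a percentage rule value into an absolute replica bound."""
    base = replication_factor if num_shards is None else replication_factor * num_shards
    return base * percent / 100.0

# Collection "A": 2 shards, replicationFactor=3
per_shard = computed_replica_limit(34, 3)                     # example 1 -> 1.02
per_collection = computed_replica_limit(34, 3, num_shards=2)  # example 2 -> 2.04
```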



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11985) Allow percentage in replica attribute in policy

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527188#comment-16527188
 ] 

ASF subversion and git services commented on SOLR-11985:


Commit 62b9cbc6f9566b4a93462852698fb9d97d80b2fa in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=62b9cbc ]

SOLR-11985: added validation and test


> Allow percentage in replica attribute in policy
> ---
>
> Key: SOLR-11985
> URL: https://issues.apache.org/jira/browse/SOLR-11985
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Noble Paul
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11985.patch, SOLR-11985.patch
>
>
> Today we can only specify an absolute number in the 'replica' attribute in 
> the policy rules. It'd be useful to write a percentage value to make certain 
> use-cases easier. For example:
> {code:java}
> // Keep a third of the replicas of each shard in the east region
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> // Keep two thirds of the replicas of each shard in the west region
> {"replica" : "<67%", "shard" : "#EACH", "sysprop:region": "west"}
> {code}
> Today the above must be represented by different rules for each collection if 
> they have different replication factors. Also if the replication factor 
> changes later, the absolute value has to be changed in tandem. So expressing 
> a percentage removes both of these restrictions.
> This feature means that the value of the attribute {{"replica"}} is only 
> available just in time. We call such values {{"computed values"}}. The 
> computed value for this attribute depends on other attributes as well.
>  Take the following 2 rules:
> {code:java}
> //example 1
> {"replica" : "<34%", "shard" : "#EACH", "sysprop:region": "east"}
> //example 2
> {"replica" : "<34%",  "sysprop:region": "east"}
> {code}
> Assume we have a collection {{"A"}} with 2 shards and {{replicationFactor=3}}.
> *example 1* would mean that the value of replica is computed as
> {{3 * 34 / 100 = 1.02}}
> which means *_for each shard_* keep less than 1.02 replicas in the east 
> availability zone.
>  
> *example 2* would mean that the value of replica is computed as 
> {{3 * 2 * 34 / 100 = 2.04}}
>  
> which means _*for each collection*_ keep less than 2.04 replicas in the east 
> availability zone.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22343 - Unstable!

2018-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22343/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([716A5B318439FB67]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.createMiniSolrCloudCluster(TestStressCloudBlindAtomicUpdates.java:138)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([716A5B318439FB67]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.afterClass(TestStressCloudBlindAtomicUpdates.java:158)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1574 - Still Unstable

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1574/

1 tests failed.
FAILED:  
org.apache.solr.update.SoftAutoCommitTest.testHardCommitWithinAndSoftCommitMaxTimeMixedAdds

Error Message:
Tracker reports too many soft commits expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: Tracker reports too many soft commits expected:<1> 
but was:<2>
at 
__randomizedtesting.SeedInfo.seed([221449B83BEEB73D:E1F395A90AD26040]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.update.SoftAutoCommitTest.doTestSoftAndHardCommitMaxTimeMixedAdds(SoftAutoCommitTest.java:273)
at 
org.apache.solr.update.SoftAutoCommitTest.testHardCommitWithinAndSoftCommitMaxTimeMixedAdds(SoftAutoCommitTest.java:174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 661 - Still Unstable!

2018-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/661/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.sim.TestExecutePlanAction.testExecute

Error Message:
last state: DocCollection(testExecute//clusterstate.json/23)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{"shard1":{   "replicas":{ 
"core_node1":{   "core":"testExecute_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10009_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0},  
   "core_node2":{   "core":"testExecute_shard1_replica_n2", 
  "SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,
   "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10010_solr", 
  "state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0},  
   "core_node4":{   "node_name":"127.0.0.1:10009_solr",   
"core":"testExecute_shard1_replica_n3",   "state":"active",   
"INDEX.sizeInBytes":10240,   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6}},   "range":"8000-7fff",   
"state":"active"}}}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testExecute//clusterstate.json/23)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{"shard1":{
  "replicas":{
"core_node1":{
  "core":"testExecute_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10009_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0},
"core_node2":{
  "core":"testExecute_shard1_replica_n2",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10010_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0},
"core_node4":{
  "node_name":"127.0.0.1:10009_solr",
  "core":"testExecute_shard1_replica_n3",
  "state":"active",
  "INDEX.sizeInBytes":10240,
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6}},
  "range":"8000-7fff",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([ECA675A584D02DB5:DD1667A89627D232]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:111)
at 
org.apache.solr.cloud.autoscaling.sim.TestExecutePlanAction.testExecute(TestExecutePlanAction.java:152)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2217 - Still Unstable!

2018-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2217/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestPullReplicaErrorHandling

Error Message:
file handle leaks: 
[FileChannel(/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestPullReplicaErrorHandling_94901E0BB4ED2FBD-001/index-MMapDirectory-006/write.lock)]

Stack Trace:
java.lang.RuntimeException: file handle leaks: 
[FileChannel(/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestPullReplicaErrorHandling_94901E0BB4ED2FBD-001/index-MMapDirectory-006/write.lock)]
at __randomizedtesting.SeedInfo.seed([94901E0BB4ED2FBD]:0)
at org.apache.lucene.mockfile.LeakFS.onClose(LeakFS.java:63)
at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:77)
at 
org.apache.lucene.mockfile.FilterFileSystem.close(FilterFileSystem.java:78)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:228)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.Exception
at org.apache.lucene.mockfile.LeakFS.onOpen(LeakFS.java:46)
at 
org.apache.lucene.mockfile.HandleTrackingFS.callOpenHook(HandleTrackingFS.java:81)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:197)
at 
org.apache.lucene.mockfile.HandleTrackingFS.newFileChannel(HandleTrackingFS.java:166)
at java.base/java.nio.channels.FileChannel.open(FileChannel.java:292)
at java.base/java.nio.channels.FileChannel.open(FileChannel.java:340)
at 
org.apache.lucene.store.NativeFSLockFactory.obtainFSLock(NativeFSLockFactory.java:125)
at 
org.apache.lucene.store.FSLockFactory.obtainLock(FSLockFactory.java:41)
at 
org.apache.lucene.store.BaseDirectory.obtainLock(BaseDirectory.java:45)
at 
org.apache.lucene.store.FilterDirectory.obtainLock(FilterDirectory.java:105)
at 
org.apache.lucene.store.MockDirectoryWrapper.obtainLock(MockDirectoryWrapper.java:1049)
at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:718)
at 
org.apache.solr.update.SolrIndexWriter.<init>(SolrIndexWriter.java:124)
at 
org.apache.solr.update.SolrIndexWriter.create(SolrIndexWriter.java:97)
at 
org.apache.solr.update.DefaultSolrCoreState.createMainIndexWriter(DefaultSolrCoreState.java:257)
at 
org.apache.solr.update.DefaultSolrCoreState.changeWriter(DefaultSolrCoreState.java:220)
at 
org.apache.solr.update.DefaultSolrCoreState.openIndexWriter(DefaultSolrCoreState.java:245)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:630)
at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:347)
at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:421)
at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1156)
at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514)
at 
java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305)
at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
... 1 more




Build Log:
[...truncated 14320 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestPullReplicaErrorHandling
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.cloud.TestPullReplicaErrorHandling_94901E0BB4ED2FBD-001/init-core-data-001
   [junit4]   2> 1379612 WARN  

[jira] [Comment Edited] (SOLR-12523) Confusing error reporting if backup attempted on non-shared FS

2018-06-28 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527029#comment-16527029
 ] 

Hrishikesh Gadre edited comment on SOLR-12523 at 6/29/18 1:51 AM:
--

{quote}So for me, separating the concerns of creating the snapshot for each 
shard (Solr's job) and moving big files out to cloud storage (Solr needs to do 
much better in this regard or punt) is what I'm looking for.
{quote}
[~thelabdude] this is the exact use case for which we added the snapshots 
mechanism (ref: SOLR-9038). As part of Cloudera Search, we use this 
functionality to provide backup and disaster recovery for Solr:

[https://blog.cloudera.com/blog/2017/05/how-to-backup-and-disaster-recovery-for-apache-solr-part-i/]

When a user creates a snapshot, Solr associates the user-specified snapshot 
name with the latest commit point of each core in the given collection. Once 
the snapshot is created, Solr ensures that the files belonging to that commit 
point are not deleted (e.g. as part of an optimize operation). It also records 
the snapshot metadata in ZooKeeper and provides access to it via the 
Collections API. You are then free to use any mechanism to copy these index 
files to a remote location (in our case we use DistCp, a tool specifically 
designed for large-scale data copy that also works well with cloud object 
stores). I agree with your point about the slow restore operation. Maybe we 
can extend the snapshot API to restore in-place? E.g. create the index.xxx 
directory automatically and copy the files. Once this is done, we can just 
switch the index directory on-the-fly (the same way we do during full 
replication as part of core recovery).


was (Author: hgadre):
{quote}So for me, separating the concerns of creating the snapshot for each 
shard (Solr's job) and moving big files out to cloud storage (Solr needs to do 
much better in this regard or punt) is what I'm looking for.
{quote}
[~thelabdude] this is the exact use case for which we added snapshots mechanism 
(Ref: SOLR-9038). As part of Cloudera Search, we use this functionality to 
provide backup and disaster recovery functionality for Solr,

[https://blog.cloudera.com/blog/2017/05/how-to-backup-and-disaster-recovery-for-apache-solr-part-i/]

 

When user creates a snapshot, Solr associates user specified snapshot name with 
the latest commit point for each core associated with the given collection. 
Once the snapshot is created, Solr ensures that the files associated with the 
commit point associated with the snapshot name are not deleted (e.g. as part of 
optimize operation). It also records the snapshot metadata in Zookeeper and 
provides access to it via Collections API. Now you are free to use any 
mechanism to copy these index files to remote location (e.g. in our case we use 
DistCp - a tool specifically designed large scale data copy which also works 
well with cloud object stores). I agree with your point about slow restore 
operation. May be we can extend the snapshot API to restore in-place ? e.g. 
create index.xxx directory automatically and copy the files. Once this is done, 
we can just switch the index directory on-the-fly (just the way we do at the 
time of full replication as part of core recovery). 

 

 

 

> Confusing error reporting if backup attempted on non-shared FS
> --
>
> Key: SOLR-12523
> URL: https://issues.apache.org/jira/browse/SOLR-12523
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.3.1
>Reporter: Timothy Potter
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12523.patch
>
>
> So I have a large collection with 4 shards across 2 nodes. When I try to back 
> it up with:
> {code}
> curl 
> "http://localhost:8984/solr/admin/collections?action=BACKUP=sigs=foo_signals=5=backups;
> {code}
> I either get:
> {code}
> "5170256188349065":{
>     "responseHeader":{
>       "status":0,
>       "QTime":0},
>     "STATUS":"failed",
>     "Response":"Failed to backup core=foo_signals_shard1_replica_n2 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/sigs"},
>   "5170256187999044":{
>     "responseHeader":{
>       "status":0,
>       "QTime":0},
>     "STATUS":"failed",
>     "Response":"Failed to backup core=foo_signals_shard3_replica_n10 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/sigs"},
> {code}
> or if I create the directory, then I get:
> {code}
> {
>   

[jira] [Commented] (SOLR-12523) Confusing error reporting if backup attempted on non-shared FS

2018-06-28 Thread Hrishikesh Gadre (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527029#comment-16527029
 ] 

Hrishikesh Gadre commented on SOLR-12523:
-

{quote}So for me, separating the concerns of creating the snapshot for each 
shard (Solr's job) and moving big files out to cloud storage (Solr needs to do 
much better in this regard or punt) is what I'm looking for.
{quote}
[~thelabdude] this is the exact use case for which we added the snapshots 
mechanism (ref: SOLR-9038). As part of Cloudera Search, we use this 
functionality to provide backup and disaster recovery for Solr:

[https://blog.cloudera.com/blog/2017/05/how-to-backup-and-disaster-recovery-for-apache-solr-part-i/]

When a user creates a snapshot, Solr associates the user-specified snapshot 
name with the latest commit point of each core in the given collection. Once 
the snapshot is created, Solr ensures that the files belonging to that commit 
point are not deleted (e.g. as part of an optimize operation). It also records 
the snapshot metadata in ZooKeeper and provides access to it via the 
Collections API. You are then free to use any mechanism to copy these index 
files to a remote location (in our case we use DistCp, a tool specifically 
designed for large-scale data copy that also works well with cloud object 
stores). I agree with your point about the slow restore operation. Maybe we 
can extend the snapshot API to restore in-place? E.g. create the index.xxx 
directory automatically and copy the files. Once this is done, we can just 
switch the index directory on-the-fly (the same way we do during full 
replication as part of core recovery).
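As a hedged sketch, the workflow described above maps onto the snapshot-related Collections API actions introduced by SOLR-9038. The host, collection name, and snapshot name below are illustrative placeholders, and the helper function is made up for this sketch:

```python
# Illustrative sketch of the snapshot workflow described above (SOLR-9038).
# Host, collection, and snapshot names are placeholders.
from urllib.parse import urlencode

COLLECTIONS_API = "http://localhost:8983/solr/admin/collections"

def api_url(action, collection, **params):
    """Build a Collections API URL for a snapshot operation."""
    query = urlencode({"action": action, "collection": collection, **params})
    return COLLECTIONS_API + "?" + query

# 1. Pin the latest commit point of every core under a snapshot name:
create = api_url("CREATESNAPSHOT", "techproducts", commitName="nightly")
# 2. Inspect the snapshot metadata recorded in ZooKeeper:
listing = api_url("LISTSNAPSHOTS", "techproducts")
# 3. Copy the preserved index files off-cluster with an external tool
#    (e.g. DistCp), then release the pinned commit point:
delete = api_url("DELETESNAPSHOT", "techproducts", commitName="nightly")
```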

 

 

 

> Confusing error reporting if backup attempted on non-shared FS
> --
>
> Key: SOLR-12523
> URL: https://issues.apache.org/jira/browse/SOLR-12523
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.3.1
>Reporter: Timothy Potter
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12523.patch
>
>
> So I have a large collection with 4 shards across 2 nodes. When I try to back 
> it up with:
> {code}
> curl 
> "http://localhost:8984/solr/admin/collections?action=BACKUP=sigs=foo_signals=5=backups;
> {code}
> I either get:
> {code}
> "5170256188349065":{
>     "responseHeader":{
>       "status":0,
>       "QTime":0},
>     "STATUS":"failed",
>     "Response":"Failed to backup core=foo_signals_shard1_replica_n2 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/sigs"},
>   "5170256187999044":{
>     "responseHeader":{
>       "status":0,
>       "QTime":0},
>     "STATUS":"failed",
>     "Response":"Failed to backup core=foo_signals_shard3_replica_n10 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/sigs"},
> {code}
> or if I create the directory, then I get:
> {code}
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":2},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  The backup directory already exists: file:///vol1/cloud84/backups/sigs/",
>   "exception":{
>     "msg":"The backup directory already exists: 
> file:///vol1/cloud84/backups/sigs/",
>     "rspCode":400},
>   "status":{
>     "state":"failed",
>     "msg":"found [2] in failed tasks"}}
> {code}
> I'm thinking this has to do with having 2 cores from the same collection on 
> the same node but I can't get a collection with 1 shard on each node to work 
> either:
> {code}
> "ec2-52-90-245-38.compute-1.amazonaws.com:8984_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://ec2-52-90-245-38.compute-1.amazonaws.com:8984/solr: 
> Failed to backup core=system_jobs_history_shard2_replica_n6 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/ugh1"}
> {code}
> What's weird is that replica (system_jobs_history_shard2_replica_n6) is not 
> even on the ec2-52-90-245-38.compute-1.amazonaws.com node! It lives on a 
> different node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8370) Reproducing TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() failures

2018-06-28 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved LUCENE-8370.

   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

Both failures Steve pointed out had the same root cause. Thanks for pointing 
them out, Steve.

I also updated the Javadocs and ref guide as per Mike's question, stating that 
maxSegments is implemented on a "best effort" basis.

TieredMergePolicy didn't change code-wise, just comments.
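The "best effort" maxSegments behavior mentioned above is what an optimize (forceMerge) request passes down to the merge policy. A minimal sketch of such a request; the host and core name are hypothetical:

```shell
# Ask Solr to merge the index down to at most 2 segments.
# With TieredMergePolicy this is a best-effort target, not a guarantee:
# the policy may legitimately leave more segments behind.
curl "http://localhost:8983/solr/foo_core/update?optimize=true&maxSegments=2"
```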

> Reproducing 
> TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()
>  failures
> 
>
> Key: LUCENE-8370
> URL: https://issues.apache.org/jira/browse/LUCENE-8370
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, general/test
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: LUCENE-8370.patch
>
>
> Policeman Jenkins found a reproducing seed for a 
> {{TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()}}
>  failure [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22320/]; 
> {{git bisect}} blames commit {{2519025}} on LUCENE-7976:
> {noformat}
> Checking out Revision 8c714348aeea51df19e7603905f85995bcf0371c 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLucene70DocValuesFormat 
> -Dtests.method=testSortedSetVariableLengthBigVsStoredFields 
> -Dtests.seed=63A61B46A6934B1A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sw-TZ -Dtests.timezone=Pacific/Pitcairn -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 23.3s J2 | 
> TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields <<<
>[junit4]> Throwable #1: java.lang.AssertionError: limit=4 actual=5
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([63A61B46A6934B1A:6BE93FA35E02851]:0)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.doRandomForceMerge(RandomIndexWriter.java:372)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:386)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332)
>[junit4]>  at 
> org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestSortedSetVsStoredFields(BaseDocValuesFormatTestCase.java:2155)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields(TestLucene70DocValuesFormat.java:93)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
> docValues:{}, maxPointsInLeafNode=693, maxMBSortInHeap=5.078503794479895, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@20a604e6),
>  locale=sw-TZ, timezone=Pacific/Pitcairn
>[junit4]   2> NOTE: Linux 4.13.0-41-generic amd64/Oracle Corporation 9.0.4 
> (64-bit)/cpus=8,threads=1,free=352300304,total=518979584
> {noformat}






[jira] [Commented] (LUCENE-8370) Reproducing TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() failures

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527027#comment-16527027
 ] 

ASF subversion and git services commented on LUCENE-8370:
-

Commit 1f5c75cb9a3704db395cd13140005130dcf726c0 in lucene-solr's branch 
refs/heads/branch_7x from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1f5c75c ]

LUCENE-8370: Reproducing 
TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() 
failures

(cherry picked from commit c303c5f)


> Reproducing 
> TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()
>  failures
> 
>
> Key: LUCENE-8370
> URL: https://issues.apache.org/jira/browse/LUCENE-8370
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, general/test
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-8370.patch
>
>
> Policeman Jenkins found a reproducing seed for a 
> {{TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()}}
>  failure [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22320/]; 
> {{git bisect}} blames commit {{2519025}} on LUCENE-7976:
> {noformat}
> Checking out Revision 8c714348aeea51df19e7603905f85995bcf0371c 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLucene70DocValuesFormat 
> -Dtests.method=testSortedSetVariableLengthBigVsStoredFields 
> -Dtests.seed=63A61B46A6934B1A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sw-TZ -Dtests.timezone=Pacific/Pitcairn -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 23.3s J2 | 
> TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields <<<
>[junit4]> Throwable #1: java.lang.AssertionError: limit=4 actual=5
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([63A61B46A6934B1A:6BE93FA35E02851]:0)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.doRandomForceMerge(RandomIndexWriter.java:372)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:386)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332)
>[junit4]>  at 
> org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestSortedSetVsStoredFields(BaseDocValuesFormatTestCase.java:2155)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields(TestLucene70DocValuesFormat.java:93)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
> docValues:{}, maxPointsInLeafNode=693, maxMBSortInHeap=5.078503794479895, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@20a604e6),
>  locale=sw-TZ, timezone=Pacific/Pitcairn
>[junit4]   2> NOTE: Linux 4.13.0-41-generic amd64/Oracle Corporation 9.0.4 
> (64-bit)/cpus=8,threads=1,free=352300304,total=518979584
> {noformat}






[jira] [Commented] (LUCENE-8370) Reproducing TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() failures

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16527021#comment-16527021
 ] 

ASF subversion and git services commented on LUCENE-8370:
-

Commit c303c5f126bd6ea26bf651684041f7cb499bf579 in lucene-solr's branch 
refs/heads/master from Erick Erickson
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c303c5f ]

LUCENE-8370: Reproducing 
TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() 
failures


> Reproducing 
> TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()
>  failures
> 
>
> Key: LUCENE-8370
> URL: https://issues.apache.org/jira/browse/LUCENE-8370
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, general/test
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-8370.patch
>
>
> Policeman Jenkins found a reproducing seed for a 
> {{TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()}}
>  failure [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22320/]; 
> {{git bisect}} blames commit {{2519025}} on LUCENE-7976:
> {noformat}
> Checking out Revision 8c714348aeea51df19e7603905f85995bcf0371c 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLucene70DocValuesFormat 
> -Dtests.method=testSortedSetVariableLengthBigVsStoredFields 
> -Dtests.seed=63A61B46A6934B1A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sw-TZ -Dtests.timezone=Pacific/Pitcairn -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 23.3s J2 | 
> TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields <<<
>[junit4]> Throwable #1: java.lang.AssertionError: limit=4 actual=5
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([63A61B46A6934B1A:6BE93FA35E02851]:0)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.doRandomForceMerge(RandomIndexWriter.java:372)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:386)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332)
>[junit4]>  at 
> org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestSortedSetVsStoredFields(BaseDocValuesFormatTestCase.java:2155)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields(TestLucene70DocValuesFormat.java:93)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
> docValues:{}, maxPointsInLeafNode=693, maxMBSortInHeap=5.078503794479895, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@20a604e6),
>  locale=sw-TZ, timezone=Pacific/Pitcairn
>[junit4]   2> NOTE: Linux 4.13.0-41-generic amd64/Oracle Corporation 9.0.4 
> (64-bit)/cpus=8,threads=1,free=352300304,total=518979584
> {noformat}






[jira] [Updated] (LUCENE-8370) Reproducing TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() failures

2018-06-28 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated LUCENE-8370:
---
Attachment: LUCENE-8370.patch

> Reproducing 
> TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()
>  failures
> 
>
> Key: LUCENE-8370
> URL: https://issues.apache.org/jira/browse/LUCENE-8370
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, general/test
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-8370.patch
>
>
> Policeman Jenkins found a reproducing seed for a 
> {{TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()}}
>  failure [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22320/]; 
> {{git bisect}} blames commit {{2519025}} on LUCENE-7976:
> {noformat}
> Checking out Revision 8c714348aeea51df19e7603905f85995bcf0371c 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLucene70DocValuesFormat 
> -Dtests.method=testSortedSetVariableLengthBigVsStoredFields 
> -Dtests.seed=63A61B46A6934B1A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sw-TZ -Dtests.timezone=Pacific/Pitcairn -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 23.3s J2 | 
> TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields <<<
>[junit4]> Throwable #1: java.lang.AssertionError: limit=4 actual=5
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([63A61B46A6934B1A:6BE93FA35E02851]:0)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.doRandomForceMerge(RandomIndexWriter.java:372)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:386)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332)
>[junit4]>  at 
> org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestSortedSetVsStoredFields(BaseDocValuesFormatTestCase.java:2155)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields(TestLucene70DocValuesFormat.java:93)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
> docValues:{}, maxPointsInLeafNode=693, maxMBSortInHeap=5.078503794479895, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@20a604e6),
>  locale=sw-TZ, timezone=Pacific/Pitcairn
>[junit4]   2> NOTE: Linux 4.13.0-41-generic amd64/Oracle Corporation 9.0.4 
> (64-bit)/cpus=8,threads=1,free=352300304,total=518979584
> {noformat}






[jira] [Updated] (LUCENE-8375) Remove "Lucene Fields" checkboxes "New" and "Patch Available" from JIRA issues

2018-06-28 Thread Steve Rowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-8375:
---
Review Patch?:   (was: Yes)

> Remove "Lucene Fields" checkboxes "New" and "Patch Available" from JIRA issues
> --
>
> Key: LUCENE-8375
> URL: https://issues.apache.org/jira/browse/LUCENE-8375
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Trivial
>
> The LUCENE JIRA project includes a set of checkboxes called "Lucene Fields": 
> "New" and "Patch Available".
> I think we should remove these, since AFAIK they are never used.  Also, given 
> the "Patch Available" status used by Yetus to enable automatic patch review, 
> it's confusing to have a separate, *unrelated* checkbox with the same label.






[jira] [Updated] (LUCENE-8375) Remove "Lucene Fields" checkboxes "New" and "Patch Available" from JIRA issues

2018-06-28 Thread Steve Rowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-8375:
---
Review Patch?: Yes

> Remove "Lucene Fields" checkboxes "New" and "Patch Available" from JIRA issues
> --
>
> Key: LUCENE-8375
> URL: https://issues.apache.org/jira/browse/LUCENE-8375
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Trivial
>
> The LUCENE JIRA project includes a set of checkboxes called "Lucene Fields": 
> "New" and "Patch Available".
> I think we should remove these, since AFAIK they are never used.  Also, given 
> the "Patch Available" status used by Yetus to enable automatic patch review, 
> it's confusing to have a separate, *unrelated* checkbox with the same label.






Re: [DISCUSS] Request for review of proposed LUCENE/SOLR JIRA workflow change

2018-06-28 Thread Steve Rowe
> On Jun 28, 2018, at 7:48 PM, Steve Rowe  wrote:
> 
>>> ok and these Lucene Fields, two checkboxes, New and Patch Available... I 
>>> just don't think we really use this but I should raise this separately.
>> 
>> I think we should remove these.  In a chat on Infra Hipchat, Gavin offered 
>> to do this, but since the Lucene PMC has control of this (as part of “screen 
>> configuration”, which is separate from “workflow” configuration), I told him 
>> we would tackle it ourselves.
> 
> I’ll make a JIRA for this.

Done: https://issues.apache.org/jira/browse/LUCENE-8375

--
Steve
www.lucidworks.com





[jira] [Created] (LUCENE-8375) Remove "Lucene Fields" checkboxes "New" and "Patch Available" from JIRA issues

2018-06-28 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-8375:
--

 Summary: Remove "Lucene Fields" checkboxes "New" and "Patch 
Available" from JIRA issues
 Key: LUCENE-8375
 URL: https://issues.apache.org/jira/browse/LUCENE-8375
 Project: Lucene - Core
  Issue Type: Task
Reporter: Steve Rowe
Assignee: Steve Rowe


The LUCENE JIRA project includes a set of checkboxes called "Lucene Fields": 
"New" and "Patch Available".

I think we should remove these, since AFAIK they are never used.  Also, given 
the "Patch Available" status used by Yetus to enable automatic patch review, 
it's confusing to have a separate, *unrelated* checkbox with the same label.







[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 2216 - Unstable!

2018-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2216/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseSerialGC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([BC9E6F48623E9B31]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.createMiniSolrCloudCluster(TestStressCloudBlindAtomicUpdates.java:138)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([BC9E6F48623E9B31]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.afterClass(TestStressCloudBlindAtomicUpdates.java:158)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Re: [DISCUSS] Request for review of proposed LUCENE/SOLR JIRA workflow change

2018-06-28 Thread Steve Rowe
The new workflow is now enabled for LUCENE and SOLR JIRA projects.

The new workflow differs in a few respects from my previous summary - see 
details inline below:

> On Jun 19, 2018, at 1:53 PM, Steve Rowe  wrote:
> 
> Summary of the workflow changes: 
> 
> 1. The “Submit Patch” button will be relabeled “Attach Patch”, and will bring 
> up the dialog to attach a patch, with a simultaneous comment (rather than 
> just changing the issue status).  This button will remain visible regardless 
> of issue status, so that it can be used to attach more patches.

The new button label was changed to “Attach Files”, since it can be used to 
attach non-patch files.

> 2. In the “Attach Patch” dialog, there will be a checkbox labeled “Enable 
> Automatic Patch Validation”, which will be checked by default.  If checked, 
> the issue’s status will transition to “Patch Available” (which signals Yetus 
> to perform automatic patch validation); if not checked, the patch will be 
> attached but no status transition will occur. NOTE: Gavin is still working on 
> adding this checkbox, so it’s not demo’d on INFRATEST1 issues yet, but he 
> says it’s doable and that he’ll work on it tomorrow, Australia time.
 
Since Gavin couldn’t get the “Enable Automatic Patch Validation” checkbox 
functionality to work, attaching a file using the “Attach Files” dialog will 
never perform any status transitions at all.  Instead, users will 
enable/disable automatic patch validation via the “Enable Patch Review” and 
“Cancel Patch Review” buttons.

> 3. When in “Patch Available” status, a button labeled “Cancel Patch Review” 
> will be visible; clicking on it will transition the issue status to “Open”, 
> thus disabling automatic patch review.
> 
> 4. The “Start Progress”/“Stop Progress”/“In Progress” aspects of the workflow 
> have been removed, because if they remain, JIRA creates a “Workflow” menu and 
> puts the “Attach Patch” button under it, which kind of defeats its purpose: 
> an obvious way to submit contributions.  I asked Gavin to remove the 
> “Progress” related aspects of the workflow because I don’t think they’re 
> being used except on a limited ad-hoc basis, not part of a conventional 
> workflow.
> -
> 
> Separate issue: on the thread where Cassandra moved the “Enviroment” field 
> below “Description” on the Create JIRA dialog[4], David Smiley wrote[5]:
> 
>> ok and these Lucene Fields, two checkboxes, New and Patch Available... I 
>> just don't think we really use this but I should raise this separately.
> 
> I think we should remove these.  In a chat on Infra Hipchat, Gavin offered to 
> do this, but since the Lucene PMC has control of this (as part of “screen 
> configuration”, which is separate from “workflow” configuration), I told him 
> we would tackle it ourselves.

I’ll make a JIRA for this.

> [1] Enable Yetus for LUCENE/SOLR: 
> https://issues.apache.org/jira/browse/INFRA-15213
> [2] Modify LUCENE/SOLR Yetus-enabling workflow: 
> https://issues.apache.org/jira/browse/INFRA-16094
> [3] Demo of proposed LUCENE/SOLR workflow: 
> https://issues.apache.org/jira/projects/INFRATEST1
> [4] Cassandra fixes Create JIRA dialog: 
> https://lists.apache.org/thread.html/0efebe2fb08c7584421422d6005401a987a2b54bf604ae317b6e102f@%3Cdev.lucene.apache.org%3E
> [5] David Smiley says "Lucene fields” are unused: 
> https://lists.apache.org/thread.html/a17bd3b5797c12903d3c6bacb348e8b4325c59609765964527412ba4@%3Cdev.lucene.apache.org%3E

--
Steve
www.lucidworks.com





[jira] [Updated] (SOLR-12516) JSON "range" facets can incorrectly refine subfacets for buckets

2018-06-28 Thread Hoss Man (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man updated SOLR-12516:

Description: 
while simple {{type:range}} facets don't benefit from refinement, because every 
shard returns the same set of buckets, some bugs currently exist when a range 
facet contains sub facets that use refinement:

# the optional {{other}} buckets (before/after/between) are not considered 
during refinement
# when using the {{include}} option: if {{edge}} is specified, then the 
refinement of all range buckets mistakenly includes the lower bound of the 
range, regardless of whether {{lower}} was specified.



#1 occurs because {{FacetRangeMerger extends 
FacetRequestSortedMerger}} ... however {{FacetRangeMerger}} does 
not override {{getRefinement(...)}} which means only 
{{FacetRequestSortedMerger.buckets}} is evaluated and considered for 
refinement. The additional, special purpose, {{FacetBucket}} instances tracked 
in {{FacetRangeMerger}} are never considered for refinement.

#2 exists because of a mistake in the implementation of {{refineBucket}} and 
how it computes the {{start}} value.

  was:
{{FacetRangeMerger extends FacetRequestSortedMerger}} ... however 
{{FacetRangeMerger}} does not override {{getRefinement(...)}} which means only 
{{FacetRequestSortedMerger.buckets}} is evaluated and considered for 
refinement. The additional, special purpose, {{FacetBucket}} instances tracked 
in {{FacetRangeMerger}} are never considered for refinement.

In a simple range facet this doesn't cause any problems because these buckets 
are returned by every shard on the phase#1 request -- but *if a sub-facet (such 
as a field facet) is nested under a range facet then the buckets returned by 
the sub-facets for the before/between/after buckets will never be refined* ... 
the phase#1 sub-facet buckets will be merged as-is.

Summary: JSON "range" facets can incorrectly refine subfacets for 
buckets  (was: JSON "range" facets don't refine sub-facets under special 
buckets (before,after,between))

revising issue summary & description based on expanded findings of the error 
cases
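A request shape that exercises both problem cases described above (a range facet with {{other}} buckets and {{include:edge}}, plus a refined sub-facet) might look like the following. The host, collection, and field names are hypothetical; Solr's JSON request parser accepts the inline comments:

```shell
curl http://localhost:8983/solr/prices/query -d '{
  "query": "*:*",
  "facet": {
    "price_ranges": {
      "type": "range",
      "field": "price",
      "start": 0, "end": 100, "gap": 20,
      "other": "all",     // before/after/between buckets: skipped during refinement (bug #1)
      "include": "edge",  // lower bound mistakenly re-included when refining (bug #2)
      "facet": {
        "top_vendors": { "type": "terms", "field": "vendor", "refine": true }
      }
    }
  }
}'
```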

> JSON "range" facets can incorrectly refine subfacets for buckets
> 
>
> Key: SOLR-12516
> URL: https://issues.apache.org/jira/browse/SOLR-12516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Hoss Man
>Priority: Major
> Attachments: SOLR-12516.patch, SOLR-12516.patch
>
>
> while simple {{type:range}} facets don't benefit from refinement, because 
> every shard returns the same set of buckets, some bugs currently exist when a 
> range facet contains sub facets that use refinement:
> # the optional {{other}} buckets (before/after/between) are not considered 
> during refinement
> # when using the {{include}} option: if {{edge}} is specified, then the 
> refinement of all range buckets mistakenly includes the lower bound of the 
> range, regardless of whether {{lower}} was specified.
> 
> #1 occurs because {{FacetRangeMerger extends 
> FacetRequestSortedMerger}} ... however {{FacetRangeMerger}} does 
> not override {{getRefinement(...)}} which means only 
> {{FacetRequestSortedMerger.buckets}} is evaluated and considered for 
> refinement. The additional, special purpose, {{FacetBucket}} instances 
> tracked in {{FacetRangeMerger}} are never considered for refinement.
> #2 exists because of a mistaken in the implementation of {{refineBucket}} and 
> how it computes the {{start}} value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12525) UnsupportedOperationException when running Solr 5.3 with JDK10

2018-06-28 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12525.

Resolution: Not A Bug

Closing as not a bug. Java 9 and later are only supported from Solr 7.x. Please 
upgrade, or run with Java 8, which is probably the best Java version for your 
Solr version.
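As a sketch (the path is an example, not from this issue), pinning an older 
Solr to a Java 8 runtime can be done in {{solr.in.sh}}:

{code}
# solr.in.sh -- point Solr at a Java 8 installation
# (adjust the path to your local JDK 8 install)
SOLR_JAVA_HOME=/usr/lib/jvm/java-8-openjdk
{code}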

> UnsupportedOperationException when running Solr 5.3 with JDK10
> --
>
> Key: SOLR-12525
> URL: https://issues.apache.org/jira/browse/SOLR-12525
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 5.3.1
>Reporter: Ethan Li
>Priority: Major
>
> Although the Solr 5.3.1 documentation says that it runs with JDK 7 or above, 
> we ran into the following problems when trying to run Solr 5.3.1 with JDK 10:
> We removed the following Java options from solr.in.sh, as Solr suggested, 
> because it would not start:
> UseConcMarkSweepGC
>  UseParNewGC
>  PrintHeapAtGC
>  PrintGCDateStamps
>  PrintGCTimeStamps
>  PrintTenuringDistribution
>  PrintGCApplicationStoppedTime
> And the options left in solr.in.sh:
>  # Enable verbose GC logging
>  GC_LOG_OPTS="-verbose:gc -XX:+PrintGCDetails"
>  # These GC settings have shown to work well for a number of common Solr 
> workloads
>  GC_TUNE="-XX:NewRatio=3 \
>  -XX:SurvivorRatio=4 \
>  -XX:TargetSurvivorRatio=90 \
>  -XX:MaxTenuringThreshold=8 \
>  -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
>  -XX:+CMSScavengeBeforeRemark \
>  -XX:PretenureSizeThreshold=64m \
>  -XX:+UseCMSInitiatingOccupancyOnly \
>  -XX:CMSInitiatingOccupancyFraction=50 \
>  -XX:CMSMaxAbortablePrecleanTime=6000 \
>  -XX:+CMSParallelRemarkEnabled \
>  -XX:+ParallelRefProcEnabled"
> After that, Solr starts, but it logs the following warnings and errors:
> [0.001s][warning][gc] -Xloggc is deprecated. Will use 
> -Xlog:gc:/solr/logs/solr_gc.log instead.
>  [0.001s][warning][gc] -XX:+PrintGCDetails is deprecated. Will use -Xlog:gc* 
> instead.
>  [0.003s][info ][gc] Using Serial
>  WARNING: System properties and/or JVM args set. Consider using --dry-run or 
> --exec
>  0 INFO (main) [ ] o.e.j.u.log Logging initialized @532ms
>  205 INFO (main) [ ] o.e.j.s.Server jetty-9.2.11.v20150529
>  218 WARN (main) [ ] o.e.j.s.h.RequestLogHandler !RequestLog
>  220 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor 
> file:/home/solr/solr-5.3.1/server/contexts/
>  at interval 0
>  559 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for 
> /solr, did not find org.apache.jasper.servlet.JspServlet
>  569 WARN (main) [ ] o.e.j.s.SecurityHandler 
> ServletContext@o.e.j.w.WebAppContext@1a75e76a
> {/solr,file:/home/solr/solr-5.3.1/server/solr-webapp/webapp/,STARTING}
> {/home/solr/solr-5.3.1/server/solr-webapp/webapp} has uncovered http methods 
> for path: /
>  577 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 
> WebAppClassLoader=1904783235@7188af83
>  625 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr 
> (NoInitialContextEx)
>  626 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property 
> solr.solr.home: /solr/data
>  627 INFO (main) [ ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for 
> directory: '/solr/data/'
>  750 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration 
> from /solr/data/solr.xml
>  817 INFO (main) [ ] o.a.s.c.CoresLocator Config-defined core root directory: 
> /solr/data
>  [1.402s][info ][gc] GC(0) Pause Full (Metadata GC Threshold) 85M->7M(490M) 
> 37.281ms
>  875 INFO (main) [ ] o.a.s.c.CoreContainer New CoreContainer 1193398802
>  875 INFO (main) [ ] o.a.s.c.CoreContainer Loading cores into CoreContainer 
> [instanceDir=/solr/data/]
>  875 INFO (main) [ ] o.a.s.c.CoreContainer loading shared library: 
> /solr/data/lib
>  875 WARN (main) [ ] o.a.s.c.SolrResourceLoader Can't find (or read) 
> directory to add to classloader: lib (resolved as: /solr/data/lib).
>  889 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory created with 
> socketTimeout : 60,connTimeout : 6,maxConnectionsPerHost : 
> 20,maxConnections : 1,corePoolSize : 0,maximumPoolSize : 
> 2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
> false,useRetries : false,
>  1036 INFO (main) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler 
> HTTP client with params: socketTimeout=60=6=true
>  1038 INFO (main) [ ] o.a.s.l.LogWatcher SLF4J impl is 
> org.slf4j.impl.Log4jLoggerFactory
>  1039 INFO (main) [ ] o.a.s.l.LogWatcher Registering Log Listener [Log4j 
> (org.slf4j.impl.Log4jLoggerFactory)]
>  1040 INFO (main) [ ] o.a.s.c.CoreContainer Security conf doesn't exist. 
> Skipping setup for authorization module.
>  1041 INFO (main) [ ] 

[JENKINS] Lucene-Solr-Tests-master - Build # 2577 - Unstable

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2577/

2 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudPivotFacet.test

Error Message:
KeeperErrorCode = Session expired for /clusterstate.json

Stack Trace:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /clusterstate.json
at 
__randomizedtesting.SeedInfo.seed([DCD73BE5052DFF58:5483043FABD192A0]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:130)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1215)
at 
org.apache.solr.common.cloud.SolrZkClient.lambda$getData$5(SolrZkClient.java:341)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:341)
at 
org.apache.solr.common.cloud.ZkStateReader.refreshLegacyClusterState(ZkStateReader.java:567)
at 
org.apache.solr.common.cloud.ZkStateReader.forceUpdateCollection(ZkStateReader.java:371)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.updateMappingsFromZk(AbstractFullDistribZkTestBase.java:681)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.updateMappingsFromZk(AbstractFullDistribZkTestBase.java:676)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:471)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:341)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1006)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 250 - Still Unstable

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/250/

6 tests failed.
FAILED:  org.apache.solr.TestCrossCoreJoin.testJoin

Error Message:
mismatch: '1'!='5' @ response/docs/[0]/id

Stack Trace:
java.lang.RuntimeException: mismatch: '1'!='5' @ response/docs/[0]/id
at 
__randomizedtesting.SeedInfo.seed([B342DA41149789B6:8E71EFCB75C08E20]:0)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:1005)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:952)
at 
org.apache.solr.TestCrossCoreJoin.doTestJoin(TestCrossCoreJoin.java:88)
at org.apache.solr.TestCrossCoreJoin.testJoin(TestCrossCoreJoin.java:79)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.TestCrossCoreJoin.testScoreJoin

Error Message:
mismatch: '1'!='5' @ response/docs/[0]/id

Stack Trace:
java.lang.RuntimeException: mismatch: '1'!='5' @ response/docs/[0]/id
at 

[jira] [Resolved] (SOLR-12529) Ref Guide: clean up how to publish Ref Guide docs

2018-06-28 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-12529.
--
Resolution: Fixed

> Ref Guide: clean up how to publish Ref Guide docs
> -
>
> Key: SOLR-12529
> URL: https://issues.apache.org/jira/browse/SOLR-12529
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.5
>
>
> When I first wrote the How to Publish the Ref Guide docs 
> ({{solr/solr-ref-guide/meta-docs/publish.adoc}}) I assumed that PDF and HTML 
> versions would be built & released separately. That's not the case - I always 
> do them at the same time, but I rely on the docs for each step and find 
> myself having to jump back and forth across the page.
> This will merge the separate PDF and HTML sections into a single seamless 
> process that covers both versions.






[jira] [Commented] (SOLR-12529) Ref Guide: clean up how to publish Ref Guide docs

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526760#comment-16526760
 ] 

ASF subversion and git services commented on SOLR-12529:


Commit 4212da569232624d803cece13f5f6b5d0695b30f in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4212da5 ]

SOLR-12529: clean up how to publish ref guide docs


> Ref Guide: clean up how to publish Ref Guide docs
> -
>
> Key: SOLR-12529
> URL: https://issues.apache.org/jira/browse/SOLR-12529
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.5
>
>
> When I first wrote the How to Publish the Ref Guide docs 
> ({{solr/solr-ref-guide/meta-docs/publish.adoc}}) I assumed that PDF and HTML 
> versions would be built & released separately. That's not the case - I always 
> do them at the same time, but I rely on the docs for each step and find 
> myself having to jump back and forth across the page.
> This will merge the separate PDF and HTML sections into a single seamless 
> process that covers both versions.






[jira] [Commented] (SOLR-12529) Ref Guide: clean up how to publish Ref Guide docs

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526759#comment-16526759
 ] 

ASF subversion and git services commented on SOLR-12529:


Commit 38c33de24c2f008ab4e010823cb1006170d109e5 in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=38c33de ]

SOLR-12529: clean up how to publish ref guide docs


> Ref Guide: clean up how to publish Ref Guide docs
> -
>
> Key: SOLR-12529
> URL: https://issues.apache.org/jira/browse/SOLR-12529
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Cassandra Targett
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.5
>
>
> When I first wrote the How to Publish the Ref Guide docs 
> ({{solr/solr-ref-guide/meta-docs/publish.adoc}}) I assumed that PDF and HTML 
> versions would be built & released separately. That's not the case - I always 
> do them at the same time, but I rely on the docs for each step and find 
> myself having to jump back and forth across the page.
> This will merge the separate PDF and HTML sections into a single seamless 
> process that covers both versions.






[jira] [Assigned] (SOLR-12326) Unnecessary refinement requests

2018-06-28 Thread Yonik Seeley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley reassigned SOLR-12326:
---

  Assignee: Yonik Seeley
Attachment: SOLR-12326.patch

Draft patch attached.  TestJsonFacetRefinement still fails, I assume because 
not all field faceting implementations return "more" yet.  More tests to be 
added as well.

> Unnecessary refinement requests
> ---
>
> Key: SOLR-12326
> URL: https://issues.apache.org/jira/browse/SOLR-12326
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Yonik Seeley
>Priority: Major
> Attachments: SOLR-12326.patch
>
>
> TestJsonFacets.testStatsDistrib() appears to result in more refinement 
> requests than would otherwise be expected.  Those tests were developed before 
> refinement was implemented and hence do not need refinement to generate 
> correct results due to limited numbers of buckets.  This should be detectable 
> by refinement code in the majority of cases to prevent extra work from being 
> done.






[jira] [Commented] (SOLR-12458) ADLS support for SOLR

2018-06-28 Thread Mike Wingert (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526711#comment-16526711
 ] 

Mike Wingert commented on SOLR-12458:
-

The tests that failed don't appear to be caused by my patch.

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  






[jira] [Updated] (SOLR-12458) ADLS support for SOLR

2018-06-28 Thread Mike Wingert (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated SOLR-12458:

Attachment: SOLR-12458.patch

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is an HDFS-like API available in Microsoft Azure.
>  






[jira] [Created] (SOLR-12529) Ref Guide: clean up how to publish Ref Guide docs

2018-06-28 Thread Cassandra Targett (JIRA)
Cassandra Targett created SOLR-12529:


 Summary: Ref Guide: clean up how to publish Ref Guide docs
 Key: SOLR-12529
 URL: https://issues.apache.org/jira/browse/SOLR-12529
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: documentation
Reporter: Cassandra Targett
Assignee: Cassandra Targett
 Fix For: 7.5


When I first wrote the How to Publish the Ref Guide docs 
({{solr/solr-ref-guide/meta-docs/publish.adoc}}) I assumed that PDF and HTML 
versions would be built & released separately. That's not the case - I always 
do them at the same time, but I rely on the docs for each step and find myself 
having to jump back and forth across the page.

This will merge the separate PDF and HTML sections into a single seamless 
process that covers both versions.






[GitHub] lucene-solr issue #411: Debugging PriorityQueue.java

2018-06-28 Thread mikemccand
Github user mikemccand commented on the issue:

https://github.com/apache/lucene-solr/pull/411
  
Great, thanks @rsaavedraf!





[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 22340 - Still Unstable!

2018-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22340/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseG1GC

12 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([3C4A9AFBA481248]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.createMiniSolrCloudCluster(TestStressCloudBlindAtomicUpdates.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates

Error Message:


Stack Trace:
java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([3C4A9AFBA481248]:0)
at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.afterClass(TestStressCloudBlindAtomicUpdates.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 250 - Failure

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/250/

No tests ran.

Build Log:
[...truncated 24200 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2232 links (1782 relative) to 3000 anchors in 230 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.5.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml


Re: [jira] [Updated] (SOLR-10243) Fix TestExtractionDateUtil.testParseDate sporadic failures

2018-06-28 Thread David Smiley
Yeah sure was.  This was the most thankless bug hunt I can remember because
it'll basically never actually occur in production Solr.  But we have a
failing test so... And the ultimate fix was already known (to me) anyway:
Don't use SimpleDateFormat.  It sucks, I knew it before, and here's
more evidence why it does.  Yawn.  Move on to java.time...

On Thu, Jun 28, 2018 at 1:54 PM Erick Erickson 
wrote:

> Gnarly!
>
> On Thu, Jun 28, 2018 at 10:49 AM, David Smiley (JIRA) 
> wrote:
> >
> >  [
> https://issues.apache.org/jira/browse/SOLR-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
> ]
> >
> > David Smiley updated SOLR-10243:
> > 
> > Attachment: SimpleDateFormatTimeZoneBug.java
> >
> >> Fix TestExtractionDateUtil.testParseDate sporadic failures
> >> --
> >>
> >> Key: SOLR-10243
> >> URL: https://issues.apache.org/jira/browse/SOLR-10243
> >> Project: Solr
> >>  Issue Type: Task
> >>  Security Level: Public(Default Security Level. Issues are Public)
> >>Reporter: David Smiley
> >>Assignee: David Smiley
> >>Priority: Major
> >> Attachments: SimpleDateFormatTimeZoneBug.java
> >>
> >>
> >> Jenkins test failure:
> >> {{ant test  -Dtestcase=TestExtractionDateUtil
> -Dtests.method=testParseDate -Dtests.seed=B72AC4792F31F74B
> -Dtests.slow=true -Dtests.locale=lv -Dtests.timezone=America/Metlakatla
> -Dtests.asserts=true -Dtests.file.encoding=UTF-8}}   It reproduces on 6x
> for me but not master.
> >> I reviewed this briefly and there seems to be a locale assumption in
> the test.
> >> 1 tests failed.
> >> FAILED:
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate
> >> Error Message:
> >> Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13
> 04:35:51 AKST 2008)
> >> Stack Trace:
> >> java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 !=
> 1226579751000 (Thu Nov 13 04:35:51 AKST 2008)
> >> at
> __randomizedtesting.SeedInfo.seed([B72AC4792F31F74B:FD33BC4C549880FE]:0)
> >> at org.junit.Assert.fail(Assert.java:93)
> >> at org.junit.Assert.assertTrue(Assert.java:43)
> >> at
> org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59)
> >> at
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54)
> >
> >
> >
> > --
> > This message was sent by Atlassian JIRA
> > (v7.6.3#76005)
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Updated] (SOLR-12458) ADLS support for SOLR

2018-06-28 Thread Mike Wingert (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated SOLR-12458:

Attachment: (was: HBASE-15320.master.13.patch)

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is a HDFS like API available in Microsoft Azure.   
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 655 - Still Unstable

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/655/

1 tests failed.
FAILED:  org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic

Error Message:
{} expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: {} expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([7FFE4C7F3C3189ED:D404516AE3ED0FC3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest.testBasic(SolrRrdBackendFactoryTest.java:90)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14795 lines...]
   [junit4] Suite: org.apache.solr.metrics.rrd.SolrRrdBackendFactoryTest
   [junit4]   2> Creating dataDir: 

[jira] [Updated] (SOLR-10243) Fix TestExtractionDateUtil.testParseDate sporadic failures

2018-06-28 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10243:

   Priority: Minor  (was: Major)
Component/s: contrib - Solr Cell (Tika extraction)

It's debatable whether this is a Solr bug or not, since bin/solr starts up with UTC.  
Oddly, this is configurable – I have no idea why anyone would want to change 
that, as we go out of our way to try to make everything independent of the 
default timezone.

> Fix TestExtractionDateUtil.testParseDate sporadic failures
> --
>
> Key: SOLR-10243
> URL: https://issues.apache.org/jira/browse/SOLR-10243
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - Solr Cell (Tika extraction)
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Attachments: SimpleDateFormatTimeZoneBug.java
>
>
> Jenkins test failure:
> {{ant test  -Dtestcase=TestExtractionDateUtil -Dtests.method=testParseDate 
> -Dtests.seed=B72AC4792F31F74B -Dtests.slow=true -Dtests.locale=lv 
> -Dtests.timezone=America/Metlakatla -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8}}   It reproduces on 6x for me but not master.
> I reviewed this briefly and there seems to be a locale assumption in the test.
> 1 tests failed.
> FAILED:  
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate
> Error Message:
> Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 
> 04:35:51 AKST 2008)
> Stack Trace:
> java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != 
> 1226579751000 (Thu Nov 13 04:35:51 AKST 2008)
> at 
> __randomizedtesting.SeedInfo.seed([B72AC4792F31F74B:FD33BC4C549880FE]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at 
> org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59)
> at 
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [jira] [Updated] (SOLR-10243) Fix TestExtractionDateUtil.testParseDate sporadic failures

2018-06-28 Thread Erick Erickson
Gnarly!

On Thu, Jun 28, 2018 at 10:49 AM, David Smiley (JIRA)  wrote:
>
>  [ 
> https://issues.apache.org/jira/browse/SOLR-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
>  ]
>
> David Smiley updated SOLR-10243:
> 
> Attachment: SimpleDateFormatTimeZoneBug.java
>
>> Fix TestExtractionDateUtil.testParseDate sporadic failures
>> --
>>
>> Key: SOLR-10243
>> URL: https://issues.apache.org/jira/browse/SOLR-10243
>> Project: Solr
>>  Issue Type: Task
>>  Security Level: Public(Default Security Level. Issues are Public)
>>Reporter: David Smiley
>>Assignee: David Smiley
>>Priority: Major
>> Attachments: SimpleDateFormatTimeZoneBug.java
>>
>>
>> Jenkins test failure:
>> {{ant test  -Dtestcase=TestExtractionDateUtil -Dtests.method=testParseDate 
>> -Dtests.seed=B72AC4792F31F74B -Dtests.slow=true -Dtests.locale=lv 
>> -Dtests.timezone=America/Metlakatla -Dtests.asserts=true 
>> -Dtests.file.encoding=UTF-8}}   It reproduces on 6x for me but not master.
>> I reviewed this briefly and there seems to be a locale assumption in the 
>> test.
>> 1 tests failed.
>> FAILED:  
>> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate
>> Error Message:
>> Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 
>> 04:35:51 AKST 2008)
>> Stack Trace:
>> java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != 
>> 1226579751000 (Thu Nov 13 04:35:51 AKST 2008)
>> at 
>> __randomizedtesting.SeedInfo.seed([B72AC4792F31F74B:FD33BC4C549880FE]:0)
>> at org.junit.Assert.fail(Assert.java:93)
>> at org.junit.Assert.assertTrue(Assert.java:43)
>> at 
>> org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59)
>> at 
>> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54)
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #411: Debugging PriorityQueue.java

2018-06-28 Thread rsaavedraf
Github user rsaavedraf commented on the issue:

https://github.com/apache/lucene-solr/pull/411
  
Thanks as well @mikemccand !
Happily tweeting about this small change being accepted :)
https://twitter.com/RaulSaavedra6/status/1012389058146861056



---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10243) Fix TestExtractionDateUtil.testParseDate sporadic failures

2018-06-28 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-10243:

Attachment: SimpleDateFormatTimeZoneBug.java

> Fix TestExtractionDateUtil.testParseDate sporadic failures
> --
>
> Key: SOLR-10243
> URL: https://issues.apache.org/jira/browse/SOLR-10243
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SimpleDateFormatTimeZoneBug.java
>
>
> Jenkins test failure:
> {{ant test  -Dtestcase=TestExtractionDateUtil -Dtests.method=testParseDate 
> -Dtests.seed=B72AC4792F31F74B -Dtests.slow=true -Dtests.locale=lv 
> -Dtests.timezone=America/Metlakatla -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8}}   It reproduces on 6x for me but not master.
> I reviewed this briefly and there seems to be a locale assumption in the test.
> 1 tests failed.
> FAILED:  
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate
> Error Message:
> Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 
> 04:35:51 AKST 2008)
> Stack Trace:
> java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != 
> 1226579751000 (Thu Nov 13 04:35:51 AKST 2008)
> at 
> __randomizedtesting.SeedInfo.seed([B72AC4792F31F74B:FD33BC4C549880FE]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at 
> org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59)
> at 
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10243) Fix TestExtractionDateUtil.testParseDate sporadic failures

2018-06-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526604#comment-16526604
 ] 

David Smiley commented on SOLR-10243:
-

After spending the better part of a day debugging into the bowels of 
SimpleDateFormat, I believe I found a JDK bug. I filed it with Oracle's Java Bug 
Parade, which gave me internal review ID 9055824; I'll be contacted again 
once Oracle makes a decision on it. Here's the bug summary 
description:
{quote}If SimpleDateFormat is configured with a pattern that allows for an 
ambiguous timezone (like AKST in English Locale), and if that timezone is an 
alias for the current platform/default timezone (such as America/Metlakatla), 
then the input is parsed using the platform/default timezone. The objective of 
many server Java applications is to be able to parse dates/times insensitive to 
whatever the platform time zone may be, but in this case it seems impossible.

My analysis using a debugger is that SimpleDateFormat line 1683 (of 
subParseZoneString) contains what appears to be an optimization to avoid a 
brute force time zone table lookup. This optimization is triggered when the 
default time zone has a matching zone alias.

This bug was found in a randomized test for Apache Solr's "extraction" contrib 
module: https://issues.apache.org/jira/browse/SOLR-10243
{quote}
I'll attach a demo source file that illustrates the problem.

w.r.t. Solr, I propose switching to the java.time API for this functionality.
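The comment above promises a demo file; below is a minimal, self-contained sketch of the same failure mode. It is an illustration under stated assumptions, not the attached SimpleDateFormatTimeZoneBug.java: the class name TzParseDemo, the input string, and the use of America/Anchorage on the java.time path are all mine. Parsing 04:35:51 AKST (UTC-9) correctly yields epoch 1226583351000; a JDK affected by the bug, with America/Metlakatla (UTC-8 in 2008) as the platform default zone, yields 1226579751000 instead.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;
import java.util.Locale;
import java.util.TimeZone;

public class TzParseDemo {

    // AKST is UTC-9, so this instant is 2008-11-13T13:35:51Z = epoch 1226583351000.
    static final String INPUT = "Thu Nov 13 04:35:51 AKST 2008";

    // SimpleDateFormat resolves the ambiguous "AKST" abbreviation via a zone-table
    // lookup that is short-circuited when the platform default zone aliases the same
    // region; with America/Metlakatla (UTC-8 in 2008) as the default, affected JDKs
    // return 1226579751000 instead of 1226583351000.
    static long simpleDateFormatParse() throws ParseException {
        TimeZone.setDefault(TimeZone.getTimeZone("America/Metlakatla"));
        SimpleDateFormat sdf =
            new SimpleDateFormat("EEE MMM dd HH:mm:ss zzz yyyy", Locale.ENGLISH);
        return sdf.parse(INPUT).getTime();
    }

    // java.time sidesteps the ambiguity: parse the local date-time, treating the
    // abbreviation as a literal, then attach an explicit ZoneId.
    static long javaTimeParse() {
        DateTimeFormatter dtf =
            DateTimeFormatter.ofPattern("EEE MMM dd HH:mm:ss 'AKST' yyyy", Locale.ENGLISH);
        return LocalDateTime.parse(INPUT, dtf)
                .atZone(ZoneId.of("America/Anchorage"))  // explicit, unambiguous zone
                .toInstant().toEpochMilli();
    }

    public static void main(String[] args) throws ParseException {
        System.out.println("SimpleDateFormat: " + simpleDateFormatParse()); // JDK-dependent
        System.out.println("java.time:        " + javaTimeParse());         // 1226583351000
    }
}
```

The java.time path avoids the ambiguous abbreviation entirely by attaching an explicit ZoneId, which is the direction proposed for Solr above.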

> Fix TestExtractionDateUtil.testParseDate sporadic failures
> --
>
> Key: SOLR-10243
> URL: https://issues.apache.org/jira/browse/SOLR-10243
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
>
> Jenkins test failure:
> {{ant test  -Dtestcase=TestExtractionDateUtil -Dtests.method=testParseDate 
> -Dtests.seed=B72AC4792F31F74B -Dtests.slow=true -Dtests.locale=lv 
> -Dtests.timezone=America/Metlakatla -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8}}   It reproduces on 6x for me but not master.
> I reviewed this briefly and there seems to be a locale assumption in the test.
> 1 tests failed.
> FAILED:  
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate
> Error Message:
> Incorrect parsed timestamp: 1226583351000 != 1226579751000 (Thu Nov 13 
> 04:35:51 AKST 2008)
> Stack Trace:
> java.lang.AssertionError: Incorrect parsed timestamp: 1226583351000 != 
> 1226579751000 (Thu Nov 13 04:35:51 AKST 2008)
> at 
> __randomizedtesting.SeedInfo.seed([B72AC4792F31F74B:FD33BC4C549880FE]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.assertTrue(Assert.java:43)
> at 
> org.apache.solr.handler.extraction.TestExtractionDateUtil.assertParsedDate(TestExtractionDateUtil.java:59)
> at 
> org.apache.solr.handler.extraction.TestExtractionDateUtil.testParseDate(TestExtractionDateUtil.java:54)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Custom FieldType not getting called while querying to get the field with custom fieldtype

2018-06-28 Thread Harsh Verma
Hi All,

I am trying to fetch values (docValues to be exact) from external redis during 
query time using a custom written FieldType which I am loading in my solr 
instance. I notice that my custom FieldType is being initialized based on the 
schema when loading the core. I also notice that during query time, my custom 
field type code is called but the results back from solr do not populate the 
custom field even if I explicitly request it. Also, I am asking the results to 
be sorted by the custom field but results are not sorted.

Solr version: 7.3.1
Here is my implementation: 
https://gist.github.com/vermaharsh/042e1cf07070c6d9b3b6cc7eaaf0b49c
Here is my  solrconfig.xml: 
https://gist.github.com/vermaharsh/97d7310b242fd7ba3d8d3a3bda209ac3
Here is my managed-schema: 
https://gist.github.com/vermaharsh/8a89195377802a6bbcdde9215a2fdaf5
Query that I am making: /solr/redis/select?fl=hits&q=*:*&sort=hits%20asc
Response that I get back: 
https://gist.github.com/vermaharsh/2f63282b10320c4c35a9f85016fe30c0

Another query with debug flag: 
/solr/redis/select?debugQuery=on&fl=*,hits&q=*:*&sort=hits%20desc
Response: https://gist.github.com/vermaharsh/f7f74a65a5403ecec9310ceb81cb674c
The sort order should be 10, 9, 8, 6, 5, 4, 3, 2, 1, 7, because the corresponding 
hits values from redis are 1, 900, 800, 600, 500, 400, 300, 200, 100, 7.

Can someone help me identify what I am missing?

Thanks,
Harsh

Copy of discussion on #solr irc channel.

 I am trying to write a custom solr FieldType which fetches value from 
external redis instance. I got Solr to load my custom jars and have defined the 
schema with custom fieldType class. But at query time, I cannot get values in 
the response.
[08:38]  I do not see any errors in the logs as well, so cannot tell if 
something failed
[08:38]  here is the code for my custom FieldType - 
https://gist.github.com/vermaharsh/042e1cf07070c6d9b3b6cc7eaaf0b49c
[08:39]  this is my solrconfig.xml for my custom configset - 
https://gist.github.com/vermaharsh/97d7310b242fd7ba3d8d3a3bda209ac3
[08:42]  I basically placed the necessary jars under contrib/redis-field
[08:43]  and my managed-schema - 
https://gist.github.com/vermaharsh/8a89195377802a6bbcdde9215a2fdaf5
[08:43]  I am using solr version 7.3.1
[08:44]  any idea why I am not getting the value for my custom field 
type back?
[08:45] <@elyograg> harsh: I'm not familiar with the API to that level.  FYI, 
this should go in #solr -- this channel is for development of Solr itself.
[08:46] <@elyograg> the field is not marked as stored.  I wonder if that might 
tell Solr that it shouldn't be returned at all.  (I don't know whether setting 
stored=true might require something more in your code, though)
[08:46]  I am using it as docValue, but I can try stored. Though, as you 
mentioned, not sure if that would need more to be implemented in the code
[08:47]  I will try #solr channel for the question as well
[08:47] <@elyograg> ah, I didn't scroll to the right enough to see that part. :)
[08:48] <@elyograg> I wonder if you might need useDocValuesAsStored="true".
[08:49]  I thought that is the default value
[08:51]  for completeness, this is the query that I am using - 
/solr/redis/select?fl=hits&q=*:*&sort=hits%20asc
[08:52]  and response that I got back - 
https://gist.github.com/vermaharsh/2f63282b10320c4c35a9f85016fe30c0
[08:53] == dataminion [~le...@c-69-181-118-61.hsd1.ca.comcast.net] has joined 
#solr-dev
[08:53] <@elyograg> this one's probably going to need to go to the mailing 
list.  with all the pastes you've mentioned here.
[08:57] == dataminion [~le...@c-69-181-118-61.hsd1.ca.comcast.net] has quit 
[Ping timeout: 264 seconds]
[08:59] <@elyograg> you may be right about that being default.  The code in 
FieldType.java seems to support that.
[09:00] <@elyograg> if (schemaVersion >= 1.6f) properties |= 
USE_DOCVALUES_AS_STORED;
[09:02]  alright, thanks elyograg for looking into it. I will send this 
to mailing list as well.
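For reference, the advice in the transcript can be summed up in a managed-schema sketch. This fragment is hypothetical (the field type class and names are illustrative, not taken from the linked gists), but it spells out the attributes that govern whether a docValues-only field appears in responses:

```xml
<!-- Hypothetical custom field type; docValues must be enabled so the value can
     be sorted on and returned without being stored in the index. -->
<fieldType name="redisLong" class="com.example.solr.RedisFieldType" docValues="true"/>

<!-- useDocValuesAsStored defaults to true for schema version >= 1.6 (see the
     FieldType.java line quoted above); stating it explicitly documents the intent. -->
<field name="hits" type="redisLong" indexed="false" stored="false"
       docValues="true" useDocValuesAsStored="true"/>
```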



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-9.0.4) - Build # 660 - Still Unstable!

2018-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/660/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Error from server at https://127.0.0.1:50617/solr: create the collection time 
out:180s

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:50617/solr: create the collection time out:180s
at __randomizedtesting.SeedInfo.seed([EEE24644FED9725F]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:69)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13773 lines...]
   [junit4] Suite: org.apache.solr.cloud.TestCloudRecovery
   [junit4]   2> 1293735 INFO  
(SUITE-TestCloudRecovery-seed#[EEE24644FED9725F]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J0\temp\solr.cloud.TestCloudRecovery_EEE24644FED9725F-001\init-core-data-001
   [junit4]   2> 1293737 WARN  
(SUITE-TestCloudRecovery-seed#[EEE24644FED9725F]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=21 numCloses=21
   [junit4]   2> 1293737 INFO  
(SUITE-TestCloudRecovery-seed#[EEE24644FED9725F]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 1293738 INFO  

[jira] [Commented] (SOLR-12523) Confusing error reporting if backup attempted on non-shared FS

2018-06-28 Thread Timothy Potter (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526566#comment-16526566
 ] 

Timothy Potter commented on SOLR-12523:
---

When I'm working on a cloud platform like EC2 or Google Cloud, I don't want to 
deal with NFS when I have cloud storage like S3. I haven't had much luck in the 
past using the HDFS directory factory with S3 (I'll check out SOLR-9952), 
so I figured I would just create the backup using Solr and then move the files 
out to cloud storage using tools optimized for S3. In the past, I think using 
an S3 destination for backup worked OK, but RESTORE took forever (all the 
checksumming / sanity checking done per file serially vs. concurrently), and given 
that backup is usually part of a disaster-recovery strategy, I don't want RESTORE 
taking hours to restore my index. If I pull the backup down from cloud storage to 
the local disks using some tool that's optimized for reading in bulk from S3 
(multi-threaded) and then restore from local, it's much faster. So for me, 
separating the concerns of creating the snapshot for each shard (Solr's job) 
and moving big files out to cloud storage (where Solr needs to do much better or 
punt) is what I'm looking for.
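
The workflow described above can be sketched as a shell script. This is a dry-run sketch only: the host, collection, bucket, and paths are illustrative placeholders, and the script prints the commands it would run rather than executing them against a live cluster.

```shell
# Dry-run sketch of the backup-then-sync workflow described above.
# All names (host, collection, bucket, paths) are illustrative placeholders.
SOLR_URL="http://localhost:8983/solr"
COLLECTION="foo_signals"
BACKUP_NAME="sigs"
BACKUP_DIR="/vol1/backups"
S3_BUCKET="s3://my-backup-bucket"

# 1. Let Solr snapshot each shard to a local/shared directory (Solr's job):
BACKUP_CMD="curl '${SOLR_URL}/admin/collections?action=BACKUP&name=${BACKUP_NAME}&collection=${COLLECTION}&location=${BACKUP_DIR}'"
echo "$BACKUP_CMD"

# 2. Move the snapshot to cloud storage with an S3-optimized, multi-threaded tool:
echo "aws s3 sync ${BACKUP_DIR}/${BACKUP_NAME} ${S3_BUCKET}/${BACKUP_NAME}"

# 3. For disaster recovery, pull the files back down in bulk first...
echo "aws s3 sync ${S3_BUCKET}/${BACKUP_NAME} ${BACKUP_DIR}/${BACKUP_NAME}"

# 4. ...then restore from local disk, avoiding slow per-file checks over the network:
echo "curl '${SOLR_URL}/admin/collections?action=RESTORE&name=${BACKUP_NAME}&collection=${COLLECTION}_restored&location=${BACKUP_DIR}'"
```

The point of the split is that steps 2-3 use a bulk-transfer tool where Solr's serial per-file handling is the bottleneck.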

> Confusing error reporting if backup attempted on non-shared FS
> --
>
> Key: SOLR-12523
> URL: https://issues.apache.org/jira/browse/SOLR-12523
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.3.1
>Reporter: Timothy Potter
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12523.patch
>
>
> So I have a large collection with 4 shards across 2 nodes. When I try to back 
> it up with:
> {code}
> curl 
> "http://localhost:8984/solr/admin/collections?action=BACKUP=sigs=foo_signals=5=backups;
> {code}
> I either get:
> {code}
> "5170256188349065":{
>     "responseHeader":{
>       "status":0,
>       "QTime":0},
>     "STATUS":"failed",
>     "Response":"Failed to backup core=foo_signals_shard1_replica_n2 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/sigs"},
>   "5170256187999044":{
>     "responseHeader":{
>       "status":0,
>       "QTime":0},
>     "STATUS":"failed",
>     "Response":"Failed to backup core=foo_signals_shard3_replica_n10 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/sigs"},
> {code}
> or if I create the directory, then I get:
> {code}
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":2},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  The backup directory already exists: file:///vol1/cloud84/backups/sigs/",
>   "exception":{
>     "msg":"The backup directory already exists: 
> file:///vol1/cloud84/backups/sigs/",
>     "rspCode":400},
>   "status":{
>     "state":"failed",
>     "msg":"found [2] in failed tasks"}}
> {code}
> I'm thinking this has to do with having 2 cores from the same collection on 
> the same node but I can't get a collection with 1 shard on each node to work 
> either:
> {code}
> "ec2-52-90-245-38.compute-1.amazonaws.com:8984_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://ec2-52-90-245-38.compute-1.amazonaws.com:8984/solr: 
> Failed to backup core=system_jobs_history_shard2_replica_n6 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/ugh1"}
> {code}
> What's weird is that replica (system_jobs_history_shard2_replica_n6) is not 
> even on the ec2-52-90-245-38.compute-1.amazonaws.com node! It lives on a 
> different node.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12528) Let PULL replicas replicate from replicas other than the leader

2018-06-28 Thread Tomás Fernández Löbbe (JIRA)
Tomás Fernández Löbbe created SOLR-12528:


 Summary: Let PULL replicas replicate from replicas other than the 
leader
 Key: SOLR-12528
 URL: https://issues.apache.org/jira/browse/SOLR-12528
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Tomás Fernández Löbbe


Right now, PULL replicas can only replicate from the leader. This is good 
because it gives the PULL replicas the latest index available, but it can also 
create a bottleneck at the leader. We should allow users to configure PULL 
replicas to replicate from other replicas (other TLOG replicas, for example, or 
maybe even other PULL replicas).

This should be configurable (and probably not the default).
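
For context, replica types are fixed at collection-creation time. A sketch of creating a collection with TLOG and PULL replicas follows; the host, collection name, and counts are illustrative placeholders, not part of the proposal above.

```shell
# Dry-run sketch: create a collection whose shards mix TLOG and PULL replicas.
# Host, collection name, and replica counts are illustrative placeholders.
SOLR_URL="http://localhost:8983/solr"
CREATE_CMD="curl '${SOLR_URL}/admin/collections?action=CREATE&name=catalog&numShards=2&tlogReplicas=1&pullReplicas=2'"
echo "$CREATE_CMD"
# Today every PULL replica polls the shard leader for index updates; the
# proposal above would let it poll another TLOG (or even PULL) replica instead.
```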






[jira] [Commented] (SOLR-12527) turn core/test TestCloudSearcherWarming.Config into test-framework/ConfigRequest

2018-06-28 Thread Christine Poerschke (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526529#comment-16526529
 ] 

Christine Poerschke commented on SOLR-12527:


Attached proposed patch.

In future we could probably also merge some existing configsets: e.g. the 
[cdcr-cluster1|https://github.com/apache/lucene-solr/tree/releases/lucene-solr/7.4.0/solr/core/src/test-files/solr/configsets/cdcr-cluster1]
 and 
[cdcr-cluster2|https://github.com/apache/lucene-solr/tree/releases/lucene-solr/7.4.0/solr/core/src/test-files/solr/configsets/cdcr-cluster2]
 configsets could become a single {{cdcr-cluster}} configset (containing no 
{{/cdcr}} handler), and the 
[CdcrBidirectionalTest|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/core/src/test/org/apache/solr/cloud/cdcr/CdcrBidirectionalTest.java]
 test code could then add the correct {{/cdcr}} handler for each 
of its two clusters via the Config API.
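
The per-cluster Config API call suggested above might look like the sketch below. The {{/cdcr}} path comes from the comment; the host, core name, and exact payload fields are illustrative assumptions, not a tested configuration.

```shell
# Dry-run sketch: register a request handler via the Config API at test time.
# Host and core name are illustrative; the /cdcr path is from the comment above.
SOLR_URL="http://localhost:8983/solr/cdcr-cluster"
PAYLOAD='{"add-requesthandler": {"name": "/cdcr", "class": "solr.CdcrRequestHandler"}}'
echo "curl -X POST -H 'Content-Type: application/json' -d '${PAYLOAD}' '${SOLR_URL}/config'"
```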

> turn core/test TestCloudSearcherWarming.Config into 
> test-framework/ConfigRequest
> 
>
> Key: SOLR-12527
> URL: https://issues.apache.org/jira/browse/SOLR-12527
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12527.patch
>
>
> Tests can use this class e.g. to add a custom component or handler to an 
> otherwise generic configset.
> [CustomHighlightComponentTest.java#L138-L171|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/core/src/test/org/apache/solr/handler/component/CustomHighlightComponentTest.java#L138-L171]
>  illustrates the approach. 
> https://lucene.apache.org/solr/guide/7_4/config-api.html is the Solr 
> Reference Guide's Config API section.






[jira] [Updated] (SOLR-12527) turn core/test TestCloudSearcherWarming.Config into test-framework/ConfigRequest

2018-06-28 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12527?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-12527:
---
Attachment: SOLR-12527.patch

> turn core/test TestCloudSearcherWarming.Config into 
> test-framework/ConfigRequest
> 
>
> Key: SOLR-12527
> URL: https://issues.apache.org/jira/browse/SOLR-12527
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12527.patch
>
>
> Tests can use this class e.g. to add a custom component or handler to an 
> otherwise generic configset.
> [CustomHighlightComponentTest.java#L138-L171|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/core/src/test/org/apache/solr/handler/component/CustomHighlightComponentTest.java#L138-L171]
>  illustrates the approach. 
> https://lucene.apache.org/solr/guide/7_4/config-api.html is the Solr 
> Reference Guide's Config API section.






[jira] [Created] (SOLR-12527) turn core/test TestCloudSearcherWarming.Config into test-framework/ConfigRequest

2018-06-28 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-12527:
--

 Summary: turn core/test TestCloudSearcherWarming.Config into 
test-framework/ConfigRequest
 Key: SOLR-12527
 URL: https://issues.apache.org/jira/browse/SOLR-12527
 Project: Solr
  Issue Type: Test
Reporter: Christine Poerschke
Assignee: Christine Poerschke


Tests can use this class e.g. to add a custom component or handler to an 
otherwise generic configset.

[CustomHighlightComponentTest.java#L138-L171|https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.4.0/solr/core/src/test/org/apache/solr/handler/component/CustomHighlightComponentTest.java#L138-L171]
 illustrates the approach. 
https://lucene.apache.org/solr/guide/7_4/config-api.html is the Solr Reference 
Guide's Config API section.






[jira] [Commented] (LUCENE-7745) Explore GPU acceleration for spatial search

2018-06-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526510#comment-16526510
 ] 

David Smiley commented on LUCENE-7745:
--

np.  Oh, this caught me by surprise too!  I thought this was about BooleanScorer 
or postings or something, and then lo and behold it's spatial -- and then I 
thought this is so non-obvious from the issue title.  So I thought I'd do a 
little JIRA gardening.

> Explore GPU acceleration for spatial search
> ---
>
> Key: LUCENE-7745
> URL: https://issues.apache.org/jira/browse/LUCENE-7745
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
>  Labels: gsoc2017, mentor
> Attachments: gpu-benchmarks.png
>
>
> There are parts of Lucene that can potentially be speeded up if computations 
> were to be offloaded from CPU to the GPU(s). With commodity GPUs having as 
> high as 12GB of high bandwidth RAM, we might be able to leverage GPUs to 
> speed parts of Lucene (indexing, search).
> First that comes to mind is spatial filtering, which is traditionally known 
> to be a good candidate for GPU based speedup (esp. when complex polygons are 
> involved). In the past, Mike McCandless has mentioned that "both initial 
> indexing and merging are CPU/IO intensive, but they are very amenable to 
> soaking up the hardware's concurrency."
> I'm opening this issue as an exploratory task, suitable for a GSoC project. I 
> volunteer to mentor any GSoC student willing to work on this this summer.






[jira] [Commented] (LUCENE-7745) Explore GPU acceleration for spatial search

2018-06-28 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526506#comment-16526506
 ] 

Adrien Grand commented on LUCENE-7745:
--

Not sure why I confused names, I meant Ishan indeed. Sorry for that. I'll let 
Ishan decide how he wants to manage this issue, I'm personally fine either way, 
I'm mostly following. :) It just caught me by surprise since I was under the 
impression that we were still exploring which areas might benefit from GPU 
acceleration.

> Explore GPU acceleration for spatial search
> ---
>
> Key: LUCENE-7745
> URL: https://issues.apache.org/jira/browse/LUCENE-7745
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
>  Labels: gsoc2017, mentor
> Attachments: gpu-benchmarks.png
>
>
> There are parts of Lucene that can potentially be speeded up if computations 
> were to be offloaded from CPU to the GPU(s). With commodity GPUs having as 
> high as 12GB of high bandwidth RAM, we might be able to leverage GPUs to 
> speed parts of Lucene (indexing, search).
> First that comes to mind is spatial filtering, which is traditionally known 
> to be a good candidate for GPU based speedup (esp. when complex polygons are 
> involved). In the past, Mike McCandless has mentioned that "both initial 
> indexing and merging are CPU/IO intensive, but they are very amenable to 
> soaking up the hardware's concurrency."
> I'm opening this issue as an exploratory task, suitable for a GSoC project. I 
> volunteer to mentor any GSoC student willing to work on this this summer.






[jira] [Commented] (LUCENE-7745) Explore GPU acceleration for spatial search

2018-06-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526486#comment-16526486
 ] 

David Smiley commented on LUCENE-7745:
--

Mark who?  You must mean Ishan?  I think that if GPUs are used to accelerate 
different things, then they would get separate issues and not be lumped under 
one issue.  Does that sound reasonable?  Granted, this started off 
as a bit of an umbrella ticket, and perhaps the particular proposal Ishan is 
presenting in his most recent comment ought to go in a new issue specific to 
spatial. Accelerating Haversine calculations sounds way different to me than 
BooleanScorer stuff; no?

> Explore GPU acceleration for spatial search
> ---
>
> Key: LUCENE-7745
> URL: https://issues.apache.org/jira/browse/LUCENE-7745
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
>  Labels: gsoc2017, mentor
> Attachments: gpu-benchmarks.png
>
>
> There are parts of Lucene that can potentially be speeded up if computations 
> were to be offloaded from CPU to the GPU(s). With commodity GPUs having as 
> high as 12GB of high bandwidth RAM, we might be able to leverage GPUs to 
> speed parts of Lucene (indexing, search).
> First that comes to mind is spatial filtering, which is traditionally known 
> to be a good candidate for GPU based speedup (esp. when complex polygons are 
> involved). In the past, Mike McCandless has mentioned that "both initial 
> indexing and merging are CPU/IO intensive, but they are very amenable to 
> soaking up the hardware's concurrency."
> I'm opening this issue as an exploratory task, suitable for a GSoC project. I 
> volunteer to mentor any GSoC student willing to work on this this summer.






[GitHub] lucene-solr issue #411: Debugging PriorityQueue.java

2018-06-28 Thread mikemccand
Github user mikemccand commented on the issue:

https://github.com/apache/lucene-solr/pull/411
  
OK I merged this change into Lucene master & 7.x, and added a simple unit 
test.

Thanks @rsaavedraf!


---




[jira] [Commented] (LUCENE-7745) Explore GPU acceleration for spatial search

2018-06-28 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526467#comment-16526467
 ] 

Adrien Grand commented on LUCENE-7745:
--

David, I'm not sure this was meant to be specific to lucene/spatial, Mark only 
mentioned it as a way to conduct an initial benchmark? The main thing that we 
identified as being a potential candidate for integration with Cuda is actually 
BooleanScorer (BS1, the one that does scoring in bulk) based on previous 
comments?

> Explore GPU acceleration for spatial search
> ---
>
> Key: LUCENE-7745
> URL: https://issues.apache.org/jira/browse/LUCENE-7745
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
>  Labels: gsoc2017, mentor
> Attachments: gpu-benchmarks.png
>
>
> There are parts of Lucene that can potentially be speeded up if computations 
> were to be offloaded from CPU to the GPU(s). With commodity GPUs having as 
> high as 12GB of high bandwidth RAM, we might be able to leverage GPUs to 
> speed parts of Lucene (indexing, search).
> First that comes to mind is spatial filtering, which is traditionally known 
> to be a good candidate for GPU based speedup (esp. when complex polygons are 
> involved). In the past, Mike McCandless has mentioned that "both initial 
> indexing and merging are CPU/IO intensive, but they are very amenable to 
> soaking up the hardware's concurrency."
> I'm opening this issue as an exploratory task, suitable for a GSoC project. I 
> volunteer to mentor any GSoC student willing to work on this this summer.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22339 - Still Unstable!

2018-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22339/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.search.TestRecovery.testExistOldBufferLog

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([76BEA893F049F230:28EEB5C67E8662B9]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.junit.Assert.assertFalse(Assert.java:79)
at 
org.apache.solr.search.TestRecovery.testExistOldBufferLog(TestRecovery.java:1071)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 15074 lines...]
   [junit4] Suite: org.apache.solr.search.TestRecovery
   [junit4]   2> 1786504 INFO  
(SUITE-TestRecovery-seed#[76BEA893F049F230]-worker) [] o.a.s.SolrTestCaseJ4 
SecureRandom sanity checks: test.solr.allowed.securerandom=null & 

[jira] [Updated] (LUCENE-7745) Explore GPU acceleration for spatial search

2018-06-28 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7745:
-
Summary: Explore GPU acceleration for spatial search  (was: Explore GPU 
acceleration)

> Explore GPU acceleration for spatial search
> ---
>
> Key: LUCENE-7745
> URL: https://issues.apache.org/jira/browse/LUCENE-7745
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
>  Labels: gsoc2017, mentor
> Attachments: gpu-benchmarks.png
>
>
> There are parts of Lucene that can potentially be speeded up if computations 
> were to be offloaded from CPU to the GPU(s). With commodity GPUs having as 
> high as 12GB of high bandwidth RAM, we might be able to leverage GPUs to 
> speed parts of Lucene (indexing, search).
> First that comes to mind is spatial filtering, which is traditionally known 
> to be a good candidate for GPU based speedup (esp. when complex polygons are 
> involved). In the past, Mike McCandless has mentioned that "both initial 
> indexing and merging are CPU/IO intensive, but they are very amenable to 
> soaking up the hardware's concurrency."
> I'm opening this issue as an exploratory task, suitable for a GSoC project. I 
> volunteer to mentor any GSoC student willing to work on this this summer.






[jira] [Updated] (LUCENE-7745) Explore GPU acceleration

2018-06-28 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-7745:
-
Component/s: modules/spatial-extras

> Explore GPU acceleration
> 
>
> Key: LUCENE-7745
> URL: https://issues.apache.org/jira/browse/LUCENE-7745
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial-extras
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
>Priority: Major
>  Labels: gsoc2017, mentor
> Attachments: gpu-benchmarks.png
>
>
> There are parts of Lucene that can potentially be speeded up if computations 
> were to be offloaded from CPU to the GPU(s). With commodity GPUs having as 
> high as 12GB of high bandwidth RAM, we might be able to leverage GPUs to 
> speed parts of Lucene (indexing, search).
> First that comes to mind is spatial filtering, which is traditionally known 
> to be a good candidate for GPU based speedup (esp. when complex polygons are 
> involved). In the past, Mike McCandless has mentioned that "both initial 
> indexing and merging are CPU/IO intensive, but they are very amenable to 
> soaking up the hardware's concurrency."
> I'm opening this issue as an exploratory task, suitable for a GSoC project. I 
> volunteer to mentor any GSoC student willing to work on this this summer.






[jira] [Commented] (SOLR-12362) JSON loader should save the relationship of children

2018-06-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526431#comment-16526431
 ] 

David Smiley commented on SOLR-12362:
-

bq. Is the existence of the pseudo field for children recorded in the schema?

Not yet but I have proposed this within this umbrella issue.

bq. What if "data driven mode" is in use and someone tries to index a document 
with "child" : "foo" after that key has been used to index sub docs. Then the 
auto-add-fields-to-schema logic would record "child" as a string field in the 
schema and the next attempt to use it as a sub-doc key would fail.

How does that situation make sense?  Implementation details aside, that 
scenario seems flawed on the part of the user and/or their data.
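
To make the scenario concrete, here is a sketch of the two conflicting documents; the field names are invented for illustration and are not from the issue itself.

```shell
# Illustrative documents only; field names are invented for this example.
# Document 1: "child" carries a labeled sub-document (the nested form that the
# JSON loader would record, preserving the key name per the issue below).
DOC_NESTED='{"id": "1", "child": {"id": "1.1", "title_s": "a labeled child doc"}}'
# Document 2: the same key later used as a plain string value -- in data-driven
# mode this collides with however "child" was first recorded in the schema.
DOC_FLAT='{"id": "2", "child": "foo"}'
echo "$DOC_NESTED"
echo "$DOC_FLAT"
```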


> JSON loader should save the relationship of children
> 
>
> Key: SOLR-12362
> URL: https://issues.apache.org/jira/browse/SOLR-12362
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Once _childDocuments in SolrInputDocument is changed to a Map, JsonLoader 
> should add the child document to the map while saving its key name, to 
> maintain the child's relationship to its parent.






Re: Solr search implement in magento 1

2018-06-28 Thread Walter Underwood
This is Solr used inside the Magento platform. I recommend asking in a Magento 
group.

https://community.magento.com/

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)

> On Jun 28, 2018, at 7:40 AM, David Smiley wrote:
> 
> Hello,
> 
> This is the "dev" list for Lucene/Solr which is for the internals of 
> Lucene/Solr, not how to use Lucene/Solr.  Please join and post to the "Solr 
> User" list: http://lucene.apache.org/solr/community.html#mailing-lists-irc 
> 
> 
> BTW when you re-ask, consider trying to improve the wording, maybe get input 
> from a colleague who speaks English better.  I don't understand your inquiry.
> 
> ~ David
> 
> On Thu, Jun 28, 2018 at 10:23 AM Amit Hazra wrote:
> 
> Hi,
> 
> I have implemented Solr search in Magento 1.8.
> 
> The client wants certain custom product attributes to be searchable by the 
> attribute value "Parent", so per the requirements I merged the query with 
> custom attribute value = "Parent" and got the proper results.
> 
> Now all the products whose attribute value is "Parent" are returned.
> 
> Note: every day the search results reset to the default (all results), and 
> when I re-save all of those products from the Magento admin, the proper 
> parent-based results come back. I have to do this every day.
> 
> So please tell me why the search data resets every day, and how to solve it.
> 
> 
> 
> -- 
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book: 
> http://www.solrenterprisesearchserver.com
> 


[jira] [Commented] (LUCENE-8370) Reproducing TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() failures

2018-06-28 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526427#comment-16526427
 ] 

Erick Erickson commented on LUCENE-8370:


OK, I'll get this done. 

Gah. The _comments_ say it but not the Javadocs. I'll fix. Hmmm, I need to 
check the ref guide for that too.

> Reproducing 
> TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()
>  failures
> 
>
> Key: LUCENE-8370
> URL: https://issues.apache.org/jira/browse/LUCENE-8370
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, general/test
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Major
>
> Policeman Jenkins found a reproducing seed for a 
> {{TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()}}
>  failure [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22320/]; 
> {{git bisect}} blames commit {{2519025}} on LUCENE-7976:
> {noformat}
> Checking out Revision 8c714348aeea51df19e7603905f85995bcf0371c 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLucene70DocValuesFormat 
> -Dtests.method=testSortedSetVariableLengthBigVsStoredFields 
> -Dtests.seed=63A61B46A6934B1A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sw-TZ -Dtests.timezone=Pacific/Pitcairn -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 23.3s J2 | 
> TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields <<<
>[junit4]> Throwable #1: java.lang.AssertionError: limit=4 actual=5
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([63A61B46A6934B1A:6BE93FA35E02851]:0)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.doRandomForceMerge(RandomIndexWriter.java:372)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:386)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332)
>[junit4]>  at 
> org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestSortedSetVsStoredFields(BaseDocValuesFormatTestCase.java:2155)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields(TestLucene70DocValuesFormat.java:93)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
> docValues:{}, maxPointsInLeafNode=693, maxMBSortInHeap=5.078503794479895, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@20a604e6),
>  locale=sw-TZ, timezone=Pacific/Pitcairn
>[junit4]   2> NOTE: Linux 4.13.0-41-generic amd64/Oracle Corporation 9.0.4 
> (64-bit)/cpus=8,threads=1,free=352300304,total=518979584
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9038) Support snapshot management functionality for a solr collection

2018-06-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526425#comment-16526425
 ] 

David Smiley commented on SOLR-9038:


Sadly, the snapshotscli.sh tool and the "indexBackup" parameter to the backup 
collection command aren't documented in the ref guide.  The collection-level 
CREATESNAPSHOT, DELETESNAPSHOT, and LISTSNAPSHOTS commands are not documented 
either -- though they are documented at the core level.

If I'm reading the functionality right in hindsight (and it's been years), 
there seem to be some functionality gaps.  We can create collection snapshots, 
but the only thing you can really _do_ with them is back them up (to a shared 
file system).  You can't *restore* them in-place, AFAICT.  And the burden is on 
the user to delete old snapshots, otherwise I think they'd hang around forever.
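The collection-level snapshot commands mentioned above follow the usual Collections API URL shape. A minimal sketch of building those request URLs (the base URL, collection name, and snapshot name are illustrative; the {{commitName}} parameter mirrors the core-level API and should be verified against the ref guide):

```python
from urllib.parse import urlencode

def snapshot_url(base, action, collection, commit_name=None):
    """Build a Collections API URL for CREATESNAPSHOT/LISTSNAPSHOTS/DELETESNAPSHOT."""
    params = {"action": action, "collection": collection}
    if commit_name is not None:
        params["commitName"] = commit_name
    return f"{base}/admin/collections?{urlencode(params)}"

# e.g. snapshot a collection before an experimental change, then inspect it
create = snapshot_url("http://localhost:8983/solr", "CREATESNAPSHOT",
                      "techproducts", "pre-experiment")
listing = snapshot_url("http://localhost:8983/solr", "LISTSNAPSHOTS",
                       "techproducts")
print(create)
print(listing)
```

Deleting the snapshot afterwards (DELETESNAPSHOT with the same {{commitName}}) is what releases the pinned commit point; as noted above, nothing does this for the user automatically.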

> Support snapshot management functionality for a solr collection
> ---
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>Assignee: David Smiley
>Priority: Major
> Fix For: 6.2
>
>
> Currently work is under way to implement a backup/restore API for SolrCloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for a Solr collection. By "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340, which implements core-level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. E.g. in a Hadoop environment, the 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is typically 
> used to copy files from one location to another. This tool provides various 
> options to configure the degree of parallelism and bandwidth usage, as well 
> as integration with different types and versions of file systems (e.g. AWS 
> S3, Azure Blob store, etc.)
> - This separation of concerns also helps Solr focus on its key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to tools built for that purpose.
> - Users can decide if/when to copy the data files as opposed to creating a 
> snapshot. E.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, a schema change, 
> etc.). If the experiment is successful, they can delete the snapshot (without 
> having to copy the files). If the experiment fails, they can copy the 
> files associated with the snapshot and restore.
> Note that the Apache Blur project also provides a similar feature: 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]






[jira] [Commented] (SOLR-12362) JSON loader should save the relationship of children

2018-06-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526410#comment-16526410
 ] 

Jan Høydahl commented on SOLR-12362:


Thanks for showing this Jira issue.

Is the existence of the pseudo field for children recorded in the schema?

What if "data driven mode" is in use and someone tries to index a document with 
{{"child" : "foo"}} after that key has been used to index sub-docs? Then the 
auto-add-fields-to-schema logic would record "child" as a string field in the 
schema, and the next attempt to use it as a sub-doc key would fail.

So I still think that perhaps we should be explicit and add a (pseudo) 
NestedFieldType to the schema. That nested field type would either be 
implicitly defined or just a no-op class that does nothing -- simply a way to 
reserve the field name for nested use.
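To make the conflict concrete, here is a sketch (the field name and document contents are made up) of the two document shapes that would collide in data-driven mode:

```python
# Two documents sent to the same data-driven collection: the key "child" is
# first inferred as a string field, then reused as a nested-document key.
doc_string = {"id": "1", "child": "foo"}
doc_nested = {"id": "2", "child": [{"id": "2.1", "title": "a sub-document"}]}

# Once the schema has recorded "child" as a string field from doc_string,
# indexing doc_nested's sub-documents under the same key would fail.
print(type(doc_string["child"]).__name__, type(doc_nested["child"]).__name__)
# → str list
```

A reserved (pseudo) field type, as suggested above, would prevent the first document from ever claiming the key as a plain string field.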

> JSON loader should save the relationship of children
> 
>
> Key: SOLR-12362
> URL: https://issues.apache.org/jira/browse/SOLR-12362
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Once _childDocuments in SolrInputDocument is changed to a Map, JsonLoader 
> should add the child document to the map while saving its key name, to 
> maintain the child's relationship to its parent.






[jira] [Created] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-06-28 Thread Michal Hlavac (JIRA)
Michal Hlavac created SOLR-12526:


 Summary: Metrics History doesn't work with AuthenticationPlugin
 Key: SOLR-12526
 URL: https://issues.apache.org/jira/browse/SOLR-12526
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Authentication, metrics
Affects Versions: 7.4
Reporter: Michal Hlavac


Since Solr 7.4.0 there is a Metrics History feature which uses a SolrJ client 
to make HTTP requests to Solr, but it doesn't work with an AuthenticationPlugin. 
Since it is enabled by default, there are errors in the log every time 
{{MetricsHistoryHandler}} tries to collect data.
{code:java}
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://172.20.0.5:8983/solr: Expected mime type 
application/octet-stream but got text/html. 


Error 401 require authentication

HTTP ERROR 401
Problem accessing /solr/admin/metrics. Reason:
    require authentication



   at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
 ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
jpountz - 2018-06-18 16:55:14]
   at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
 ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
jpountz - 2018-06-18 16:55:14]
   at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
 ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
jpountz - 2018-06-18 16:55:14]
   at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) 
~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz 
- 2018-06-18 16:55:14]
   at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:292)
 ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
jpountz - 2018-06-18 16:55:1
4]
   at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchMetrics(SolrClientNodeStateProvider.java:150)
 [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz 
- 2018-06-18 16:55:14]
   at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:199)
 [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz 
- 2018-06-18
16:55:14]
   at 
org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76)
 [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz 
- 2018-06-18 16:55:14]
   at 
org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:111)
 [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz 
- 2018-06-18 16:55:14]
   at 
org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:495)
 [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz 
- 2018-06-18 16:55:13]
   at 
org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:368)
 [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz 
- 2018-06-18 16:55:13]
   at 
org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:230)
 [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - jpountz 
- 2018-06-18 16:55:13]
   at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514) [?:?]
   at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) [?:?]
   at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
 [?:?]
   at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135) 
[?:?]
   at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) 
[?:?]
   at java.lang.Thread.run(Thread.java:844) [?:?]

{code}
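For context, authentication is typically enabled via a security.json along the lines of the sketch below (the credentials value is a placeholder, not a real salted hash; see the ref guide for the actual format). With {{blockUnknown}} set, the internal metrics requests shown above are rejected with 401 because they carry no credentials:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "blockUnknown": true,
    "credentials": { "solr": "PLACEHOLDER-SHA256-HASH PLACEHOLDER-SALT" }
  }
}
```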






[jira] [Closed] (SOLR-9038) Support snapshot management functionality for a solr collection

2018-06-28 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley closed SOLR-9038.
--

> Support snapshot management functionality for a solr collection
> ---
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>Assignee: David Smiley
>Priority: Major
> Fix For: 6.2
>
>






[jira] [Resolved] (SOLR-9038) Support snapshot management functionality for a solr collection

2018-06-28 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-9038.

   Resolution: Fixed
Fix Version/s: 6.2

It appears this mostly got done in 6.2.0, but the CLI tool landed in 6.4.0.

> Support snapshot management functionality for a solr collection
> ---
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>Assignee: David Smiley
>Priority: Major
> Fix For: 6.2
>
>






[JENKINS] Lucene-Solr-repro - Build # 899 - Still Unstable

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/899/

[...truncated 48 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/654/consoleText

[repro] Revision: 19e7466a79ae994c27bec449e980f646d77fcf99

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=test -Dtests.seed=6F0BB7BA17327D2A -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=es-PR -Dtests.timezone=Asia/Kuala_Lumpur 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
ab666ff9cfed0d816c58bf64ebf295f7f38f5cd1
[repro] git fetch
[repro] git checkout 19e7466a79ae994c27bec449e980f646d77fcf99

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   MoveReplicaHDFSTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.MoveReplicaHDFSTest" -Dtests.showOutput=onerror  
-Dtests.seed=6F0BB7BA17327D2A -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=es-PR -Dtests.timezone=Asia/Kuala_Lumpur -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 4559 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro] git checkout ab666ff9cfed0d816c58bf64ebf295f7f38f5cd1

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Updated] (SOLR-12458) ADLS support for SOLR

2018-06-28 Thread Mike Wingert (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Wingert updated SOLR-12458:

Attachment: HBASE-15320.master.13.patch

> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: HBASE-15320.master.13.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for Solr.
> ADLS is an HDFS-like API available in Microsoft Azure.






[jira] [Commented] (SOLR-12523) Confusing error reporting if backup attempted on non-shared FS

2018-06-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526389#comment-16526389
 ] 

David Smiley commented on SOLR-12523:
-

bq. why does it need a shared filesystem?

Can you explain how you thought or hoped this mechanism worked?  Perhaps you 
thought of it more as an in-place snapshot mechanism -- SOLR-9038.  This 
feature is conceived of as a way to back up everything to one place, and that 
one place needs to be accessible to all nodes in the cluster -- hence the 
shared file system requirement.  It could be interesting if just one node had 
access to the backup destination and you could somehow indicate which node 
that is.

Also, FYI, Jeff Wartes / Whitepages.com has some cool utilities here: 
https://github.com/whitepages/solrcloud_manager#cluster-commands -- 
"backupindex", "restoreindex".

Thanks, Jan, for clarifying that this issue is about the need for better error 
messages / documentation.

> Confusing error reporting if backup attempted on non-shared FS
> --
>
> Key: SOLR-12523
> URL: https://issues.apache.org/jira/browse/SOLR-12523
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.3.1
>Reporter: Timothy Potter
>Assignee: Jan Høydahl
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12523.patch
>
>
> So I have a large collection with 4 shards across 2 nodes. When I try to back 
> it up with:
> {code}
> curl 
> "http://localhost:8984/solr/admin/collections?action=BACKUP=sigs=foo_signals=5=backups;
> {code}
> I either get:
> {code}
> "5170256188349065":{
>     "responseHeader":{
>       "status":0,
>       "QTime":0},
>     "STATUS":"failed",
>     "Response":"Failed to backup core=foo_signals_shard1_replica_n2 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/sigs"},
>   "5170256187999044":{
>     "responseHeader":{
>       "status":0,
>       "QTime":0},
>     "STATUS":"failed",
>     "Response":"Failed to backup core=foo_signals_shard3_replica_n10 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/sigs"},
> {code}
> or if I create the directory, then I get:
> {code}
> {
>   "responseHeader":{
>     "status":0,
>     "QTime":2},
>   "Operation backup caused 
> exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException:
>  The backup directory already exists: file:///vol1/cloud84/backups/sigs/",
>   "exception":{
>     "msg":"The backup directory already exists: 
> file:///vol1/cloud84/backups/sigs/",
>     "rspCode":400},
>   "status":{
>     "state":"failed",
>     "msg":"found [2] in failed tasks"}}
> {code}
> I'm thinking this has to do with having 2 cores from the same collection on 
> the same node but I can't get a collection with 1 shard on each node to work 
> either:
> {code}
> "ec2-52-90-245-38.compute-1.amazonaws.com:8984_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
>  from server at http://ec2-52-90-245-38.compute-1.amazonaws.com:8984/solr: 
> Failed to backup core=system_jobs_history_shard2_replica_n6 because 
> org.apache.solr.common.SolrException: Directory to contain snapshots doesn't 
> exist: file:///vol1/cloud84/backups/ugh1"}
> {code}
> What's weird is that replica (system_jobs_history_shard2_replica_n6) is not 
> even on the ec2-52-90-245-38.compute-1.amazonaws.com node! It lives on a 
> different node.






[JENKINS] Lucene-Solr-repro - Build # 896 - Unstable

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/896/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-7.x/15/consoleText

[repro] Revision: c0853200f20e3dd874e025418f6919fd913c5523

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=MetricsHistoryHandlerTest 
-Dtests.seed=B10EF885029351A4 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=de -Dtests.timezone=Asia/Pontianak -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=HdfsCollectionsAPIDistributedZkTest 
-Dtests.method=testCollectionReload -Dtests.seed=B10EF885029351A4 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=nn-NO -Dtests.timezone=Europe/Moscow -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest -Dtests.method=test 
-Dtests.seed=B10EF885029351A4 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-MT -Dtests.timezone=Pacific/Efate -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=testSplitWithChaosMonkey -Dtests.seed=B10EF885029351A4 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-MT -Dtests.timezone=Pacific/Efate -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=testSplitMixedReplicaTypes -Dtests.seed=B10EF885029351A4 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-MT -Dtests.timezone=Pacific/Efate -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=B10EF885029351A4 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-VE -Dtests.timezone=Asia/Jakarta -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=B10EF885029351A4 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-VE -Dtests.timezone=Asia/Jakarta -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestStressCloudBlindAtomicUpdates 
-Dtests.method=test_dv_stored_idx -Dtests.seed=B10EF885029351A4 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-CO -Dtests.timezone=Asia/Shanghai -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testNodeLost -Dtests.seed=B10EF885029351A4 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-EC -Dtests.timezone=America/Halifax -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestLargeCluster 
-Dtests.method=testAddNode -Dtests.seed=B10EF885029351A4 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-EC -Dtests.timezone=America/Halifax -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  

Re: Solr search implement in magento 1

2018-06-28 Thread David Smiley
Hello,

This is the "dev" list for Lucene/Solr which is for the internals of
Lucene/Solr, not how to use Lucene/Solr.  Please join and post to the "Solr
User" list: http://lucene.apache.org/solr/community.html#mailing-lists-irc

BTW when you re-ask, consider trying to improve the wording, maybe get
input from a colleague who speaks English better.  I don't understand your
inquiry.

~ David

On Thu, Jun 28, 2018 at 10:23 AM Amit Hazra  wrote:

> Hi,
>
> I have implement a solr search in magento 1.8.
>
> But client Wants that some custom product attribute search as per
> attribute value ="Parent", as per requirements i have merge query with
> custom atrribute value = "Parent" and get the proper result.
>
> And now all the product those have attribute value select Parent are
> coming.
>
> Not: But everyday search result rebase to default all result, and when i
> have save  all those product again from magento admin, it will come proper
> parent base result.
> And i have to do it regularly.
>
> So please tell why it will rebase search data everyday. And how to solve?.
>
>
>
> --
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (LUCENE-7314) Graduate LatLonPoint to core

2018-06-28 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526373#comment-16526373
 ] 

Michael McCandless commented on LUCENE-7314:


+1 to the patch after moving {{LatLonPointPrototypeQueries}} to {{oal.search}}; 
thanks [~nknize]!

> Graduate LatLonPoint to core
> 
>
> Key: LUCENE-7314
> URL: https://issues.apache.org/jira/browse/LUCENE-7314
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7314.patch, LUCENE-7314.patch
>
>
> Maybe we should graduate these fields (and related queries) to core for 
> Lucene 6.1?






[jira] [Resolved] (SOLR-12419) standardise solr/contrib (private) logger names

2018-06-28 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-12419.

   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

> standardise solr/contrib (private) logger names
> ---
>
> Key: SOLR-12419
> URL: https://issues.apache.org/jira/browse/SOLR-12419
> Project: Solr
>  Issue Type: Wish
>  Components: logging
>Reporter: Christine Poerschke
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12419.patch
>
>
> Standardise to {{log}} or {{LOG}} initially for {{solr/contrib}} code only, 
> could later incrementally be extended to cover other directories too.






[jira] [Resolved] (SOLR-12418) contrib/prometheus-exporter (private) logger rename

2018-06-28 Thread Christine Poerschke (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-12418.

   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

> contrib/prometheus-exporter (private) logger rename
> ---
>
> Key: SOLR-12418
> URL: https://issues.apache.org/jira/browse/SOLR-12418
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-12418.patch
>
>







[jira] [Commented] (SOLR-12524) CdcrBidirectionalTest.testBiDir() regularly fails

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526368#comment-16526368
 ] 

ASF subversion and git services commented on SOLR-12524:


Commit e224f0ed13376198079e513b82aaed1f01e43019 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e224f0e ]

SOLR-12524: mention ids in CdcrLogReader.forwardSeek's assert
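The nature of the change can be illustrated with a hedged sketch (variable and method names here are hypothetical stand-ins, not the actual CdcrUpdateLog code): Java's `assert` accepts an optional message expression, so including the ids turns a bare `java.lang.AssertionError` into an actionable failure message.

```java
public class AssertMessageDemo {
    // Stand-in for a precondition like CdcrLogReader.forwardSeek's: the bare
    // form "assert cond;" fails with no detail, while the two-part form below
    // reports the ids involved. Run with -ea to enable assertions.
    static void forwardSeek(long subReaderId, long readerId) {
        assert subReaderId <= readerId
            : "subReader id " + subReaderId + " is ahead of reader id " + readerId;
    }

    public static void main(String[] args) {
        forwardSeek(5L, 7L); // an in-order seek passes the assertion
        System.out.println("ok");
    }
}
```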


> CdcrBidirectionalTest.testBiDir() regularly fails
> -
>
> Key: SOLR-12524
> URL: https://issues.apache.org/jira/browse/SOLR-12524
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-12524.patch
>
>
> e.g. from 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4701/consoleText
> {code}
> [junit4] ERROR   20.4s J0 | CdcrBidirectionalTest.testBiDir <<<
> [junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=28371, 
> name=cdcr-replicator-11775-thread-1, state=RUNNABLE, 
> group=TGRP-CdcrBidirectionalTest]
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50:8F8E744E68278112]:0)
> [junit4]> Caused by: java.lang.AssertionError
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50]:0)
> [junit4]> at 
> org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
> [junit4]> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [junit4]> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Commented] (SOLR-12419) standardise solr/contrib (private) logger names

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526367#comment-16526367
 ] 

ASF subversion and git services commented on SOLR-12419:


Commit d0d1fbca0157c91ba54dd36f2ea49190851245f1 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d0d1fbc ]

SOLR-12419: standardise solr/contrib (private) logger names


> standardise solr/contrib (private) logger names
> ---
>
> Key: SOLR-12419
> URL: https://issues.apache.org/jira/browse/SOLR-12419
> Project: Solr
>  Issue Type: Wish
>  Components: logging
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12419.patch
>
>
> Standardise to {{log}} or {{LOG}} initially for {{solr/contrib}} code only, 
> could later incrementally be extended to cover other directories too.






[jira] [Commented] (SOLR-12418) contrib/prometheus-exporter (private) logger rename

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526366#comment-16526366
 ] 

ASF subversion and git services commented on SOLR-12418:


Commit 60e5e6445b4dd9858e52963672b6f5c861b625b1 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=60e5e64 ]

SOLR-12418: contrib/prometheus-exporter (private) logger rename


> contrib/prometheus-exporter (private) logger rename
> ---
>
> Key: SOLR-12418
> URL: https://issues.apache.org/jira/browse/SOLR-12418
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12418.patch
>
>







[JENKINS] Lucene-Solr-repro - Build # 898 - Still unstable

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/898/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/249/consoleText

[repro] Revision: dc2c9e98632ec7ceb7fb1bee336ec0ecac377270

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=TestRandomChains 
-Dtests.method=testRandomChainsWithLargeStrings -Dtests.seed=319F02FFE4F7A86B 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=he-IL -Dtests.timezone=Pacific/Gambier -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=CdcrBidirectionalTest 
-Dtests.method=testBiDir -Dtests.seed=4CEC1947D1134C5F -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=hr-HR -Dtests.timezone=Africa/Bamako -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=SharedFSAutoReplicaFailoverTest 
-Dtests.method=test -Dtests.seed=4CEC1947D1134C5F -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-MT -Dtests.timezone=Australia/Yancowinna 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
d2ac32368ee5547a995da83ccd82b96960902adf
[repro] git fetch
[repro] git checkout dc2c9e98632ec7ceb7fb1bee336ec0ecac377270

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SharedFSAutoReplicaFailoverTest
[repro]   CdcrBidirectionalTest
[repro]lucene/analysis/common
[repro]   TestRandomChains
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SharedFSAutoReplicaFailoverTest|*.CdcrBidirectionalTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=4CEC1947D1134C5F -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=en-MT -Dtests.timezone=Australia/Yancowinna 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 112 lines...]
[repro] ant compile-test

[...truncated 102 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestRandomChains" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.seed=319F02FFE4F7A86B -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-7.x/test-data/enwiki.random.lines.txt
 -Dtests.locale=he-IL -Dtests.timezone=Pacific/Gambier -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 197 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.SharedFSAutoReplicaFailoverTest
[repro]   0/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
[repro]   4/5 failed: org.apache.lucene.analysis.core.TestRandomChains
[repro] git checkout d2ac32368ee5547a995da83ccd82b96960902adf

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Commented] (LUCENE-8370) Reproducing TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields() failures

2018-06-28 Thread Michael McCandless (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526356#comment-16526356
 ] 

Michael McCandless commented on LUCENE-8370:


+1 to fix {{RandomIndexWriter}}'s assert to skip that check when TMP is in use. 
 Do {{TieredMergePolicy}}'s javadocs advertise that it's only a "best effort" 
now?
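A hedged sketch of the proposed guard (class and method names are stand-ins so the snippet compiles on its own; in Lucene these would be org.apache.lucene.index.MergePolicy subclasses): RandomIndexWriter would only assert the post-forceMerge segment count when the configured merge policy actually guarantees it.

```java
// Minimal stand-in merge policy types for illustration only.
class MergePolicy {}
class TieredMergePolicy extends MergePolicy {}
class LogDocMergePolicy extends MergePolicy {}

public class ForceMergeCheckDemo {
    // TieredMergePolicy's forceMerge is best-effort, so an exact-count assert
    // (like the one in RandomIndexWriter.doRandomForceMerge) would be skipped
    // when it is the configured policy.
    static boolean shouldAssertSegmentCount(MergePolicy mp) {
        return !(mp instanceof TieredMergePolicy);
    }

    public static void main(String[] args) {
        System.out.println(shouldAssertSegmentCount(new TieredMergePolicy())); // false
        System.out.println(shouldAssertSegmentCount(new LogDocMergePolicy())); // true
    }
}
```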

> Reproducing 
> TestLucene{54,70}DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()
>  failures
> 
>
> Key: LUCENE-8370
> URL: https://issues.apache.org/jira/browse/LUCENE-8370
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index, general/test
>Reporter: Steve Rowe
>Assignee: Erick Erickson
>Priority: Major
>
> Policeman Jenkins found a reproducing seed for a 
> {{TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields()}}
>  failure [https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22320/]; 
> {{git bisect}} blames commit {{2519025}} on LUCENE-7976:
> {noformat}
> Checking out Revision 8c714348aeea51df19e7603905f85995bcf0371c 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestLucene70DocValuesFormat 
> -Dtests.method=testSortedSetVariableLengthBigVsStoredFields 
> -Dtests.seed=63A61B46A6934B1A -Dtests.multiplier=3 -Dtests.slow=true 
> -Dtests.locale=sw-TZ -Dtests.timezone=Pacific/Pitcairn -Dtests.asserts=true 
> -Dtests.file.encoding=US-ASCII
>[junit4] FAILURE 23.3s J2 | 
> TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields <<<
>[junit4]> Throwable #1: java.lang.AssertionError: limit=4 actual=5
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([63A61B46A6934B1A:6BE93FA35E02851]:0)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.doRandomForceMerge(RandomIndexWriter.java:372)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:386)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.getReader(RandomIndexWriter.java:332)
>[junit4]>  at 
> org.apache.lucene.index.BaseDocValuesFormatTestCase.doTestSortedSetVsStoredFields(BaseDocValuesFormatTestCase.java:2155)
>[junit4]>  at 
> org.apache.lucene.codecs.lucene70.TestLucene70DocValuesFormat.testSortedSetVariableLengthBigVsStoredFields(TestLucene70DocValuesFormat.java:93)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene70): {}, 
> docValues:{}, maxPointsInLeafNode=693, maxMBSortInHeap=5.078503794479895, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@20a604e6),
>  locale=sw-TZ, timezone=Pacific/Pitcairn
>[junit4]   2> NOTE: Linux 4.13.0-41-generic amd64/Oracle Corporation 9.0.4 
> (64-bit)/cpus=8,threads=1,free=352300304,total=518979584
> {noformat}






Re: Solr search implement in magento 1

2018-06-28 Thread Amit Hazra
Hi,

http://alpha.sharbor.com/search/?fq[manufacturer]=Maxon=1

I have implemented Solr search in Magento 1.8.

The client wants certain products to be searchable by a custom product
attribute with the value "Parent". As per the requirements, I merged that
custom attribute value ("Parent") into the query and got the proper result:
all products whose attribute value is set to "Parent" now come back.

Note: every day the search results reset to the default (all results). After
I re-save all of those products from the Magento admin, the proper
parent-based results come back, so I have to update those products regularly.

So please tell me why the search data resets every day, and how to solve it.








Solr search implement in magento 1

2018-06-28 Thread Amit Hazra


[GitHub] lucene-solr issue #411: Debugging PriorityQueue.java

2018-06-28 Thread mikemccand
Github user mikemccand commented on the issue:

https://github.com/apache/lucene-solr/pull/411
  
Change looks great; I'll push.  Thanks @rsaavedraf!


---




[jira] [Commented] (SOLR-12362) JSON loader should save the relationship of children

2018-06-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526338#comment-16526338
 ] 

David Smiley commented on SOLR-12362:
-

[~janhoy] mentioned that it would be brittle to differentiate a sub-JSON 
structure as a child doc by looking for the special values "add" or "set" 
(partial-update commands), since a child doc could conceivably want fields 
named as such.

See JsonLoader's {{parseExtendedFieldValue()}}.  Today, at line 597, it calls 
isChildDoc to differentiate, where we check for the uniqueKey field.

Proposal: look at the SolrInputField parameter to see which field we're adding 
this to (i.e. which key is this sub-JSON structure associated with?).  Is that 
a field in the schema?  If it is, then we have an "extended field value" (most 
likely a partial update).  If it is not, then we have a child document.  This 
proposal of course requires that the field already exist, and in a "data 
driven" mode that might not be the case.  But in a "data driven" mode we 
shouldn't see "extended field values" at all until there is at least some data 
to partially update?
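The proposal can be sketched as follows (a simplified stand-in, not JsonLoader's actual code; the schema is modelled here as a plain set of field names, and all field names are hypothetical):

```java
import java.util.Set;

public class ChildDocHeuristicDemo {
    // Proposed rule: a nested JSON object under key "fieldName" is an extended
    // field value (e.g. a partial-update command) when that key is a schema
    // field, and a child document otherwise.
    static boolean isChildDoc(Set<String> schemaFields, String fieldName) {
        return !schemaFields.contains(fieldName);
    }

    public static void main(String[] args) {
        Set<String> schema = Set.of("id", "title_s", "price_f");
        System.out.println(isChildDoc(schema, "title_s"));  // false: partial update
        System.out.println(isChildDoc(schema, "comments")); // true: child document
    }
}
```

In a data-driven (schemaless) setup the field may not exist yet, which is exactly the caveat raised in the comment above.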

> JSON loader should save the relationship of children
> 
>
> Key: SOLR-12362
> URL: https://issues.apache.org/jira/browse/SOLR-12362
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Once _childDocuments in SolrInputDocument is changed to a Map, JsonLoader 
> should add the child document to the map while saving its key name, to 
> maintain the child's relationship to its parent.






[jira] [Commented] (SOLR-12419) standardise solr/contrib (private) logger names

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12419?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526335#comment-16526335
 ] 

ASF subversion and git services commented on SOLR-12419:


Commit e1d2749b20a3b04beef08ab75a2c6deb4f2cdf41 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e1d2749 ]

SOLR-12419: standardise solr/contrib (private) logger names


> standardise solr/contrib (private) logger names
> ---
>
> Key: SOLR-12419
> URL: https://issues.apache.org/jira/browse/SOLR-12419
> Project: Solr
>  Issue Type: Wish
>  Components: logging
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12419.patch
>
>
> Standardise to {{log}} or {{LOG}} initially for {{solr/contrib}} code only, 
> could later incrementally be extended to cover other directories too.






[jira] [Commented] (SOLR-12418) contrib/prometheus-exporter (private) logger rename

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526334#comment-16526334
 ] 

ASF subversion and git services commented on SOLR-12418:


Commit f459bf4397c4201b3ddd47a9f00bbcc877351d5c in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f459bf4 ]

SOLR-12418: contrib/prometheus-exporter (private) logger rename


> contrib/prometheus-exporter (private) logger rename
> ---
>
> Key: SOLR-12418
> URL: https://issues.apache.org/jira/browse/SOLR-12418
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-12418.patch
>
>







[jira] [Commented] (SOLR-12524) CdcrBidirectionalTest.testBiDir() regularly fails

2018-06-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526336#comment-16526336
 ] 

ASF subversion and git services commented on SOLR-12524:


Commit ab666ff9cfed0d816c58bf64ebf295f7f38f5cd1 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ab666ff ]

SOLR-12524: mention ids in CdcrLogReader.forwardSeek's assert


> CdcrBidirectionalTest.testBiDir() regularly fails
> -
>
> Key: SOLR-12524
> URL: https://issues.apache.org/jira/browse/SOLR-12524
> Project: Solr
>  Issue Type: Test
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-12524.patch
>
>
> e.g. from 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4701/consoleText
> {code}
> [junit4] ERROR   20.4s J0 | CdcrBidirectionalTest.testBiDir <<<
> [junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=28371, 
> name=cdcr-replicator-11775-thread-1, state=RUNNABLE, 
> group=TGRP-CdcrBidirectionalTest]
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50:8F8E744E68278112]:0)
> [junit4]> Caused by: java.lang.AssertionError
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50]:0)
> [junit4]> at 
> org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
> [junit4]> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [junit4]> at java.lang.Thread.run(Thread.java:748)
> {code}






[jira] [Updated] (SOLR-12525) UnsupportedOperationException when running Solr 5.3 with JDK10

2018-06-28 Thread Ethan Li (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12525?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ethan Li updated SOLR-12525:

Description: 
Although the Solr 5.3.1 documentation says it runs on JDK 7 or above, we ran 
into some problems when trying to run Solr 5.3.1 on JDK 10.

We removed the following JVM options from solr.in.sh, as Solr suggested, 
because it would not start with them:

UseConcMarkSweepGC
 UseParNewGC
 PrintHeapAtGC
 PrintGCDateStamps
 PrintGCTimeStamps
 PrintTenuringDistribution
 PrintGCApplicationStoppedTime

These options were left in solr.in.sh:
 # Enable verbose GC logging
 GC_LOG_OPTS="-verbose:gc -XX:+PrintGCDetails"

 # These GC settings have shown to work well for a number of common Solr 
workloads
 GC_TUNE="-XX:NewRatio=3 \
 -XX:SurvivorRatio=4 \
 -XX:TargetSurvivorRatio=90 \
 -XX:MaxTenuringThreshold=8 \
 -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
 -XX:+CMSScavengeBeforeRemark \
 -XX:PretenureSizeThreshold=64m \
 -XX:+UseCMSInitiatingOccupancyOnly \
 -XX:CMSInitiatingOccupancyFraction=50 \
 -XX:CMSMaxAbortablePrecleanTime=6000 \
 -XX:+CMSParallelRemarkEnabled \
 -XX:+ParallelRefProcEnabled"

After that, Solr runs but logs an error:

[0.001s][warning][gc] -Xloggc is deprecated. Will use 
-Xlog:gc:/solr/logs/solr_gc.log instead.
 [0.001s][warning][gc] -XX:+PrintGCDetails is deprecated. Will use -Xlog:gc* 
instead.
 [0.003s][info ][gc] Using Serial
 WARNING: System properties and/or JVM args set. Consider using --dry-run or 
--exec
 0 INFO (main) [ ] o.e.j.u.log Logging initialized @532ms
 205 INFO (main) [ ] o.e.j.s.Server jetty-9.2.11.v20150529
 218 WARN (main) [ ] o.e.j.s.h.RequestLogHandler !RequestLog
 220 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor 
[file:/home/solr/solr-5.3.1/server/contexts/|file:///home/solr/solr-5.3.1/server/contexts/]
 at interval 0
 559 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for 
/solr, did not find org.apache.jasper.servlet.JspServlet
 569 WARN (main) [ ] o.e.j.s.SecurityHandler 
ServletContext@o.e.j.w.WebAppContext@1a75e76a

{/solr,file:/home/solr/solr-5.3.1/server/solr-webapp/webapp/,STARTING}

{/home/solr/solr-5.3.1/server/solr-webapp/webapp} has uncovered http methods 
for path: /
 577 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 
WebAppClassLoader=1904783235@7188af83
 625 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr 
(NoInitialContextEx)
 626 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property 
solr.solr.home: /solr/data
 627 INFO (main) [ ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for 
directory: '/solr/data/'
 750 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from 
/solr/data/solr.xml
 817 INFO (main) [ ] o.a.s.c.CoresLocator Config-defined core root directory: 
/solr/data
 [1.402s][info ][gc] GC(0) Pause Full (Metadata GC Threshold) 85M->7M(490M) 
37.281ms
 875 INFO (main) [ ] o.a.s.c.CoreContainer New CoreContainer 1193398802
 875 INFO (main) [ ] o.a.s.c.CoreContainer Loading cores into CoreContainer 
[instanceDir=/solr/data/]
 875 INFO (main) [ ] o.a.s.c.CoreContainer loading shared library: 
/solr/data/lib
 875 WARN (main) [ ] o.a.s.c.SolrResourceLoader Can't find (or read) directory 
to add to classloader: lib (resolved as: /solr/data/lib).
 889 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory created with 
socketTimeout : 60,connTimeout : 6,maxConnectionsPerHost : 
20,maxConnections : 1,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
 1036 INFO (main) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler 
HTTP client with params: socketTimeout=60=6=true
 1038 INFO (main) [ ] o.a.s.l.LogWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
 1039 INFO (main) [ ] o.a.s.l.LogWatcher Registering Log Listener [Log4j 
(org.slf4j.impl.Log4jLoggerFactory)]
 1040 INFO (main) [ ] o.a.s.c.CoreContainer Security conf doesn't exist. 
Skipping setup for authorization module.
 1041 INFO (main) [ ] o.a.s.c.CoreContainer No authentication plugin used.
 1179 INFO (main) [ ] o.a.s.c.CoresLocator Looking for core definitions 
underneath /solr/data
 1180 INFO (main) [ ] o.a.s.c.CoresLocator Found 0 core definitions
 1185 INFO (main) [ ] o.a.s.s.SolrDispatchFilter 
user.dir=/home/solr/solr-5.3.1/server
 1186 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init() done
 1216 INFO (main) [ ] o.e.j.s.h.ContextHandler Started 
o.e.j.w.WebAppContext@1a75e76a{/solr,[file:/home/solr/solr-5.3.1/server/solr-webapp/webapp/,AVAILABLE|file:///home/solr/solr-5.3.1/server/solr-webapp/webapp/,AVAILABLE]}

{/home/solr/solr-5.3.1/server/solr-webapp/webapp}

1224 INFO (main) [ ] o.e.j.s.ServerConnector Started ServerConnector@2102a4d5

{HTTP/1.1}

{0.0.0.0:8983}

1228 INFO (main) [ ] o.e.j.s.Server Started @1762ms
 14426 WARN (qtp1045997582-15) 

[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 86 - Still unstable

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/86/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
events: [CapturedEvent{timestamp=30440024169784034, stage=STARTED, 
actionName='null', event={   "id":"6c25097567f5d2Tcypqol625ax13sg8v9hjklyq7",   
"source":"index_size_trigger2",   "eventTime":30440020039431634,   
"eventType":"INDEXSIZE",   "properties":{ "__start__":1, 
"aboveSize":{"testSplitIntegration_collection":["{\"core_node1\":{\n
\"leader\":\"true\",\n\"SEARCHER.searcher.maxDoc\":25,\n
\"SEARCHER.searcher.deletedDocs\":0,\n\"INDEX.sizeInBytes\":22740,\n
\"node_name\":\"127.0.0.1:10001_solr\",\n\"type\":\"NRT\",\n
\"SEARCHER.searcher.numDocs\":25,\n\"__bytes__\":22740,\n
\"core\":\"testSplitIntegration_collection_shard1_replica_n1\",\n
\"__docs__\":25,\n\"violationType\":\"aboveDocs\",\n
\"state\":\"active\",\n\"INDEX.sizeInGB\":2.117827534675598E-5,\n
\"shard\":\"shard1\",\n
\"collection\":\"testSplitIntegration_collection\"}}"]}, "belowSize":{},
 "_enqueue_time_":30440024160063934, "requestedOps":["Op{action=SPLITSHARD, 
hints={COLL_SHARD=[{\n  \"first\":\"testSplitIntegration_collection\",\n  
\"second\":\"shard1\"}]}}"]}}, context={}, config={   
"trigger":"index_size_trigger2",   "stage":[ "STARTED", "ABORTED", 
"SUCCEEDED", "FAILED"],   "afterAction":[ "compute_plan", 
"execute_plan"],   
"class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener",
   "beforeAction":[ "compute_plan", "execute_plan"]}, message='null'}, 
CapturedEvent{timestamp=30440024307906884, stage=BEFORE_ACTION, 
actionName='compute_plan', event={   
"id":"6c25097567f5d2Tcypqol625ax13sg8v9hjklyq7",   
"source":"index_size_trigger2",   "eventTime":30440020039431634,   
"eventType":"INDEXSIZE",   "properties":{ "__start__":1, 
"aboveSize":{"testSplitIntegration_collection":["{\"core_node1\":{\n
\"leader\":\"true\",\n\"SEARCHER.searcher.maxDoc\":25,\n
\"SEARCHER.searcher.deletedDocs\":0,\n\"INDEX.sizeInBytes\":22740,\n
\"node_name\":\"127.0.0.1:10001_solr\",\n\"type\":\"NRT\",\n
\"SEARCHER.searcher.numDocs\":25,\n\"__bytes__\":22740,\n
\"core\":\"testSplitIntegration_collection_shard1_replica_n1\",\n
\"__docs__\":25,\n\"violationType\":\"aboveDocs\",\n
\"state\":\"active\",\n\"INDEX.sizeInGB\":2.117827534675598E-5,\n
\"shard\":\"shard1\",\n
\"collection\":\"testSplitIntegration_collection\"}}"]}, "belowSize":{},
 "_enqueue_time_":30440024160063934, "requestedOps":["Op{action=SPLITSHARD, 
hints={COLL_SHARD=[{\n  \"first\":\"testSplitIntegration_collection\",\n  
\"second\":\"shard1\"}]}}"]}}, 
context={properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger2}, 
config={   "trigger":"index_size_trigger2",   "stage":[ "STARTED", 
"ABORTED", "SUCCEEDED", "FAILED"],   "afterAction":[ 
"compute_plan", "execute_plan"],   
"class":"org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest$CapturingTriggerListener",
   "beforeAction":[ "compute_plan", "execute_plan"]}, message='null'}, 
CapturedEvent{timestamp=30440024380214534, stage=AFTER_ACTION, 
actionName='compute_plan', event={   
"id":"6c25097567f5d2Tcypqol625ax13sg8v9hjklyq7",   
"source":"index_size_trigger2",   "eventTime":30440020039431634,   
"eventType":"INDEXSIZE",   "properties":{ "__start__":1, 
"aboveSize":{"testSplitIntegration_collection":["{\"core_node1\":{\n
\"leader\":\"true\",\n\"SEARCHER.searcher.maxDoc\":25,\n
\"SEARCHER.searcher.deletedDocs\":0,\n\"INDEX.sizeInBytes\":22740,\n
\"node_name\":\"127.0.0.1:10001_solr\",\n\"type\":\"NRT\",\n
\"SEARCHER.searcher.numDocs\":25,\n\"__bytes__\":22740,\n
\"core\":\"testSplitIntegration_collection_shard1_replica_n1\",\n
\"__docs__\":25,\n\"violationType\":\"aboveDocs\",\n
\"state\":\"active\",\n\"INDEX.sizeInGB\":2.117827534675598E-5,\n
\"shard\":\"shard1\",\n
\"collection\":\"testSplitIntegration_collection\"}}"]}, "belowSize":{},
 "_enqueue_time_":30440024160063934, "requestedOps":["Op{action=SPLITSHARD, 
hints={COLL_SHARD=[{\n  \"first\":\"testSplitIntegration_collection\",\n  
\"second\":\"shard1\"}]}}"]}}, 
context={properties.operations=[{class=org.apache.solr.client.solrj.request.CollectionAdminRequest$SplitShard,
 method=GET, params.action=SPLITSHARD, 
params.collection=testSplitIntegration_collection, params.shard=shard1}], 
properties.BEFORE_ACTION=[compute_plan], source=index_size_trigger2, 
properties.AFTER_ACTION=[compute_plan]}, config={   
"trigger":"index_size_trigger2",   "stage":[ "STARTED", "ABORTED", 
"SUCCEEDED", "FAILED"],   "afterAction":[ "compute_plan", 
"execute_plan"],   

[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 693 - Still Unstable!

2018-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/693/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testParallelCommitStream

Error Message:
expected:<5> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<5> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([86855A71E212A480:A66F38717E5349CC]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest.testParallelCommitStream(StreamDecoratorTest.java:3025)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 16260 lines...]
   [junit4] Suite: 

[jira] [Reopened] (SOLR-12362) JSON loader should save the relationship of children

2018-06-28 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reopened SOLR-12362:
-

Un-resolving to potentially improve this further, based on off-topic comments in 
SOLR-12441 where we concluded we don't want to force child documents to have a 
uniqueKey at the UpdateHandler layer (e.g. JSON syntax) just to differentiate them 
from the "extended field value syntax" used for partial updates.  So the question 
then is: how do we differentiate?

> JSON loader should save the relationship of children
> 
>
> Key: SOLR-12362
> URL: https://issues.apache.org/jira/browse/SOLR-12362
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.5
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> Once _childDocuments in SolrInputDocument is changed to a Map, JsonLoader 
> should add the child document to the map while saving its key name, to 
> maintain the child's relationship to its parent.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16526311#comment-16526311
 ] 

David Smiley commented on SOLR-12441:
-

I was about to suggest a new issue for generating an ID, but I guess it's 
in scope here since we're already adding fields to child docs; the uniqueKey is another 
field to add if it's absent.  Although the related issue of differentiating a child 
doc from a partial update / extended value syntax was handled here: 
https://issues.apache.org/jira/browse/SOLR-12362?focusedCommentId=16502456&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16502456
  We can un-resolve that issue and do something different.  I think continuing that 
conversation here is distracting to this issue.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}






[jira] [Created] (SOLR-12525) Solr 5.3 and JDK10 compatibility investigation

2018-06-28 Thread Ethan Li (JIRA)
Ethan Li created SOLR-12525:
---

 Summary: Solr 5.3 and JDK10 compatibility investigation
 Key: SOLR-12525
 URL: https://issues.apache.org/jira/browse/SOLR-12525
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 5.3.1
Reporter: Ethan Li


We are trying to use JDK 10 to run Solr 5.3.1, and we are facing some problems:

We removed the following Java options from solr.in.sh, as Solr suggested, 
because it won't start with them:

UseConcMarkSweepGC
 UseParNewGC
 PrintHeapAtGC
 PrintGCDateStamps
 PrintGCTimeStamps
 PrintTenuringDistribution
 PrintGCApplicationStoppedTime

And the options left in solr.in.sh:
 # Enable verbose GC logging
 GC_LOG_OPTS="-verbose:gc -XX:+PrintGCDetails"

 # These GC settings have shown to work well for a number of common Solr 
workloads
 GC_TUNE="-XX:NewRatio=3 \
 -XX:SurvivorRatio=4 \
 -XX:TargetSurvivorRatio=90 \
 -XX:MaxTenuringThreshold=8 \
 -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
 -XX:+CMSScavengeBeforeRemark \
 -XX:PretenureSizeThreshold=64m \
 -XX:+UseCMSInitiatingOccupancyOnly \
 -XX:CMSInitiatingOccupancyFraction=50 \
 -XX:CMSMaxAbortablePrecleanTime=6000 \
 -XX:+CMSParallelRemarkEnabled \
 -XX:+ParallelRefProcEnabled"
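For reference, the removed flags have JDK 9+ counterparts via unified logging, so a sketch of an equivalent, JDK 10-friendly block might look like the following (the log path is illustrative, and the G1 values are only an assumed starting point, not a tested tune):

```shell
# Sketch only: JDK 9+ replaced -verbose:gc / -XX:+PrintGC* with unified logging
# (-Xlog), and deprecated the CMS collector flags (UseConcMarkSweepGC, UseParNewGC).
GC_LOG_OPTS="-Xlog:gc*:file=/solr/logs/solr_gc.log:time,uptime,level,tags"

# G1 is the default collector on JDK 9+; a minimal stand-in for the CMS GC_TUNE block.
GC_TUNE="-XX:+UseG1GC -XX:MaxGCPauseMillis=250"
echo "$GC_TUNE"
```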

After that Solr runs, but with the following warnings:

[0.001s][warning][gc] -Xloggc is deprecated. Will use 
-Xlog:gc:/solr/logs/solr_gc.log instead.
 [0.001s][warning][gc] -XX:+PrintGCDetails is deprecated. Will use -Xlog:gc* 
instead.
 [0.003s][info ][gc] Using Serial
 WARNING: System properties and/or JVM args set. Consider using --dry-run or 
--exec
 0 INFO (main) [ ] o.e.j.u.log Logging initialized @532ms
 205 INFO (main) [ ] o.e.j.s.Server jetty-9.2.11.v20150529
 218 WARN (main) [ ] o.e.j.s.h.RequestLogHandler !RequestLog
 220 INFO (main) [ ] o.e.j.d.p.ScanningAppProvider Deployment monitor 
file:/home/solr/solr-5.3.1/server/contexts/ at interval 0
 559 INFO (main) [ ] o.e.j.w.StandardDescriptorProcessor NO JSP Support for 
/solr, did not find org.apache.jasper.servlet.JspServlet
 569 WARN (main) [ ] o.e.j.s.SecurityHandler 
ServletContext@o.e.j.w.WebAppContext@1a75e76a

{/solr,file:/home/solr/solr-5.3.1/server/solr-webapp/webapp/,STARTING} 
\{/home/solr/solr-5.3.1/server/solr-webapp/webapp} has uncovered http methods 
for path: /
 577 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init(): 
WebAppClassLoader=1904783235@7188af83
 625 INFO (main) [ ] o.a.s.c.SolrResourceLoader JNDI not configured for solr 
(NoInitialContextEx)
 626 INFO (main) [ ] o.a.s.c.SolrResourceLoader using system property 
solr.solr.home: /solr/data
 627 INFO (main) [ ] o.a.s.c.SolrResourceLoader new SolrResourceLoader for 
directory: '/solr/data/'
 750 INFO (main) [ ] o.a.s.c.SolrXmlConfig Loading container configuration from 
/solr/data/solr.xml
 817 INFO (main) [ ] o.a.s.c.CoresLocator Config-defined core root directory: 
/solr/data
 [1.402s][info ][gc] GC(0) Pause Full (Metadata GC Threshold) 85M->7M(490M) 
37.281ms
 875 INFO (main) [ ] o.a.s.c.CoreContainer New CoreContainer 1193398802
 875 INFO (main) [ ] o.a.s.c.CoreContainer Loading cores into CoreContainer 
[instanceDir=/solr/data/]
 875 INFO (main) [ ] o.a.s.c.CoreContainer loading shared library: 
/solr/data/lib
 875 WARN (main) [ ] o.a.s.c.SolrResourceLoader Can't find (or read) directory 
to add to classloader: lib (resolved as: /solr/data/lib).
 889 INFO (main) [ ] o.a.s.h.c.HttpShardHandlerFactory created with 
socketTimeout : 60,connTimeout : 6,maxConnectionsPerHost : 
20,maxConnections : 1,corePoolSize : 0,maximumPoolSize : 
2147483647,maxThreadIdleTime : 5,sizeOfQueue : -1,fairnessPolicy : 
false,useRetries : false,
 1036 INFO (main) [ ] o.a.s.u.UpdateShardHandler Creating UpdateShardHandler 
HTTP client with params: socketTimeout=60=6=true
 1038 INFO (main) [ ] o.a.s.l.LogWatcher SLF4J impl is 
org.slf4j.impl.Log4jLoggerFactory
 1039 INFO (main) [ ] o.a.s.l.LogWatcher Registering Log Listener [Log4j 
(org.slf4j.impl.Log4jLoggerFactory)]
 1040 INFO (main) [ ] o.a.s.c.CoreContainer Security conf doesn't exist. 
Skipping setup for authorization module.
 1041 INFO (main) [ ] o.a.s.c.CoreContainer No authentication plugin used.
 1179 INFO (main) [ ] o.a.s.c.CoresLocator Looking for core definitions 
underneath /solr/data
 1180 INFO (main) [ ] o.a.s.c.CoresLocator Found 0 core definitions
 1185 INFO (main) [ ] o.a.s.s.SolrDispatchFilter 
user.dir=/home/solr/solr-5.3.1/server
 1186 INFO (main) [ ] o.a.s.s.SolrDispatchFilter SolrDispatchFilter.init() done
 1216 INFO (main) [ ] o.e.j.s.h.ContextHandler Started 
o.e.j.w.WebAppContext@1a75e76a\{/solr,file:/home/solr/solr-5.3.1/server/solr-webapp/webapp/,AVAILABLE}{/home/solr/solr-5.3.1/server/solr-webapp/webapp}

1224 INFO (main) [ ] o.e.j.s.ServerConnector Started ServerConnector@2102a4d5


[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22338 - Unstable!

2018-06-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22338/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=8111, 
name=cdcr-replicator-4183-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=8111, name=cdcr-replicator-4183-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([853F9D88BCE1E97E]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13382 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> 629687 INFO  
(SUITE-CdcrBidirectionalTest-seed#[853F9D88BCE1E97E]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.cdcr.CdcrBidirectionalTest_853F9D88BCE1E97E-001/init-core-data-001
   [junit4]   2> 629688 INFO  
(SUITE-CdcrBidirectionalTest-seed#[853F9D88BCE1E97E]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 629689 INFO  
(SUITE-CdcrBidirectionalTest-seed#[853F9D88BCE1E97E]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason="", ssl=0.0/0.0, value=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 629690 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[853F9D88BCE1E97E]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 629691 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[853F9D88BCE1E97E]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.cdcr.CdcrBidirectionalTest_853F9D88BCE1E97E-001/cdcr-cluster2-001
   [junit4]   2> 629691 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[853F9D88BCE1E97E]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 629691 INFO  (Thread-1803) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 629691 INFO  (Thread-1803) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 629695 ERROR (Thread-1803) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 629791 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[853F9D88BCE1E97E]) [] 
o.a.s.c.ZkTestServer start zk server on port:38531
   [junit4]   2> 629801 INFO  (zkConnectionManagerCallback-1811-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 629805 INFO  (jetty-launcher-1808-thread-1) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 10+46
   [junit4]   2> 629817 INFO  (jetty-launcher-1808-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 629817 INFO  (jetty-launcher-1808-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 629817 INFO  (jetty-launcher-1808-thread-1) [] 
o.e.j.s.session node0 Scavenging every 60ms
   [junit4]   2> 629817 INFO  (jetty-launcher-1808-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@1d9f5a71{/solr,null,AVAILABLE}
   [junit4]   2> 629819 INFO  (jetty-launcher-1808-thread-1) [] 
o.e.j.s.AbstractConnector Started ServerConnector@1eac3417{SSL,[ssl, 
http/1.1]}{127.0.0.1:43799}
   [junit4]   2> 629819 INFO  (jetty-launcher-1808-thread-1) [] 
o.e.j.s.Server Started @629855ms
   [junit4]   2> 629819 INFO  (jetty-launcher-1808-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=43799}
   [junit4]   2> 629820 ERROR (jetty-launcher-1808-thread-1) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   

[jira] [Commented] (SOLR-11735) TransformerFactory to support SolrCoreAware

2018-06-28 Thread Markus Jelsma (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16526239#comment-16526239
 ] 

Markus Jelsma commented on SOLR-11735:
--

Thanks [~hlavki]! Now I can finally run unit tests without packaging my own 
patched Solr artifact.



> TransformerFactory to support SolrCoreAware
> ---
>
> Key: SOLR-11735
> URL: https://issues.apache.org/jira/browse/SOLR-11735
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Markus Jelsma
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11735.patch
>
>
> Currently TransformerFactory does not support SolrCoreAware due to SOLR-8311.






[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-28 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16526235#comment-16526235
 ] 

mosh commented on SOLR-12441:
-

It seems like we have three options:
 # Leave the generation of the childDoc id as the responsibility of the user.
 # Only allow nested JSON docs inside an array.
 # Change the auto-guessing logic so we can support both flat-style and nested-style 
JSON.

I will have to think about this one; it's a real head-scratcher.
Do any of you have a preference?

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}






[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16526208#comment-16526208
 ] 

Jan Høydahl commented on SOLR-12441:


Elastic will always index nested objects as plain flat fields on the main 
document unless the mapping (schema) [explicitly defines a particular json-path 
as 
"nested"|https://www.elastic.co/guide/en/elasticsearch/reference/current/nested.html].
 I think this explicit definition makes sense for several reasons. We also need 
to make sure that users don't index two docs where one is adding a simple value 
to the "myChildren" field while another document adds a nested document below 
the same field. So it sounds like the schema should have a way to define 
{{nested=true}} for certain fields or paths (path.to.field) so that the URP knows 
how to interpret a doc. That would also remove the need for guessing based on the 
presence of an id field or whatever; you just ask the {{IndexSchema}}. 
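
A hypothetical schema fragment for that idea (note: Solr's schema has no {{nested}} attribute today; the attribute and field names below are invented purely to illustrate the proposal):

```xml
<!-- Hypothetical, not valid Solr schema syntax today: mark a field path as
     holding nested child documents, so the URP/loader can interpret incoming
     docs without guessing based on the presence of an id field. -->
<field name="myChildren" type="ignored" nested="true" multiValued="true"/>
```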

We then also need to handle the case where a sub doc wants to use the same 
field name as a parent and those are different types, e.g.

{code:javascript}
{ "id": 1, 
  "name" : "john", 
  "address" : "London", 
  "child" : { 
"name" : "peter", 
"address" : { 
  "street" : "oxford st 3", 
  "zip" : "12345"}}}
{code}

In ES this is legal, since the default type-guessing will create the Lucene 
fields "name", "address", "child.name", "child.address.street", and 
"child.address.zip". And in the case of nested docs I guess the "address" field 
name would not share the same type in the mapping.
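
The flattening behavior described above can be sketched in plain Java (an illustration only; this is not ES or Solr code, and the class and method names are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of ES-style default flattening: nested objects become
// dot-separated field names on the root document.
class FlattenExample {
    @SuppressWarnings("unchecked")
    static Map<String, Object> flatten(String prefix, Map<String, Object> doc) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : doc.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                // Recurse into nested objects, extending the field path.
                out.putAll(flatten(key, (Map<String, Object>) e.getValue()));
            } else {
                out.put(key, e.getValue());
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> address = new LinkedHashMap<>();
        address.put("street", "oxford st 3");
        address.put("zip", "12345");
        Map<String, Object> child = new LinkedHashMap<>();
        child.put("name", "peter");
        child.put("address", address);
        Map<String, Object> doc = new LinkedHashMap<>();
        doc.put("id", 1);
        doc.put("name", "john");
        doc.put("address", "London");
        doc.put("child", child);
        // Top-level "address" holds a String while "child.address.*" came from
        // an object, which is the type clash described above.
        System.out.println(flatten("", doc).keySet());
    }
}
```

Running this prints the keys id, name, address, child.name, child.address.street, and child.address.zip, showing how the two uses of "address" end up as distinct field names.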

So in order to tackle this we'd need to make some changes to the auto-guessing 
logic, as well as add the ability to use a fully qualified field name for the 
nested parts of a document, if we'd like to support both flat-style and 
nested-style in the same source document.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1056 - Still Failing

2018-06-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1056/

No tests ran.

Build Log:
[...truncated 24156 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2245 links (1794 relative) to 3131 anchors in 247 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 

[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-28 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16526204#comment-16526204
 ] 

mosh commented on SOLR-12441:
-

{quote}
{code:java}
{
  "id": "X998_Y998",
  "from": { "name": "Peyton Manning", "id": "X18" },
  "message": "Where's my contract?",
  "actions": [
    { "name": "Comment", "link": "http://www.facebook.com/X998/posts/Y998" },
    { "name": "Like", "link": "http://www.facebook.com/X998/posts/Y998" }
  ],
  "type": "status",
  "created_time": "2010-08-02T21:27:44+",
  "updated_time": "2010-08-02T21:27:44+"
}
{code}
This is a sample Facebook API response. The array syntax will index the array 
as child documents, but it will not index the child document under the key 
"from":
{code:java}
{ "from": { "name": "Peyton Manning", "id": "X18" } }
{code}
It would be nice if you could just index JSON as is, like you can in 
Elasticsearch, moving the responsibility from the user to Solr itself.{quote}

Public APIs seem to use this pattern too.
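
Concretely, under the array-based behavior described above, the single nested object would have to be rewritten as a one-element array before it gets indexed as a child document (a sketch of the required input shape, not actual loader output):

```json
{
  "id": "X998_Y998",
  "from": [ { "name": "Peyton Manning", "id": "X18" } ]
}
```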

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}






[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-06-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526177#comment-16526177
 ] 

Jan Høydahl commented on SOLR-12441:


{quote}Perhaps we could specify in the documentation that these values can only 
be added in child documents which contain an id?
{quote}
That sounds fragile. In practice it means that no-one will trust the auto id 
feature, because it can bomb on you any time a sub doc contains some unknown 
field name. I'd rather require a list - I guess that's the most common use of 
child docs anyway. Why would you want a single child doc, when it could be 
expressed as fields on the main doc instead?
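The rule Jan proposes could be sketched like this (illustrative Python, not the actual URP code): only JSON arrays of objects become child documents, while a lone nested object is flattened into prefixed fields on the parent.

```python
# Illustrative sketch of the proposed rule: only arrays of objects
# become child documents; a single nested object is flattened into
# dotted fields on the parent instead.
def split_children(doc):
    parent, children = {}, []
    for key, value in doc.items():
        if isinstance(value, list) and value and all(isinstance(v, dict) for v in value):
            children.extend(value)          # unambiguous: a list of sub docs
        elif isinstance(value, dict):
            for k, v in value.items():      # single object -> fields on parent
                parent[f"{key}.{k}"] = v
        else:
            parent[key] = value
    return parent, children

p, c = split_children({"id": "1", "author": {"name": "Jan"},
                       "comments": [{"text": "hi"}, {"text": "bye"}]})
print(p)  # {'id': '1', 'author.name': 'Jan'}
print(c)  # [{'text': 'hi'}, {'text': 'bye'}]
```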

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used to derive the 
> parentFilter, eliminating the need to provide one explicitly; it will be set 
> by default to "_level_:queriedFieldLevel".
> _nestPath_: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}






[jira] [Resolved] (SOLR-12059) Unable to rename solr.xml

2018-06-28 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12059.

Resolution: Not A Problem

Closing as Not a problem. I cannot imagine a single reason for needing to 
rename solr.xml or other config files for that matter.

> Unable to rename solr.xml
> -
>
> Key: SOLR-12059
> URL: https://issues.apache.org/jira/browse/SOLR-12059
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5.1
> Environment: Renaming of solr.xml in the $SOLR_HOME directory
>Reporter: Edwin Yeo Zheng Lin
>Priority: Major
>
> I am able to rename file names like solrconfig.xml and solr.log to custom 
> names like myconfig.xml and my.log quite seamlessly. 
> However, I am not able to do the same for solr.xml. I understand that the 
> name solr.xml is hard-coded in SolrXmlConfig.java, meaning a re-compile of 
> the jar file is required in order to rename it.
> Since we can rename files like solrconfig.xml via the properties files, 
> shouldn't we be able to do the same for solr.xml?
>  
>  






[jira] [Updated] (SOLR-9268) Support adding/updating backup repository configurations via API

2018-06-28 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-9268:

Component/s: Backup/Restore

> Support adding/updating backup repository configurations via API
> 
>
> Key: SOLR-9268
> URL: https://issues.apache.org/jira/browse/SOLR-9268
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Reporter: Hrishikesh Gadre
>Priority: Major
> Attachments: SOLR-9268.patch
>
>
> Currently users need to manually modify solr.xml in Zookeeper to update the 
> configuration parameters (and restart Solr cluster). This is not quite user 
> friendly. We should provide an API to update this configuration. (This came 
> up during the discussions in SOLR-9242).






[jira] [Commented] (SOLR-9268) Support adding/updating backup repository configurations via API

2018-06-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-9268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526155#comment-16526155
 ] 

Jan Høydahl commented on SOLR-9268:
---

Ready to move further with this?

Perhaps Solr 8.0 is a good time to get rid of {{solr.xml}} and replace it with 
some {{/clusterconfig.json}} file in ZK? Even if clusterProps now supports more 
complex objects as values, I think it makes sense to leave clusterProps alone 
as more generic K/V props, and move all the solr.xml stuff into a new config 
modelled after security.json. The file would then look something like this:
{code:javascript}
{
  "backup-repos" : [
    {
      "class" : "solr.S3BackupRepository",
      "bucket" : "s3:/foo",
      "credentials" : { ... }
    },
    {
      "class" : "solr.AzureFilesRepository",
      ...
    }
  ],
  "shardHandler" : { "class" : "solr.HttpShardHandlerFactory", ... },
  "zookeeper" : { "zkClientTimeout" : ... }
}
{code}
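To make the proposed shape concrete, here is a sketch of a client reading such a file. Both the {{clusterconfig.json}} name and its schema are hypothetical (they follow the proposal above, not any existing Solr API):

```python
import json

# Sketch of reading the proposed (hypothetical) clusterconfig.json.
# The schema mirrors the example above; none of this is an existing Solr API.
sample = """
{
  "backup-repos": [
    {"class": "solr.S3BackupRepository", "bucket": "s3:/foo"},
    {"class": "solr.AzureFilesRepository"}
  ],
  "shardHandler": {"class": "solr.HttpShardHandlerFactory"},
  "zookeeper": {"zkClientTimeout": 30000}
}
"""
config = json.loads(sample)
repo_classes = [r["class"] for r in config["backup-repos"]]
print(repo_classes)  # ['solr.S3BackupRepository', 'solr.AzureFilesRepository']
```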
 

> Support adding/updating backup repository configurations via API
> 
>
> Key: SOLR-9268
> URL: https://issues.apache.org/jira/browse/SOLR-9268
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>Priority: Major
> Attachments: SOLR-9268.patch
>
>
> Currently users need to manually modify solr.xml in Zookeeper to update the 
> configuration parameters (and restart Solr cluster). This is not quite user 
> friendly. We should provide an API to update this configuration. (This came 
> up during the discussions in SOLR-9242).






[jira] [Commented] (SOLR-9038) Support snapshot management functionality for a solr collection

2018-06-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-9038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16526146#comment-16526146
 ] 

Jan Høydahl commented on SOLR-9038:
---

This should be resolved, yes? CHANGES contains a reference to this Jira in 
both 6.2.0 and 6.4.0; please set the fixed version accordingly, 
[~yo...@apache.org], [~markrmil...@gmail.com].

> Support snapshot management functionality for a solr collection
> ---
>
> Key: SOLR-9038
> URL: https://issues.apache.org/jira/browse/SOLR-9038
> Project: Solr
>  Issue Type: New Feature
>  Components: SolrCloud
>Reporter: Hrishikesh Gadre
>Assignee: David Smiley
>Priority: Major
>
> Currently work is under-way to implement backup/restore API for Solr cloud 
> (SOLR-5750). SOLR-5750 is about providing an ability to "copy" index files 
> and collection metadata to a configurable location. 
> In addition to this, we should also provide a facility to create "named" 
> snapshots for Solr collection. Here by "snapshot" I mean configuring the 
> underlying Lucene IndexDeletionPolicy to not delete a specific commit point 
> (e.g. using PersistentSnapshotIndexDeletionPolicy). This should not be 
> confused with SOLR-5340 which implements core level "backup" functionality.
> The primary motivation of this feature is to decouple recording/preserving a 
> known consistent state of a collection from actually "copying" the relevant 
> files to a physically separate location. This decoupling has a number of 
> advantages:
> - We can use specialized data-copying tools for transferring Solr index 
> files. e.g. in Hadoop environment, typically 
> [distcp|https://hadoop.apache.org/docs/r1.2.1/distcp2.html] tool is used to 
> copy files from one location to other. This tool provides various options to 
> configure degree of parallelism, bandwidth usage as well as integration with 
> different types and versions of file systems (e.g. AWS S3, Azure Blob store 
> etc.)
> - This separation of concern would also help Solr to focus on the key 
> functionality (i.e. querying and indexing) while delegating the copy 
> operation to the tools built for that purpose.
> - Users can decide if/when to copy the data files, as opposed to creating a 
> snapshot. e.g. a user may want to create a snapshot of a collection before 
> making an experimental change (e.g. updating/deleting docs, schema change 
> etc.). If the experiment is successful, he can delete the snapshot (without 
> having to copy the files). If the experiment fails, he can copy the 
> files associated with the snapshot and restore.
> Note that the Apache Blur project also provides a similar feature: 
> [BLUR-132|https://issues.apache.org/jira/browse/BLUR-132]
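The bookkeeping described above can be modeled with a toy registry of named snapshots. The real mechanism would rely on Lucene's PersistentSnapshotIndexDeletionPolicy to pin commit points on disk; this sketch only illustrates the name-to-commit mapping, not any actual Solr/Lucene API:

```python
# Toy model of named collection snapshots (illustrative only). A real
# deletion policy would refuse to delete any commit generation that is
# still pinned by a named snapshot.
class SnapshotRegistry:
    def __init__(self):
        self._snapshots = {}  # snapshot name -> pinned commit generation

    def create(self, name, commit_gen):
        self._snapshots[name] = commit_gen

    def delete(self, name):
        # Deleting a snapshot merely unpins the commit; no files are copied.
        self._snapshots.pop(name, None)

    def pinned_commits(self):
        return set(self._snapshots.values())

reg = SnapshotRegistry()
reg.create("before-experiment", 42)
print(reg.pinned_commits())  # {42}
reg.delete("before-experiment")
print(reg.pinned_commits())  # set()
```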





