[GitHub] lucene-solr pull request #317: LUCENE-8145: OffsetsEnum is now unitary

2018-02-01 Thread jimczi
Github user jimczi commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/317#discussion_r165576425
  
--- Diff: 
lucene/highlighter/src/java/org/apache/lucene/search/uhighlight/OffsetsEnum.java
 ---
@@ -124,34 +123,122 @@ public PostingsEnum getPostingsEnum() {
 
     @Override
     public boolean nextPosition() throws IOException {
-      if (posCounter < postingsEnum.freq()) {
-        posCounter++;
+      if (posCounter > 0) {
+        posCounter--;
         postingsEnum.nextPosition(); // note: we don't need to save the position
         return true;
       } else {
         return false;
       }
     }

+    @Override
+    public BytesRef getTerm() throws IOException {
+      return term;
+    }
+
+    @Override
+    public int startOffset() throws IOException {
+      return postingsEnum.startOffset();
+    }
+
+    @Override
+    public int endOffset() throws IOException {
+      return postingsEnum.endOffset();
+    }
+
     @Override
     public int freq() throws IOException {
-      return postingsEnum.freq();
+      return freq;
+    }
+  }
+
+  public static final OffsetsEnum EMPTY = new OffsetsEnum() {
+    @Override
+    public boolean nextPosition() throws IOException {
+      return false;
     }

     @Override
     public BytesRef getTerm() throws IOException {
-      return term;
+      throw new UnsupportedOperationException();
     }

     @Override
     public int startOffset() throws IOException {
-      return postingsEnum.startOffset();
+      throw new UnsupportedOperationException();
     }

     @Override
     public int endOffset() throws IOException {
-      return postingsEnum.endOffset();
+      throw new UnsupportedOperationException();
+    }
+
+    @Override
+    public int freq() throws IOException {
+      return 0;
+    }
+
+  };
+
+  public static class MultiOffsetsEnum extends OffsetsEnum {
+
+    private final PriorityQueue<OffsetsEnum> queue;
+    private boolean started = false;
+
+    public MultiOffsetsEnum(List<OffsetsEnum> inner) throws IOException {
+      this.queue = new PriorityQueue<>();
+      for (OffsetsEnum oe : inner) {
+        if (oe.nextPosition())
+          this.queue.add(oe);
+      }
+    }
+
+    @Override
+    public boolean nextPosition() throws IOException {
+      if (started == false) {
+        started = true;
+        return this.queue.size() > 0;
+      }
+      if (this.queue.size() > 0) {
+        OffsetsEnum top = this.queue.poll();
+        if (top.nextPosition()) {
+          this.queue.add(top);
+          return true;
+        }
+        else {
+          top.close();
+        }
+        return this.queue.size() > 0;
+      }
+      return false;
+    }
+
+    @Override
+    public BytesRef getTerm() throws IOException {
+      return this.queue.peek().getTerm();
     }

+    @Override
+    public int startOffset() throws IOException {
+      return this.queue.peek().startOffset();
+    }
+
+    @Override
+    public int endOffset() throws IOException {
+      return this.queue.peek().endOffset();
+    }
+
+    @Override
+    public int freq() throws IOException {
+      return this.queue.peek().freq();
+    }
+
+    @Override
+    public void close() throws IOException {
+      // most child enums will have been closed in .nextPosition()
+      // here all remaining non-exhausted enums are closed
+      IOUtils.close(queue);
+    }
--- End diff --

++
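The MultiOffsetsEnum added above is essentially a k-way merge of per-term offset iterators driven by a PriorityQueue: prime every child with nextPosition(), keep the live ones in the queue, then repeatedly poll, consume, advance, and re-add. Below is a minimal, self-contained sketch of that pattern outside Lucene; OffsetCursor and MergeOffsetsDemo are invented names for illustration, not the highlighter's API.

{code:java}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.PriorityQueue;

// Stand-in for an offsets iterator: walks a pre-sorted array of start offsets for one term.
class OffsetCursor implements Comparable<OffsetCursor> {
  final String term;
  final int[] starts;
  int idx = -1;

  OffsetCursor(String term, int... starts) { this.term = term; this.starts = starts; }

  boolean nextPosition() { return ++idx < starts.length; } // advance; false once exhausted
  int startOffset() { return starts[idx]; }

  @Override
  public int compareTo(OffsetCursor other) { // the queue orders cursors by current offset
    return Integer.compare(startOffset(), other.startOffset());
  }
}

public class MergeOffsetsDemo {
  public static void main(String[] args) {
    List<OffsetCursor> inner = Arrays.asList(
        new OffsetCursor("apache", 3, 40),
        new OffsetCursor("lucene", 10, 12, 55));

    // Same shape as MultiOffsetsEnum's constructor: prime each child, queue the live ones.
    PriorityQueue<OffsetCursor> queue = new PriorityQueue<>();
    for (OffsetCursor c : inner) {
      if (c.nextPosition()) {
        queue.add(c);
      }
    }

    // Emit hits in global offset order: poll the smallest, consume it, re-add if not exhausted.
    List<String> merged = new ArrayList<>();
    while (!queue.isEmpty()) {
      OffsetCursor top = queue.poll();
      merged.add(top.term + "@" + top.startOffset());
      if (top.nextPosition()) {
        queue.add(top);
      }
    }
    System.out.println(merged); // [apache@3, lucene@10, lucene@12, apache@40, lucene@55]
  }
}
{code}

The real nextPosition() folds the poll/advance/re-add step into the enum itself (hence the started flag) and closes children as they are exhausted, but the queue discipline is the same.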





[jira] [Closed] (SOLR-11848) User guide update on working with XML updates

2018-02-01 Thread Dariusz Wojtas (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dariusz Wojtas closed SOLR-11848.
-

> User guide update on working with XML updates
> -
>
> Key: SOLR-11848
> URL: https://issues.apache.org/jira/browse/SOLR-11848
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.2
>Reporter: Dariusz Wojtas
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.3
>
>
> I have added some additional information to the user guide in the repository:
>   https://github.com/apache/lucene-solr/pull/303
> This covers info about:
> # grouping XML commands with the  element
> # a more efficient way of working with curl for large files - the original 
> approach described in the guide with _--data-binary_ results in curl 
> out-of-memory errors for large files being uploaded






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 434 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/434/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

8 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_767EAC9DF4E83CBD-001\testPostingsFormat.testExact-002:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_767EAC9DF4E83CBD-001\testPostingsFormat.testExact-002

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_767EAC9DF4E83CBD-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_767EAC9DF4E83CBD-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_767EAC9DF4E83CBD-001\testPostingsFormat.testExact-002:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_767EAC9DF4E83CBD-001\testPostingsFormat.testExact-002
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_767EAC9DF4E83CBD-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.blockterms.TestVarGapDocFreqIntervalPostingsFormat_767EAC9DF4E83CBD-001

at __randomizedtesting.SeedInfo.seed([767EAC9DF4E83CBD]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.lucene.index.TestDemoParallelLeafReader.testRandomMultipleSchemaGensSameField

Error Message:
java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestDemoParallelLeafReader_D59C6D6C1511C440-001\tempDir-005\segs\8gdeqevih9gdqp4g1yzfs5ymc_4

Stack Trace:
java.lang.RuntimeException: java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J1\temp\lucene.index.TestDemoParallelLeafReader_D59C6D6C1511C440-001\tempDir-005\segs\8gdeqevih9gdqp4g1yzfs5ymc_4
at 
__randomizedtesting.SeedInfo.seed([D59C6D6C1511C440:85CD66ADE2E79559]:0)
at 
org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader$ParallelLeafDirectoryReader$1.wrap(TestDemoParallelLeafReader.java:203)
at 
org.apache.lucene.index.FilterDirectoryReader$SubReaderWrapper.wrap(FilterDirectoryReader.java:56)
at 
org.apache.lucene.index.FilterDirectoryReader$SubReaderWrapper.access$000(FilterDirectoryReader.java:51)
at 
org.apache.lucene.index.FilterDirectoryReader.(FilterDirectoryReader.java:83)
at 
org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader$ParallelLeafDirectoryReader.(TestDemoParallelLeafReader.java:195)
at 
org.apache.lucene.index.TestDemoParallelLeafReader$ReindexingReader$ParallelLeafDirectoryReader.doWrapDirectoryReader(TestDemoParallelLeafReader.java:211)
at 
org.apache.lucene.index.FilterDirectoryReader.wrapDirectoryReader(FilterDirectoryReader.java:99)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 347 - Still Unstable

2018-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/347/

16 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataV1

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([72C7600CE6E2864C:A3A7DEC5C64E9515]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.AliasIntegrationTest.checkFooAndBarMeta(AliasIntegrationTest.java:283)
at 
org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataV1(AliasIntegrationTest.java:251)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.AliasIntegrationTest.testModifyMetadataV2

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([72C7600CE6E2864C:FF6A72BEF851AED7]:0)
at 

[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-10-ea+41) - Build # 1279 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1279/
Java: 64bit/jdk-10-ea+41 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at https://127.0.0.1:42417/solr/awhollynewcollection_0: No 
registered leader was found after waiting for 4000ms , collection: 
awhollynewcollection_0 slice: shard2 saw 
state=DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/6)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{   
"range":"8000-d554",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"awhollynewcollection_0_shard1_replica_n1", 
  "base_url":"https://127.0.0.1:42371/solr;,   
"node_name":"127.0.0.1:42371_solr",   "state":"active",   
"type":"NRT",   "leader":"true"}, "core_node5":{   
"core":"awhollynewcollection_0_shard1_replica_n2",   
"base_url":"https://127.0.0.1:42417/solr;,   
"node_name":"127.0.0.1:42417_solr",   "state":"active",   
"type":"NRT"}}}, "shard2":{   "range":"d555-2aa9",   
"state":"active",   "replicas":{ "core_node7":{   
"core":"awhollynewcollection_0_shard2_replica_n4",   
"base_url":"https://127.0.0.1:41167/solr;,   
"node_name":"127.0.0.1:41167_solr",   "state":"active",   
"type":"NRT"}, "core_node9":{   
"core":"awhollynewcollection_0_shard2_replica_n6",   
"base_url":"https://127.0.0.1:36807/solr;,   
"node_name":"127.0.0.1:36807_solr",   "state":"down",   
"type":"NRT"}}}, "shard3":{   "range":"2aaa-7fff",   
"state":"active",   "replicas":{ "core_node11":{   
"core":"awhollynewcollection_0_shard3_replica_n8",   
"base_url":"https://127.0.0.1:42371/solr;,   
"node_name":"127.0.0.1:42371_solr",   "state":"active",   
"type":"NRT"}, "core_node12":{   
"core":"awhollynewcollection_0_shard3_replica_n10",   
"base_url":"https://127.0.0.1:42417/solr;,   
"node_name":"127.0.0.1:42417_solr",   "state":"active",   
"type":"NRT",   "leader":"true",   "router":{"name":"compositeId"}, 
  "maxShardsPerNode":"2",   "autoAddReplicas":"false",   "nrtReplicas":"2",   
"tlogReplicas":"0"} with live_nodes=[127.0.0.1:41167_solr, 
127.0.0.1:36807_solr, 127.0.0.1:42417_solr, 127.0.0.1:42371_solr]

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at https://127.0.0.1:42417/solr/awhollynewcollection_0: No 
registered leader was found after waiting for 4000ms , collection: 
awhollynewcollection_0 slice: shard2 saw 
state=DocCollection(awhollynewcollection_0//collections/awhollynewcollection_0/state.json/6)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
"shard1":{
  "range":"8000-d554",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"awhollynewcollection_0_shard1_replica_n1",
  "base_url":"https://127.0.0.1:42371/solr;,
  "node_name":"127.0.0.1:42371_solr",
  "state":"active",
  "type":"NRT",
  "leader":"true"},
"core_node5":{
  "core":"awhollynewcollection_0_shard1_replica_n2",
  "base_url":"https://127.0.0.1:42417/solr;,
  "node_name":"127.0.0.1:42417_solr",
  "state":"active",
  "type":"NRT"}}},
"shard2":{
  "range":"d555-2aa9",
  "state":"active",
  "replicas":{
"core_node7":{
  "core":"awhollynewcollection_0_shard2_replica_n4",
  "base_url":"https://127.0.0.1:41167/solr;,
  "node_name":"127.0.0.1:41167_solr",
  "state":"active",
  "type":"NRT"},
"core_node9":{
  "core":"awhollynewcollection_0_shard2_replica_n6",
  "base_url":"https://127.0.0.1:36807/solr;,
  "node_name":"127.0.0.1:36807_solr",
  "state":"down",
  "type":"NRT"}}},
"shard3":{
  "range":"2aaa-7fff",
  "state":"active",
  "replicas":{
"core_node11":{
  "core":"awhollynewcollection_0_shard3_replica_n8",
  "base_url":"https://127.0.0.1:42371/solr;,
  "node_name":"127.0.0.1:42371_solr",
  "state":"active",
  "type":"NRT"},
"core_node12":{
  "core":"awhollynewcollection_0_shard3_replica_n10",
  "base_url":"https://127.0.0.1:42417/solr;,
  "node_name":"127.0.0.1:42417_solr",
  "state":"active",
  "type":"NRT",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"} with live_nodes=[127.0.0.1:41167_solr, 

[jira] [Resolved] (SOLR-11471) MoveReplicaSuggester should suggest to move a leader only if no other replica is available

2018-02-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11471.
---
Resolution: Duplicate

> MoveReplicaSuggester should suggest to move a leader only if no other replica 
> is available
> --
>
> Key: SOLR-11471
> URL: https://issues.apache.org/jira/browse/SOLR-11471
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>







[jira] [Assigned] (SOLR-11471) MoveReplicaSuggester should suggest to move a leader only if no other replica is available

2018-02-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-11471:
-

Assignee: Noble Paul

> MoveReplicaSuggester should suggest to move a leader only if no other replica 
> is available
> --
>
> Key: SOLR-11471
> URL: https://issues.apache.org/jira/browse/SOLR-11471
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>







[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-9.0.1) - Build # 7150 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7150/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.codecs.memory.TestFSTPostingsFormat

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001\testPostingsFormat-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001\testPostingsFormat-002

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001\testPostingsFormat-002\_0_FST50_0.pay:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001\testPostingsFormat-002\_0_FST50_0.pay

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001\testPostingsFormat-002:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001\testPostingsFormat-002
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001\testPostingsFormat-002\_0_FST50_0.pay:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001\testPostingsFormat-002\_0_FST50_0.pay
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\codecs\test\J0\temp\lucene.codecs.memory.TestFSTPostingsFormat_FBDCB05DA3E02891-001

at __randomizedtesting.SeedInfo.seed([FBDCB05DA3E02891]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([C8E6E5493C3CA74F:40B2DA9392C0CAB7]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 

[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2018-02-01 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349816#comment-16349816
 ] 

Amrit Sarkar commented on SOLR-11724:
-

[~WebHomer],

Can you elaborate on your cluster details: number of shards, number of 
replicas? I am not able to follow the above statement about half the data 
being searchable. Something that used to work in 6.1 and no longer works in 
7.2 is a concern. If the replicators are started successfully (CDCR=START), 
there is no reason whatsoever the data won't get replicated to the target 
shards and won't be searchable after a commit.

If not a single document is indexed in the source after bootstrapping is done 
on the target, the non-leader replicas of the target collection will have an 
empty index. The immediate workaround is to have 1 replica per shard on the 
target, so one will not see empty indexes on any core. Let me know if the 
above doesn't make sense.

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.1
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-11724.patch, SOLR-11724.patch
>
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into the source, stop indexing 
> and then start CDCR, bootstrapping only copies the index to the leader node 
> of each shard of the collection; the followers never receive the documents / 
> index unless at least one document is inserted again on the source, which 
> propagates to the target and makes the target collection trigger index 
> replication to the followers.
> This behavior needs to be addressed in a proper manner, either at the target 
> collection or while bootstrapping.






[jira] [Commented] (SOLR-11925) Auto delete oldest collections in a time routed alias

2018-02-01 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349809#comment-16349809
 ] 

David Smiley commented on SOLR-11925:
-

The attached patch deletes old collections immediately before creating a new 
collection.  Includes test.  The only nocommit is to rename 
RoutedAliasCreateCollectionCmd to MaintainRoutedAliasCmd since it doesn't just 
create now.

Misc Time Routed Alias changes:
* 
org.apache.solr.client.solrj.request.CollectionAdminRequest.ModifyAlias#addMetadata
 now returns "this" to support chained method calls
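A quick illustration of that "returns this" style, using an invented AliasProps class rather than the real ModifyAlias request (the class name, method, and metadata keys below are placeholders for illustration, not Solr's API):

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical builder whose addMetadata returns "this" so calls can be chained.
class AliasProps {
  private final Map<String, String> metadata = new LinkedHashMap<>();

  AliasProps addMetadata(String key, String value) {
    metadata.put(key, value);
    return this; // returning "this" is what enables the chaining below
  }

  @Override
  public String toString() { return metadata.toString(); }

  public static void main(String[] args) {
    AliasProps props = new AliasProps()
        .addMetadata("router.interval", "+1DAY")
        .addMetadata("router.maxEntries", "30");
    System.out.println(props); // {router.interval=+1DAY, router.maxEntries=30}
  }
}
{code}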

Included with this patch are some internal refactorings/improvements:
* CollectionsHandler.handleResponse is now named sendToOCPQueue (I chose the 
same name as operation.sendToOCPQueue boolean field).  The API contract of this 
method is simpler to understand as well; it does not take in the 
SolrQueryResponse.
* Added SolrResponse.getException that parses out the "exception" with response 
code & message to create a SolrException.  This logic used to be in 
handleResponse/sendToOCPQueue but it's generally useful and helped enable the 
improved method signature.  Two of sendToOCPQueue()'s callers use this method.
* DateMathParser now has a constructor that takes *both* "now" and the 
timezone; formerly it only had one for the timezone and it required that you 
call setNow if you wanted to do that.  I felt that was a weird asymmetry; both 
can be done through the constructor now.
 

> Auto delete oldest collections in a time routed alias
> -
>
> Key: SOLR-11925
> URL: https://issues.apache.org/jira/browse/SOLR-11925
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-11925.patch
>
>
> The oldest collections in a Time Routed Alias should be automatically 
> deleted, according to new alias metadata that establishes how long to keep them.  It can 
> be checked as new data flows in at TimeRoutedAliasUpdateProcessor and thus it 
> won't occur if new data isn't coming in.






[jira] [Updated] (SOLR-11925) Auto delete oldest collections in a time routed alias

2018-02-01 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11925?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11925:

Attachment: SOLR-11925.patch

> Auto delete oldest collections in a time routed alias
> -
>
> Key: SOLR-11925
> URL: https://issues.apache.org/jira/browse/SOLR-11925
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Fix For: 7.3
>
> Attachments: SOLR-11925.patch
>
>
> The oldest collections in a Time Routed Alias should be automatically 
> deleted, according to new alias metadata that establishes how long to keep them.  It can 
> be checked as new data flows in at TimeRoutedAliasUpdateProcessor and thus it 
> won't occur if new data isn't coming in.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21380 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21380/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([F71234E2E4A41B0E:7F460B384A5876F6]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13421 lines...]
   [junit4] Suite: org.apache.solr.cloud.ReplaceNodeNoTargetTest
   [junit4]   2> 1880757 INFO  
(SUITE-ReplaceNodeNoTargetTest-seed#[F71234E2E4A41B0E]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   

[jira] [Resolved] (SOLR-10029) Fix search link on http://lucene.apache.org/

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-10029.
--
Resolution: Cannot Reproduce

The project names may be lower case, but that doesn't appear to be causing any 
problems (perhaps it did in the past, but it isn't now). Resolving as Cannot 
Reproduce.

> Fix search link on http://lucene.apache.org/
> 
>
> Key: SOLR-10029
> URL: https://issues.apache.org/jira/browse/SOLR-10029
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: website
>Reporter: Tien Nguyen Manh
>Priority: Major
> Attachments: SOLR-10029.patch
>
>
> The current link to http://search-lucene.com is
> http://search-lucene.com/lucene?q=apache=sl
> http://search-lucene.com/solr?q=apache
> The project names should be uppercase
> http://search-lucene.com/Lucene?q=apache=sl
> http://search-lucene.com/Solr?q=apache






[jira] [Updated] (SOLR-10057) Evaluate/Design a MatchMost/MatchAll doc set

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-10057:
-
Issue Type: New Feature  (was: Bug)

> Evaluate/Design a MatchMost/MatchAll doc set
> 
>
> Key: SOLR-10057
> URL: https://issues.apache.org/jira/browse/SOLR-10057
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Sun
>Priority: Major
>
> Per the discussion in SOLR-9764 (Design a memory efficient DocSet if a query 
> returns all docs): if a DocSet matches most documents, the current 
> implementation using BitDocSet can be optimized. This JIRA is to evaluate and 
> design an optimized DocSet for these use cases. 
> The basic idea is to use an inverted integer list that enumerates all doc ids 
> that don't match. If the DocSet matches most docs, it can be pretty 
> efficient. Meanwhile, other ideas are welcome.
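A minimal, self-contained sketch of that inverted idea (the class and method names below are invented for illustration and are not Solr's DocSet API):

{code:java}
import java.util.Arrays;

// Represents "all docs in [0, maxDoc) except a small sorted list of misses".
// Memory is proportional to the number of non-matching docs, not to maxDoc.
class InvertedDocSet {
  private final int maxDoc;
  private final int[] misses; // sorted doc ids that do NOT match

  InvertedDocSet(int maxDoc, int[] sortedMisses) {
    this.maxDoc = maxDoc;
    this.misses = sortedMisses;
  }

  boolean exists(int doc) {
    return doc >= 0 && doc < maxDoc && Arrays.binarySearch(misses, doc) < 0;
  }

  int size() {
    return maxDoc - misses.length;
  }

  public static void main(String[] args) {
    // 1,000,000 docs, only three of them fail the filter.
    InvertedDocSet set = new InvertedDocSet(1_000_000, new int[] {7, 42, 999_999});
    System.out.println(set.exists(42)); // false (listed as a miss)
    System.out.println(set.exists(43)); // true  (everything else matches)
    System.out.println(set.size());     // 999997
  }
}
{code}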






[jira] [Updated] (SOLR-10058) Evaluate/Design a DocSet using Lucene RoaringIdDocSet

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-10058:
-
Issue Type: New Feature  (was: Bug)

> Evaluate/Design a DocSet using Lucene RoaringIdDocSet 
> --
>
> Key: SOLR-10058
> URL: https://issues.apache.org/jira/browse/SOLR-10058
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Sun
>Priority: Major
>
> Per the discussion in SOLR-9764 (Design a memory efficient DocSet if a query 
> returns all docs): Lucene's RoaringDocIdSet can be memory efficient in a 
> smart way, at the cost of some extra CPU cycles depending on the use case, 
> and may be a good choice for some use cases. This JIRA is to evaluate and 
> design a DocSet that takes advantage of Lucene's RoaringDocIdSet and 
> understand the trade-off.
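For reference, a short sketch of building and iterating the Lucene doc id set discussed here. It assumes lucene-core on the classpath and uses the RoaringDocIdSet.Builder API as I recall it, so treat the exact signatures as an assumption:

{code:java}
import java.io.IOException;

import org.apache.lucene.search.DocIdSetIterator;
import org.apache.lucene.util.RoaringDocIdSet;

public class RoaringDocIdSetDemo {
  public static void main(String[] args) throws IOException {
    int maxDoc = 1_000_000;

    // The builder requires doc ids to be added in increasing order.
    RoaringDocIdSet.Builder builder = new RoaringDocIdSet.Builder(maxDoc);
    for (int doc = 0; doc < maxDoc; doc++) {
      if (doc % 1000 != 0) { // a "matches most docs" style set
        builder.add(doc);
      }
    }
    RoaringDocIdSet docs = builder.build();

    // Iterate the first few matching docs.
    DocIdSetIterator it = docs.iterator();
    for (int i = 0; i < 5 && it.nextDoc() != DocIdSetIterator.NO_MORE_DOCS; i++) {
      System.out.println("match: " + it.docID());
    }
    System.out.println("approx bytes used: " + docs.ramBytesUsed());
  }
}
{code}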






[jira] [Commented] (LUCENE-8152) Simplify conditionals in JoinUtil

2018-02-01 Thread Horatiu Lazu (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349753#comment-16349753
 ] 

Horatiu Lazu commented on LUCENE-8152:
--

Similar patterns seem to exist in other parts of the code as well.

> Simplify conditionals in JoinUtil 
> --
>
> Key: LUCENE-8152
> URL: https://issues.apache.org/jira/browse/LUCENE-8152
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Horatiu Lazu
>Priority: Trivial
> Attachments: LUCENE-8152.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following could be simplified, on line 249:
> {code:java}
> int dvDocID = numericDocValues.docID();
> if (dvDocID < doc) {
>   dvDocID = numericDocValues.advance(doc);
> }
> long value;
> if (dvDocID == doc) {
>   value = numericDocValues.longValue();
> } else {
>   value = 0;
> }
> {code}
> To:
> {code:java}
> long value = 0;
> if (numericDocValues.advanceExact(doc)) {
>   value = numericDocValues.longValue();
> }
> {code}
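The two forms are equivalent because advanceExact(doc) both positions the iterator and reports whether doc has a value. Below is a tiny toy model of that contract; ToyNumericDocValues is an invented stand-in for illustration, not Lucene's NumericDocValues:

{code:java}
// Toy sparse doc values: a sorted list of docs that have a value, mimicking the
// advance()/advanceExact() contract the JoinUtil snippet relies on.
class ToyNumericDocValues {
  private final int[] docs;    // sorted doc ids that have a value
  private final long[] values; // value for docs[i]
  private int pos = -1;        // -1 = unpositioned

  ToyNumericDocValues(int[] docs, long[] values) { this.docs = docs; this.values = values; }

  int docID() {
    if (pos < 0) return -1;
    return pos >= docs.length ? Integer.MAX_VALUE : docs[pos];
  }

  int advance(int target) {          // move to the first doc >= target
    while (docID() < target) pos++;
    return docID();
  }

  boolean advanceExact(int target) { // position at/after target, true iff target has a value
    return advance(target) == target;
  }

  long longValue() { return values[pos]; }

  public static void main(String[] args) {
    int doc = 7; // the doc being joined on; it has no value in this toy data
    ToyNumericDocValues dv1 = new ToyNumericDocValues(new int[] {2, 5, 9}, new long[] {20, 50, 90});
    ToyNumericDocValues dv2 = new ToyNumericDocValues(new int[] {2, 5, 9}, new long[] {20, 50, 90});

    // Original pattern from the issue:
    int dvDocID = dv1.docID();
    if (dvDocID < doc) {
      dvDocID = dv1.advance(doc);
    }
    long before = (dvDocID == doc) ? dv1.longValue() : 0;

    // Simplified pattern:
    long after = dv2.advanceExact(doc) ? dv2.longValue() : 0;

    System.out.println(before + " == " + after); // 0 == 0 (same result when the doc has a value, too)
  }
}
{code}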






[jira] [Updated] (SOLR-10481) Extend Solr sql data type support

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-10481:
-
Issue Type: Improvement  (was: Bug)

> Extend Solr sql data type support
> -
>
> Key: SOLR-10481
> URL: https://issues.apache.org/jira/browse/SOLR-10481
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Michael Suzuki
>Priority: Major
>
> Extend Solr SQL support for different field types.






[jira] [Commented] (SOLR-10488) Remove @file from from the examples

2018-02-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349744#comment-16349744
 ] 

Cassandra Targett commented on SOLR-10488:
--

This issue and SOLR-11373 seem to be near-duplicates, so I'm linking them 
together as at least related.

>  Remove @file from  from the examples
> -
>
> Key: SOLR-10488
> URL: https://issues.apache.org/jira/browse/SOLR-10488
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> This is commented out in the solrconfig.xml
> {{false}}
> So when I go to enable it by making it true, I get this error while creating 
> a core:
> {code}
> Caused by: org.apache.solr.common.SolrException: Error loading solr config 
> from /solr-6.5.0/server/solr/test/conf/solrconfig.xml
>   at 
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:188)
>   at 
> org.apache.solr.core.ConfigSetService.createSolrConfig(ConfigSetService.java:96)
>   at 
> org.apache.solr.core.ConfigSetService.getConfig(ConfigSetService.java:77)
>   ... 39 more
> Caused by: java.lang.IllegalArgumentException: Remove @file from  
> to output messages to solr's logfile
>   at 
> org.apache.solr.update.SolrIndexConfig.(SolrIndexConfig.java:189)
>   at org.apache.solr.core.SolrConfig.(SolrConfig.java:232)
>   at 
> org.apache.solr.core.SolrConfig.readFromResourceLoader(SolrConfig.java:180)
>   ... 41 more
> {code}
> So the solrconfig files that get shipped with Solr should remove the file 
> attribute and say that this is controlled by the log4j.properties file.






[jira] [Resolved] (SOLR-10498) Update AngularJS Admin UI Page copy

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-10498.
--
Resolution: Cannot Reproduce

The 'i' icon is gone since the old UI was removed, so this issue doesn't need 
any further work.

> Update AngularJS Admin UI Page copy
> ---
>
> Key: SOLR-10498
> URL: https://issues.apache.org/jira/browse/SOLR-10498
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 6.5
>Reporter: David Lee
>Priority: Trivial
>
> At https://wiki.apache.org/solr/AngularUI, which is linked from the "i" 
> icon next to the "use original UI" link in the upper right,
> the component "Web gui" does not exist in the SOLR project.






[jira] [Resolved] (SOLR-10563) One of the replica is marked as down for only one collection in the cluster

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-10563.
--
Resolution: Incomplete

Resolving as Incomplete without further information.

> One of the replica is marked as  down for only one collection in the cluster 
> -
>
> Key: SOLR-10563
> URL: https://issues.apache.org/jira/browse/SOLR-10563
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Vijaya Jonnakuti
>Priority: Major
>
> One of the replicas is marked as down for only one collection in the cluster,
> and it only recovers when REQUESTRECOVERY is invoked.
> Is there any configuration that can automatically do the retry?






[jira] [Commented] (LUCENE-8149) Document why NRTCachingDirectory needs to preemptively delete segment files.

2018-02-01 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349708#comment-16349708
 ] 

Erick Erickson commented on LUCENE-8149:


FWIW, simply taking that code out passes the whole test suite. What I have _not_ 
tested is things like killing Solr while actively indexing and the like. I'll 
be able to do some of that ad-hoc testing on Friday.

> Document why NRTCachingDirectory needs to preemptively delete segment files.
> 
>
> Key: LUCENE-8149
> URL: https://issues.apache.org/jira/browse/LUCENE-8149
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> Moving over here from SOLR-11892. After getting through my confusion I've 
> found the following: NRTCachingDirectory.createOutput tries to delete segment 
> files before creating them. We should at least add a bit of commentary as to 
> what the pre-emptive delete is there for since on the surface it's not 
> obvious.
> {code:java}
> try {
>   in.deleteFile(name);
> } catch (IOException ioe) {
>   // This is fine: file may not exist
> }
> {code}
> If I change to using MMapDirectory or NIOFSDirectory these exceptions are not 
> thrown. What's special about NRTCachingDirectory that it needs this when two 
> of the possible underlying FS implementations apparently do not? Or is this 
> necessary for, say, Windows or file systems other than the two I tried? Or is it 
> some interaction between the RAM based segments and segments on disk?






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 938 - Still Failing

2018-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/938/

No tests ran.

Build Log:
[...truncated 28244 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (38.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 30.2 MB in 0.02 sec (1263.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 73.0 MB in 0.07 sec (980.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 83.5 MB in 0.08 sec (1003.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6241 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6241 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (65.5 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 52.6 MB in 0.05 sec (1004.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 151.5 MB in 0.14 sec (1050.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 152.6 MB in 0.16 sec (972.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] *** [WARN] *** Your open file limit is currently 6.  
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or solr.in.sh
   [smoker] *** [WARN] ***  Your Max Processes Limit is currently 10240. 
   [smoker]  It should be set to 65000 to avoid operational disruption. 
   [smoker]  If you no longer wish to see this warning, set SOLR_ULIMIT_CHECKS 
to false in your profile or 

[jira] [Resolved] (SOLR-10762) The index time boosting is getting nullified during atomic updates

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10762?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-10762.
--
Resolution: Won't Fix

Index-time boosts have gone away.

> The index time boosting is getting nullified during atomic updates
> --
>
> Key: SOLR-10762
> URL: https://issues.apache.org/jira/browse/SOLR-10762
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: pallavi
>Priority: Major
>







[jira] [Updated] (SOLR-10780) A new collection property autoRebalanceLeaders

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-10780:
-
Issue Type: New Feature  (was: Bug)

> A new collection property autoRebalanceLeaders 
> ---
>
> Key: SOLR-10780
> URL: https://issues.apache.org/jira/browse/SOLR-10780
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Noble Paul
>Priority: Major
>
> In SolrCloud, the first replica to get started in a given shard becomes the 
> leader of that shard. This is a problem during cluster restarts: the first 
> node to get started has all the leaders, and that node ends up being very 
> heavily loaded. The solution we have today is to invoke a REBALANCELEADERS 
> command explicitly so that the system ends up with a uniform distribution of 
> leaders across nodes. This is a manual operation, and we can make the system 
> do it automatically. 
> So each collection can have an {{autoRebalanceLeaders}} flag. If it is set 
> to true, whenever a replica becomes {{ACTIVE}} in a shard, a 
> {{REBALANCELEADERS}} is invoked for that shard. 






[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1278 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1278/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeLostTriggerRestoreState

Error Message:
The trigger did not fire at all

Stack Trace:
java.lang.AssertionError: The trigger did not fire at all
at 
__randomizedtesting.SeedInfo.seed([F8655AEAAE497F5A:D39A8FB134316A8A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testNodeLostTriggerRestoreState(TriggerIntegrationTest.java:368)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testMetricTrigger

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([F8655AEAAE497F5A:42696D65F1A1A915]:0)
at 

[jira] [Commented] (SOLR-10855) Null pointer exceptions in CartesianProductStream toExpression and explain methods

2018-02-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349697#comment-16349697
 ] 

Cassandra Targett commented on SOLR-10855:
--

[~joel.bernstein], can this one be resolved?

> Null pointer exceptions in CartesianProductStream toExpression and explain 
> methods
> --
>
> Key: SOLR-10855
> URL: https://issues.apache.org/jira/browse/SOLR-10855
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-10855.patch
>
>
> Currently, if a sort is not specified with the cartesianProduct function, the 
> toExpression and explain methods throw null pointer exceptions.






[jira] [Created] (SOLR-11938) Slave data is deleted when replication in Master is disabled

2018-02-01 Thread Den (JIRA)
Den created SOLR-11938:
--

 Summary: Slave data is deleted when replication in Master is 
disabled
 Key: SOLR-11938
 URL: https://issues.apache.org/jira/browse/SOLR-11938
 Project: Solr
  Issue Type: Bug
Affects Versions: 7.1
Reporter: Den
 Attachments: solr_slave_log.png

In master-slave replication, when replication on the master is disabled via the 
UI or the URL (server/solr/solr_core/replication?command=disablereplication), 
the slave triggers a delete of its data. Attached are the logs from the slave 
after disabling replication on the master.
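For anyone trying to reproduce this from SolrJ rather than the raw URL, a rough 
sketch (the master URL and core name are placeholders) would be:
{code:java}
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class DisableReplicationSketch {
  public static void main(String[] args) throws Exception {
    // Point at the master core; same endpoint and command as the URL quoted above.
    try (HttpSolrClient master =
             new HttpSolrClient.Builder("http://localhost:8983/solr/solr_core").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("command", "disablereplication");
      System.out.println(master.request(
          new GenericSolrRequest(SolrRequest.METHOD.GET, "/replication", params)));
    }
  }
}
{code}
After issuing this against the master, the slave's logs can be watched for the 
unexpected delete described in the report.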






[jira] [Updated] (LUCENE-4198) Allow codecs to index term impacts

2018-02-01 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-4198:
---
Attachment: 
TestSimpleTextPostingsFormat.asf.nightly.master.1466.consoleText.excerpt.txt

TestSimpleTextPostingsFormat.sarowe.jenkins.nightly.master.681.consoleText.excerpt.txt

> Allow codecs to index term impacts
> --
>
> Key: LUCENE-4198
> URL: https://issues.apache.org/jira/browse/LUCENE-4198
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/index
>Reporter: Robert Muir
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: LUCENE-4198-BMW.patch, LUCENE-4198.patch, 
> LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, 
> LUCENE-4198_flush.patch, 
> TestSimpleTextPostingsFormat.asf.nightly.master.1466.consoleText.excerpt.txt, 
> TestSimpleTextPostingsFormat.sarowe.jenkins.nightly.master.681.consoleText.excerpt.txt
>
>
> Subtask of LUCENE-4100.
> That's an example of something similar to impact indexing (though his 
> implementation currently stores a max for the entire term, the problem is the 
> same).
> We can imagine other similar algorithms too: I think the codec API should be 
> able to support these.
> Currently it really doesn't: Stefan worked around the problem by providing a 
> tool to 'rewrite' your index; he passes the IndexReader and Similarity to it. 
> But it would be better if we fixed the codec API.
> One problem is that the Postings writer needs to have access to the 
> Similarity. Another problem is that it needs access to the term and 
> collection statistics up front, rather than after the fact.
> This might have some cost (hopefully minimal), so I'm thinking of experimenting 
> with these changes in a branch to see if we can make them work well.






[jira] [Commented] (LUCENE-4198) Allow codecs to index term impacts

2018-02-01 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-4198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349648#comment-16349648
 ] 

Steve Rowe commented on LUCENE-4198:


{{TestSimpleTextPostingsFormat}} has failed reproducibly a couple of times in 
Jenkins today (ASF and my Jenkins), and {{git bisect}} blames the {{f410df8}} 
commit on this issue for the one from my Jenkins.

I'll attach the relevant (voluminous) output from 
[https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1466/] and from 
my Jenkins.


> Allow codecs to index term impacts
> --
>
> Key: LUCENE-4198
> URL: https://issues.apache.org/jira/browse/LUCENE-4198
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: core/index
>Reporter: Robert Muir
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: LUCENE-4198-BMW.patch, LUCENE-4198.patch, 
> LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, LUCENE-4198.patch, 
> LUCENE-4198_flush.patch
>
>
> Subtask of LUCENE-4100.
> That's an example of something similar to impact indexing (though his 
> implementation currently stores a max for the entire term, the problem is the 
> same).
> We can imagine other similar algorithms too: I think the codec API should be 
> able to support these.
> Currently it really doesn't: Stefan worked around the problem by providing a 
> tool to 'rewrite' your index; he passes the IndexReader and Similarity to it. 
> But it would be better if we fixed the codec API.
> One problem is that the Postings writer needs to have access to the 
> Similarity. Another problem is that it needs access to the term and 
> collection statistics up front, rather than after the fact.
> This might have some cost (hopefully minimal), so I'm thinking of experimenting 
> with these changes in a branch to see if we can make them work well.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.1) - Build # 21379 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21379/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([3D413FF6EE589A85:B515002C40A4F77D]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.TriggerIntegrationTest.testEventQueue

Error Message:
org.apache.solr.cloud.autoscaling.SearchRateTrigger$SearchRateEvent cannot be 
cast to org.apache.solr.cloud.autoscaling.NodeAddedTrigger$NodeAddedEvent

Stack Trace:
java.lang.ClassCastException: 

[JENKINS] Lucene-Solr-Tests-master - Build # 2291 - Unstable

2018-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2291/

8 tests failed.
FAILED:  org.apache.solr.cloud.HttpPartitionTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([C949A6C033D8021E]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([C949A6C033D8021E]:0)


FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Collection not found: c8n_1x3_lf

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: c8n_1x3_lf
at 
__randomizedtesting.SeedInfo.seed([C949A6C033D8021E:411D991A9D246FE6]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:688)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:675)
at 
org.apache.solr.cloud.HttpPartitionTest.sendDoc(HttpPartitionTest.java:664)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:67)
at 
org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Updated] (LUCENE-8152) Simplify conditionals in JoinUtil

2018-02-01 Thread Horatiu Lazu (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Horatiu Lazu updated LUCENE-8152:
-
Attachment: LUCENE-8152.patch

> Simplify conditionals in JoinUtil 
> --
>
> Key: LUCENE-8152
> URL: https://issues.apache.org/jira/browse/LUCENE-8152
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Horatiu Lazu
>Priority: Trivial
> Attachments: LUCENE-8152.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The following could be simplified, on line 249:
> {code:java}
> int dvDocID = numericDocValues.docID();
> if (dvDocID < doc) {
>   dvDocID = numericDocValues.advance(doc);
> }
> long value;
> if (dvDocID == doc) {
>   value = numericDocValues.longValue();
> } else {
>   value = 0;
> }
> {code}
> To:
> {code:java}
> long value = 0;
> if (numericDocValues.advanceExact(doc)) {
>   value = numericDocValues.longValue();
> }
> {code}






[GitHub] lucene-solr pull request #319: LUCENE-8152: Simplify conditional logic using...

2018-02-01 Thread MathBunny
GitHub user MathBunny opened a pull request:

https://github.com/apache/lucene-solr/pull/319

LUCENE-8152: Simplify conditional logic using advanceExact



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MathBunny/lucene-solr lucene-8152

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/319.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #319


commit 5df275357fe567d5cfebe3a9f114955baa6e68ad
Author: MathBunny 
Date:   2018-02-02T00:40:00Z

LUCENE-8152: Simplify conditional logic using advanceExact







[jira] [Created] (LUCENE-8152) Simplify conditionals in JoinUtil

2018-02-01 Thread Horatiu Lazu (JIRA)
Horatiu Lazu created LUCENE-8152:


 Summary: Simplify conditionals in JoinUtil 
 Key: LUCENE-8152
 URL: https://issues.apache.org/jira/browse/LUCENE-8152
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Horatiu Lazu


The following could be simplified, on line 249:
{code:java}
int dvDocID = numericDocValues.docID();
if (dvDocID < doc) {
  dvDocID = numericDocValues.advance(doc);
}
long value;
if (dvDocID == doc) {
  value = numericDocValues.longValue();
} else {
  value = 0;
}
{code}
To:
{code:java}
long value = 0;
if (numericDocValues.advanceExact(doc)) {
  value = numericDocValues.longValue();
}
{code}






[GitHub] lucene-solr pull request #318: LUCENE-8151: Use advanceExact removing redund...

2018-02-01 Thread MathBunny
GitHub user MathBunny opened a pull request:

https://github.com/apache/lucene-solr/pull/318

LUCENE-8151: Use advanceExact removing redundant conditionals



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/MathBunny/lucene-solr master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/318.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #318


commit 94d77a820bc8d7b89d30bed6ca8bf366153d9d40
Author: MathBunny 
Date:   2018-02-02T00:18:36Z

LUCENE-8151: Use advanceExact removing redundant conditionals







[jira] [Updated] (LUCENE-8151) Redundant conditionals in JoinUtil

2018-02-01 Thread Horatiu Lazu (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Horatiu Lazu updated LUCENE-8151:
-
Attachment: LUCENE-8151.patch

> Redundant conditionals in JoinUtil
> --
>
> Key: LUCENE-8151
> URL: https://issues.apache.org/jira/browse/LUCENE-8151
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Horatiu Lazu
>Priority: Trivial
> Attachments: LUCENE-8151.patch
>
>
> JoinUtil has a strange conditional structure, which can be collapsed into 
> one function call:
> On line 200:
> {code:java}
> @Override
> public void collect(int doc) throws IOException {
>   if (doc > sortedNumericDocValues.docID()) {
>     sortedNumericDocValues.advance(doc);
>   }
>   if (doc == sortedNumericDocValues.docID()) {
>     for (int i = 0; i < sortedNumericDocValues.docValueCount(); i++) {
>       long value = sortedNumericDocValues.nextValue();
>       joinValues.add(value);
>       if (needsScore) {
>         scoreAggregator.accept(value, scorer.score());
>       }
>     }
>   }
> }{code}
> Instead, just do advanceExact, which returns a boolean indicating if it was 
> successful:
> {code:java}
> @Override
> public void collect(int doc) throws IOException {
>   if (sortedNumericDocValues.advanceExact(doc)) {
>     for (int i = 0; i < sortedNumericDocValues.docValueCount(); i++) {
>       long value = sortedNumericDocValues.nextValue();
>       joinValues.add(value);
>       if (needsScore) {
>         scoreAggregator.accept(value, scorer.score());
>       }
>     }
>   }
> }
> {code}
> I have the patch ready; it passes unit tests.






[jira] [Created] (LUCENE-8151) Redundant conditionals in JoinUtil

2018-02-01 Thread Horatiu Lazu (JIRA)
Horatiu Lazu created LUCENE-8151:


 Summary: Redundant conditionals in JoinUtil
 Key: LUCENE-8151
 URL: https://issues.apache.org/jira/browse/LUCENE-8151
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Horatiu Lazu


JoinUtil has a strange conditional structure, which can be collapsed into one 
function call:

On line 200:
{code:java}
@Override
public void collect(int doc) throws IOException {
  if (doc > sortedNumericDocValues.docID()) {
    sortedNumericDocValues.advance(doc);
  }
  if (doc == sortedNumericDocValues.docID()) {
    for (int i = 0; i < sortedNumericDocValues.docValueCount(); i++) {
      long value = sortedNumericDocValues.nextValue();
      joinValues.add(value);
      if (needsScore) {
        scoreAggregator.accept(value, scorer.score());
      }
    }
  }
}{code}
Instead, just do advanceExact, which returns a boolean indicating if it was 
successful:
{code:java}
@Override
public void collect(int doc) throws IOException {
  if (sortedNumericDocValues.advanceExact(doc)) {
    for (int i = 0; i < sortedNumericDocValues.docValueCount(); i++) {
      long value = sortedNumericDocValues.nextValue();
      joinValues.add(value);
      if (needsScore) {
        scoreAggregator.accept(value, scorer.score());
      }
    }
  }
}

{code}
I have the patch ready; it passes unit tests.






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 431 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/431/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([F81329981BAC393F:70471642B55054C7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:915)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-11277) Add auto hard commit setting based on tlog size

2018-02-01 Thread Rupa Shankar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349485#comment-16349485
 ] 

Rupa Shankar commented on SOLR-11277:
-

Great point, thanks [~varunthacker]! Updated

> Add auto hard commit setting based on tlog size
> ---
>
> Key: SOLR-11277
> URL: https://issues.apache.org/jira/browse/SOLR-11277
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Rupa Shankar
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-11277.patch, SOLR-11277.patch, 
> max_size_auto_commit.patch
>
>
> When indexing documents of variable sizes and at variable schedules, it can 
> be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. 
> We’ve had some occurrences of really huge tlogs, resulting in serious issues, 
> so in an attempt to avoid this, it would be great to have a “maxSize” setting 
> based on the tlog size on disk. 
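For illustration only, the quantity such a setting would bound - the on-disk 
size of the tlog directory - can be measured with plain JDK calls. The path and 
threshold below are made-up placeholders, not values from this issue:
{code:java}
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class TlogSizeSketch {
  public static void main(String[] args) throws IOException {
    Path tlogDir = Paths.get("/var/solr/data/mycore/data/tlog");   // placeholder path
    long maxSizeBytes = 500L * 1024 * 1024;                        // e.g. a 500 MB cap
    long totalBytes = 0;
    try (DirectoryStream<Path> files = Files.newDirectoryStream(tlogDir)) {
      for (Path file : files) {
        totalBytes += Files.size(file);   // sum the size of every tlog file on disk
      }
    }
    System.out.printf("tlog size: %d bytes, over the cap: %b%n",
        totalBytes, totalBytes > maxSizeBytes);
  }
}
{code}
A maxSize-style trigger would presumably perform a check like this inside Solr 
and force a hard commit once the threshold is crossed.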






[jira] [Updated] (SOLR-11277) Add auto hard commit setting based on tlog size

2018-02-01 Thread Rupa Shankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11277?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rupa Shankar updated SOLR-11277:

Attachment: SOLR-11277.patch

> Add auto hard commit setting based on tlog size
> ---
>
> Key: SOLR-11277
> URL: https://issues.apache.org/jira/browse/SOLR-11277
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Rupa Shankar
>Assignee: Anshum Gupta
>Priority: Major
> Attachments: SOLR-11277.patch, SOLR-11277.patch, 
> max_size_auto_commit.patch
>
>
> When indexing documents of variable sizes and at variable schedules, it can 
> be hard to estimate the optimal auto hard commit maxDocs or maxTime settings. 
> We’ve had some occurrences of really huge tlogs, resulting in serious issues, 
> so in an attempt to avoid this, it would be great to have a “maxSize” setting 
> based on the tlog size on disk. 






[jira] [Updated] (SOLR-11237) Admin UI replication tab iterations sorted in datetime order

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11237:
-
Component/s: Admin UI

> Admin UI replication tab iterations sorted in datetime order
> 
>
> Key: SOLR-11237
> URL: https://issues.apache.org/jira/browse/SOLR-11237
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: master (8.0)
>Reporter: Jesse McLaughlin
>Priority: Trivial
> Attachments: SOLR-11237.patch, replication_tab.png
>
>
> The iterations listed on the replication tab of the Solr Admin UI are not 
> properly sorted in datetime order. The code appears to try to sort them, but 
> it doesn't work.
> A patch to fix this is attached.






[jira] [Commented] (SOLR-11275) Adding diagrams for AutoAddReplica into Solr Ref Guide

2018-02-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349466#comment-16349466
 ] 

Cassandra Targett commented on SOLR-11275:
--

The main reason this has stalled is that the asciidoctor-ant project we use to 
build the PDF promised to add asciidoctor-diagram in an upcoming release, so 
I've been waiting for that to come out since it will make the integration 
easier. If I get time soon, I'll try to get it working without that release, 
since who knows when it's coming out.

> Adding diagrams for AutoAddReplica into Solr Ref Guide
> --
>
> Key: SOLR-11275
> URL: https://issues.apache.org/jira/browse/SOLR-11275
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Mano Kovacs
>Assignee: Cassandra Targett
>Priority: Major
> Attachments: autoaddreplica.png, autoaddreplica.puml, 
> autoaddreplica.puml, plantuml-diagram-test.png
>
>
> Pilot jira for adding PlantUML diagrams for documenting internals.






[jira] [Resolved] (SOLR-11283) Refactor all Stream Evaluators in solrj.io.eval to simplify them

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11283.
--
   Resolution: Fixed
Fix Version/s: 7.1
   master (8.0)

I think this was simply never marked as resolved - it's listed in CHANGES for 
7.1 and was mentioned in the release notes. Please reopen if there's more to do.

> Refactor all Stream Evaluators in solrj.io.eval to simplify them
> 
>
> Key: SOLR-11283
> URL: https://issues.apache.org/jira/browse/SOLR-11283
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Dennis Gove
>Assignee: Dennis Gove
>Priority: Major
> Fix For: master (8.0), 7.1
>
> Attachments: SOLR-11283.patch, SOLR-11283.patch
>
>
> As Stream Evaluators have been evolving we are seeing a need to better handle 
> differing types of data within evaluators. For example, allowing some to 
> evaluate over individual values or arrays of values, like
> {code}
> sin(a)
> sin(a,b,c,d)
> sin([a,b,c,d])
> {code}
> The current structure of Evaluators makes this difficult, repetitive work. 
> Also, the hierarchy of classes behind evaluators can be confusing for 
> developers creating new evaluators - for example, when to use a 
> ComplexEvaluator vs. a BooleanEvaluator.
> A full refactoring of these classes will greatly enhance the usability and 
> future evolution of evaluators.






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1466 - Still unstable

2018-02-01 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1466/

14 tests failed.
FAILED:  
org.apache.lucene.codecs.simpletext.TestSimpleTextPostingsFormat.testDocsAndFreqsAndPositionsAndOffsetsAndPayloads

Error Message:
Captured an uncaught exception in thread: Thread[id=36, name=Thread-24, 
state=RUNNABLE, group=TGRP-TestSimpleTextPostingsFormat]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=36, name=Thread-24, state=RUNNABLE, 
group=TGRP-TestSimpleTextPostingsFormat]
Caused by: java.lang.RuntimeException: java.lang.AssertionError: position is 
wrong expected:<0> but was:<-1>
at __randomizedtesting.SeedInfo.seed([305C644F49CC26AA]:0)
at 
org.apache.lucene.index.RandomPostingsTester$TestThread.run(RandomPostingsTester.java:1208)
Caused by: java.lang.AssertionError: position is wrong expected:<0> but was:<-1>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.index.RandomPostingsTester.verifyEnum(RandomPostingsTester.java:1099)
at 
org.apache.lucene.index.RandomPostingsTester.testTermsOneThread(RandomPostingsTester.java:1330)
at 
org.apache.lucene.index.RandomPostingsTester.access$100(RandomPostingsTester.java:65)
at 
org.apache.lucene.index.RandomPostingsTester$TestThread.run(RandomPostingsTester.java:1206)


FAILED:  
org.apache.lucene.codecs.simpletext.TestSimpleTextPostingsFormat.testDocsAndFreqsAndPositionsAndOffsets

Error Message:
Captured an uncaught exception in thread: Thread[id=57, name=Thread-45, 
state=RUNNABLE, group=TGRP-TestSimpleTextPostingsFormat]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=57, name=Thread-45, state=RUNNABLE, 
group=TGRP-TestSimpleTextPostingsFormat]
Caused by: java.lang.RuntimeException: java.lang.AssertionError: position is 
wrong expected:<2> but was:<-1>
at __randomizedtesting.SeedInfo.seed([305C644F49CC26AA]:0)
at 
org.apache.lucene.index.RandomPostingsTester$TestThread.run(RandomPostingsTester.java:1208)
Caused by: java.lang.AssertionError: position is wrong expected:<2> but was:<-1>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.index.RandomPostingsTester.verifyEnum(RandomPostingsTester.java:1099)
at 
org.apache.lucene.index.RandomPostingsTester.testTermsOneThread(RandomPostingsTester.java:1330)
at 
org.apache.lucene.index.RandomPostingsTester.access$100(RandomPostingsTester.java:65)
at 
org.apache.lucene.index.RandomPostingsTester$TestThread.run(RandomPostingsTester.java:1206)


FAILED:  
org.apache.lucene.codecs.simpletext.TestSimpleTextPostingsFormat.testDocsAndFreqsAndPositions

Error Message:
Captured an uncaught exception in thread: Thread[id=75, name=Thread-63, 
state=RUNNABLE, group=TGRP-TestSimpleTextPostingsFormat]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=75, name=Thread-63, state=RUNNABLE, 
group=TGRP-TestSimpleTextPostingsFormat]
Caused by: java.lang.RuntimeException: java.lang.AssertionError: position is 
wrong expected:<0> but was:<-1>
at __randomizedtesting.SeedInfo.seed([305C644F49CC26AA]:0)
at 
org.apache.lucene.index.RandomPostingsTester$TestThread.run(RandomPostingsTester.java:1208)
Caused by: java.lang.AssertionError: position is wrong expected:<0> but was:<-1>
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.lucene.index.RandomPostingsTester.verifyEnum(RandomPostingsTester.java:1099)
at 
org.apache.lucene.index.RandomPostingsTester.testTermsOneThread(RandomPostingsTester.java:1330)
at 
org.apache.lucene.index.RandomPostingsTester.access$100(RandomPostingsTester.java:65)
at 
org.apache.lucene.index.RandomPostingsTester$TestThread.run(RandomPostingsTester.java:1206)


FAILED:  
org.apache.lucene.codecs.simpletext.TestSimpleTextPostingsFormat.testRandom

Error Message:
Captured an uncaught exception in thread: Thread[id=82, name=Thread-70, 
state=RUNNABLE, group=TGRP-TestSimpleTextPostingsFormat]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=82, name=Thread-70, state=RUNNABLE, 
group=TGRP-TestSimpleTextPostingsFormat]
Caused by: java.lang.RuntimeException: java.lang.AssertionError: position is 

[jira] [Resolved] (SOLR-11669) Policy Session lifecycle needs cleanup

2018-02-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11669.
---
   Resolution: Fixed
Fix Version/s: 7.3

> Policy Session lifecycle needs cleanup
> --
>
> Key: SOLR-11669
> URL: https://issues.apache.org/jira/browse/SOLR-11669
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.3
>
>
> {{Policy.Session}} is used in several places in the autoscaling framework. 
> However, its life-cycle is unclear and inconsistent - in some places 
> ({{ComputePlanAction}}) sessions are created directly via 
> {{Policy.createSession}}, in other places a thread-local caching in 
> {{PolicyHelper}} is used, in still other places the {{ObjectCache}} is used.
> {{Session}} creates a copy of live nodes on instantiation, which may become 
> stale, but {{PolicyHelper}} caching never checks whether this instance is 
> still valid while it's still in cache.
> Also, refcounting leads to awkward and non-intuitive code, such as 
> AddReplicaCmd:103.






[jira] [Assigned] (SOLR-11500) MOVEREPLICA api missing in V2

2018-02-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-11500:
-

Assignee: Noble Paul

> MOVEREPLICA api missing in V2
> -
>
> Key: SOLR-11500
> URL: https://issues.apache.org/jira/browse/SOLR-11500
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Trivial
>







[jira] [Updated] (SOLR-11669) Policy Session lifecycle needs cleanup

2018-02-01 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11669:
--
Issue Type: Improvement  (was: Bug)

> Policy Session lifecycle needs cleanup
> --
>
> Key: SOLR-11669
> URL: https://issues.apache.org/jira/browse/SOLR-11669
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.3
>
>
> {{Policy.Session}} is used in several places in the autoscaling framework. 
> However, its life-cycle is unclear and inconsistent - in some places 
> ({{ComputePlanAction}}) sessions are created directly via 
> {{Policy.createSession}}, in other places a thread-local caching in 
> {{PolicyHelper}} is used, in still other places the {{ObjectCache}} is used.
> {{Session}} creates a copy of live nodes on instantiation, which may become 
> stale, but {{PolicyHelper}} caching never checks whether this instance is 
> still valid while it's still in cache.
> Also, refcounting leads to awkward and non-intuitive code, such as 
> AddReplicaCmd:103.






[jira] [Updated] (SOLR-11360) Move field attribute docs to the ref guide

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11360:
-
Component/s: Schema and Analysis
 documentation

> Move field attribute docs to the ref guide
> --
>
> Key: SOLR-11360
> URL: https://issues.apache.org/jira/browse/SOLR-11360
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, Schema and Analysis
>Reporter: Varun Thacker
>Priority: Trivial
> Attachments: SOLR-11360.patch
>
>
> The default schema has some docs about valid field attributes. 
> Let's point users to the ref guide instead. 
> This makes the schema file ~30 lines smaller and points more users to the ref 
> guide.






[jira] [Updated] (SOLR-11379) Config API to switch on/off lucene's logging infoStream

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11379:
-
Component/s: config-api

> Config API to switch on/off lucene's logging infoStream 
> 
>
> Key: SOLR-11379
> URL: https://issues.apache.org/jira/browse/SOLR-11379
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Reporter: Amrit Sarkar
>Priority: Minor
>
> To enable infoStream logging in Solr, you currently need to edit 
> solrconfig.xml and reload the core.
> We intend to introduce a Config API command to enable/disable infoStream 
> logging in the near future.






[jira] [Updated] (SOLR-11410) A ref guide page on running solr on docker

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11410:
-
Component/s: documentation

> A ref guide page on running solr on docker
> --
>
> Key: SOLR-11410
> URL: https://issues.apache.org/jira/browse/SOLR-11410
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Varun Thacker
>Priority: Major
>
> A dedicated ref guide page on running Solr on Docker.
> At the end we could even link to 
> http://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 1277 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1277/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseSerialGC

6 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([90E24B78D3993F3B:18B674A27D6552C3]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCollectionsAPI

Error Message:
Error from server at 
https://127.0.0.1:40809/solr/awhollynewcollection_0_shard1_replica_n1: 
ClusterState says we are the leader 
(https://127.0.0.1:40809/solr/awhollynewcollection_0_shard1_replica_n1), but 
locally we don't think 

[jira] [Commented] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"

2018-02-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349423#comment-16349423
 ] 

ASF subversion and git services commented on SOLR-11613:


Commit 2b985784d963d1a216f64c00afb2cf41b93514e5 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2b98578 ]

SOLR-11613: Make message for missing dataimport config in UI more explicit


> Improve error in admin UI "Sorry, no dataimport-handler defined"
> 
>
> Key: SOLR-11613
> URL: https://issues.apache.org/jira/browse/SOLR-11613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>Priority: Minor
>  Labels: newdev
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11613.patch
>
>
> When the config has no working dataimport handlers, clicking on the 
> "dataimport" tab for a core/collection shows an error message that states 
> "Sorry, no dataimport-handler defined".  This is a little bit vague.
> One idea for an improved message:  "The solrconfig.xml file for this index 
> does not have an operational dataimport handler defined."






[jira] [Resolved] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11613.
--
   Resolution: Fixed
Fix Version/s: 7.3
   master (8.0)

Thanks Amrit!

> Improve error in admin UI "Sorry, no dataimport-handler defined"
> 
>
> Key: SOLR-11613
> URL: https://issues.apache.org/jira/browse/SOLR-11613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>Priority: Minor
>  Labels: newdev
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11613.patch
>
>
> When the config has no working dataimport handlers, clicking on the 
> "dataimport" tab for a core/collection shows an error message that states 
> "Sorry, no dataimport-handler defined".  This is a little bit vague.
> One idea for an improved message:  "The solrconfig.xml file for this index 
> does not have an operational dataimport handler defined."






[jira] [Commented] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"

2018-02-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349422#comment-16349422
 ] 

ASF subversion and git services commented on SOLR-11613:


Commit 98a0b837141c465fbb4f01273fa05ea2873afc3b in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=98a0b83 ]

SOLR-11613: Make message for missing dataimport config in UI more explicit


> Improve error in admin UI "Sorry, no dataimport-handler defined"
> 
>
> Key: SOLR-11613
> URL: https://issues.apache.org/jira/browse/SOLR-11613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11613.patch
>
>
> When the config has no working dataimport handlers, clicking on the 
> "dataimport" tab for a core/collection shows an error message that states 
> "Sorry, no dataimport-handler defined".  This is a little bit vague.
> One idea for an improved message:  "The solrconfig.xml file for this index 
> does not have an operational dataimport handler defined."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-11717) Document Streaming Examples using SolrJ

2018-02-01 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349408#comment-16349408
 ] 

Joel Bernstein edited comment on SOLR-11717 at 2/1/18 10:33 PM:


Let's also create a specific SolrJ client for Streaming Expressions. You can 
use SolrStream, but that was really meant to be a low-level Stream, not a 
user-facing Stream. We could call it StreamClient or something like that. 
Sample syntax:
{code:java}
StreamClient client = new StreamClient(zkUrl, collection, params);
client.open();
while (true) {
  Tuple t = client.read();
}
client.close();
{code}
 

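For comparison, here is a minimal sketch of what reading tuples looks like today with the existing low-level SolrStream API (assumptions: a running Solr node, a collection named {{collection1}}, and the default {{/stream}} handler). A StreamClient would presumably wrap a loop like this, including the EOF check that the pseudocode above leaves implicit.

{code:java}
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.SolrStream;
import org.apache.solr.common.params.ModifiableSolrParams;

public class StreamReadSketch {
  public static void main(String[] args) throws Exception {
    // Assumption: Solr is running locally with a collection named "collection1".
    String baseUrl = "http://localhost:8983/solr/collection1";

    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("expr", "search(collection1, q=\"*:*\", fl=\"id\", sort=\"id asc\")");
    params.set("qt", "/stream");

    SolrStream stream = new SolrStream(baseUrl, params);
    try {
      stream.open();
      while (true) {
        Tuple tuple = stream.read();
        if (tuple.EOF) { // the stream signals completion with an EOF tuple
          break;
        }
        System.out.println(tuple.getString("id"));
      }
    } finally {
      stream.close();
    }
  }
}
{code}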

was (Author: joel.bernstein):
Let's also create a specific Solrj client for Streaming Expressions. You can 
use SolrStream but that was really meant to be a low level Stream not a user 
facing Stream. We could call it StreamClient or something like that. Sample 
syntax:
{code:java}
StreamClient client = new StreamClient(zkUrl, params) ;
client.open();
while(true){
   Tuple t = client.read();     
}
client.close();

 {code}
 

> Document Streaming Examples using SolrJ
> ---
>
> Key: SOLR-11717
> URL: https://issues.apache.org/jira/browse/SOLR-11717
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Varun Thacker
>Priority: Major
>
> Under http://lucene.apache.org/solr/guide/streaming-expressions.html it would 
> be nice if we had a page dedicated to how to use streaming with SolrJ
> Topics we could cover
> - How to correctly create a stream expression : 
> https://issues.apache.org/jira/browse/SOLR-11600?focusedCommentId=16261446=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16261446
> - What is a SolrClientCache 
> - Which Stream objects should be a singleton and reused 
> - How many http connections / connections_per_host are we limited with by 
> default
> - Example where the Solr query node becomes the co-ordinator for the 
> streaming expressions ( {{TupleStream solrStream = new 
> SolrStream(solr_query_node, paramsLoc);}} )
> - How to use an "empty" Solr node to fire queries against. This node should 
> not have any collections. We could reduce the jetty thread pool size to rate 
> limit number of parallel queries potentially? 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"

2018-02-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349407#comment-16349407
 ] 

Cassandra Targett commented on SOLR-11613:
--

+1, I'll commit this simple change. If we're being strict about terminology, I 
think we should change it from "dataimport handler" to "DataImportHandler" 
since that's what it's called in other parts of Solr and in docs. I'll do that 
before I commit.

> Improve error in admin UI "Sorry, no dataimport-handler defined"
> 
>
> Key: SOLR-11613
> URL: https://issues.apache.org/jira/browse/SOLR-11613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11613.patch
>
>
> When the config has no working dataimport handlers, clicking on the 
> "dataimport" tab for a core/collection shows an error message that states 
> "Sorry, no dataimport-handler defined".  This is a little bit vague.
> One idea for an improved message:  "The solrconfig.xml file for this index 
> does not have an operational dataimport handler defined."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11717) Document Streaming Examples using SolrJ

2018-02-01 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349408#comment-16349408
 ] 

Joel Bernstein commented on SOLR-11717:
---

Let's also create a specific SolrJ client for Streaming Expressions. You can 
use SolrStream, but that was really meant to be a low-level Stream, not a 
user-facing Stream. We could call it StreamClient or something like that. 
Sample syntax:
{code:java}
StreamClient client = new StreamClient(zkUrl, params);
client.open();
while (true) {
  Tuple t = client.read();
}
client.close();
{code}
 

> Document Streaming Examples using SolrJ
> ---
>
> Key: SOLR-11717
> URL: https://issues.apache.org/jira/browse/SOLR-11717
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Varun Thacker
>Priority: Major
>
> Under http://lucene.apache.org/solr/guide/streaming-expressions.html it would 
> be nice if we had a page dedicated to how to use streaming with SolrJ
> Topics we could cover
> - How to correctly create a stream expression : 
> https://issues.apache.org/jira/browse/SOLR-11600?focusedCommentId=16261446=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16261446
> - What is a SolrClientCache 
> - Which Stream objects should be a singleton and reused 
> - How many http connections / connections_per_host are we limited with by 
> default
> - Example where the Solr query node becomes the co-ordinator for the 
> streaming expressions ( {{TupleStream solrStream = new 
> SolrStream(solr_query_node, paramsLoc);}} )
> - How to use an "empty" Solr node to fire queries against. This node should 
> not have any collections. We could reduce the jetty thread pool size to rate 
> limit number of parallel queries potentially? 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11613:
-
Component/s: Admin UI

> Improve error in admin UI "Sorry, no dataimport-handler defined"
> 
>
> Key: SOLR-11613
> URL: https://issues.apache.org/jira/browse/SOLR-11613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11613.patch
>
>
> When the config has no working dataimport handlers, clicking on the 
> "dataimport" tab for a core/collection shows an error message that states 
> "Sorry, no dataimport-handler defined".  This is a little bit vague.
> One idea for an improved message:  "The solrconfig.xml file for this index 
> does not have an operational dataimport handler defined."



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11387) testPoissonDistribution fails too frequently

2018-02-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11387.
---
Resolution: Resolved

> testPoissonDistribution fails too frequently
> 
>
> Key: SOLR-11387
> URL: https://issues.apache.org/jira/browse/SOLR-11387
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
>
> Currently the test case for the Poisson distribution is failing too 
> frequently. Part of the test relies on the mean and standard deviation being 
> within a specific range. But the random sampling done in the test often falls 
> outside of this range. I'll first increase the range and see if that 
> stabilizes the test.
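To make the failure mode concrete, here is a small sketch (assuming Apache Commons Math 3, which Solr's streaming math functions build on) of the kind of check involved: the sample mean of Poisson draws only approximates the distribution mean, so a tight tolerance fails intermittently and widening it reduces the failure rate.

{code:java}
import org.apache.commons.math3.distribution.PoissonDistribution;

public class PoissonSamplingSketch {
  public static void main(String[] args) {
    double lambda = 100.0;           // distribution mean
    int n = 10000;                   // sample size
    PoissonDistribution dist = new PoissonDistribution(lambda);

    long sum = 0;
    for (int i = 0; i < n; i++) {
      sum += dist.sample();
    }
    double sampleMean = (double) sum / n;

    // The test's assertion is of this shape; the tolerance is what gets widened.
    double tolerance = 1.0;
    System.out.println(Math.abs(sampleMean - lambda) < tolerance);
  }
}
{code}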



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11344) Fix copyOfRange Stream Evaluator so the end index parameter can be equal to the length array

2018-02-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11344.
---
Resolution: Resolved

> Fix copyOfRange Stream Evaluator so the end index parameter can be equal to 
> the length array
> 
>
> Key: SOLR-11344
> URL: https://issues.apache.org/jira/browse/SOLR-11344
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11344.patch
>
>
> This was broken during the refactoring of the Stream Evaluators. The issue is 
> that bounds checking was added which requires the end index to be less than 
> the size of the array being copied. But since the copy is exclusive of the 
> end index it needs to be the size of the array being copied or the last value 
> in the array won't be copied.
> A test case will be added to cover this scenario.
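For reference, the description matches the semantics of {{java.util.Arrays.copyOfRange}}, which the evaluator presumably mirrors: the end index is exclusive and may legally equal the array length. A small illustration:

{code:java}
import java.util.Arrays;

public class CopyOfRangeSketch {
  public static void main(String[] args) {
    double[] data = {1.0, 2.0, 3.0, 4.0};

    // The end index is exclusive, so passing data.length is the only way
    // to include the last element; a bounds check requiring end < data.length
    // would wrongly reject this call.
    double[] tail = Arrays.copyOfRange(data, 1, data.length);

    System.out.println(Arrays.toString(tail)); // [2.0, 3.0, 4.0]
  }
}
{code}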



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11344) Fix copyOfRange Stream Evaluator so the end index parameter can be equal to the length array

2018-02-01 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349379#comment-16349379
 ] 

Joel Bernstein commented on SOLR-11344:
---

I believe this is resolved.

> Fix copyOfRange Stream Evaluator so the end index parameter can be equal to 
> the length array
> 
>
> Key: SOLR-11344
> URL: https://issues.apache.org/jira/browse/SOLR-11344
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11344.patch
>
>
> This was broken during the refactoring of the Stream Evaluators. The issue is 
> that bounds checking was added which requires the end index to be less than 
> the size of the array being copied. But since the copy is exclusive of the 
> end index it needs to be the size of the array being copied or the last value 
> in the array won't be copied.
> A test case will be added to cover this scenario.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11203) Simple regression output should include R-square

2018-02-01 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein resolved SOLR-11203.
---
Resolution: Resolved

> Simple regression output should include R-square
> 
>
> Key: SOLR-11203
> URL: https://issues.apache.org/jira/browse/SOLR-11203
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11203.patch
>
>
> In the initial commit of the RegressionEvaluator the regression result 
> contains R which is the Pearson correlation for the data sets. But it does 
> not return R-square. This ticket adds RSquare to the regression result.
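For context, a minimal sketch with Apache Commons Math 3's SimpleRegression (which the RegressionEvaluator appears to wrap; treat that as an assumption) showing both statistics side by side; for simple linear regression, R-square is simply the square of R.

{code:java}
import org.apache.commons.math3.stat.regression.SimpleRegression;

public class RSquareSketch {
  public static void main(String[] args) {
    SimpleRegression reg = new SimpleRegression();
    reg.addData(1.0, 2.1);
    reg.addData(2.0, 3.9);
    reg.addData(3.0, 6.2);
    reg.addData(4.0, 8.1);

    System.out.println("R       = " + reg.getR());       // Pearson correlation
    System.out.println("RSquare = " + reg.getRSquare()); // coefficient of determination
  }
}
{code}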



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11203) Simple regression output should include R-square

2018-02-01 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349375#comment-16349375
 ] 

Joel Bernstein commented on SOLR-11203:
---

Yes

> Simple regression output should include R-square
> 
>
> Key: SOLR-11203
> URL: https://issues.apache.org/jira/browse/SOLR-11203
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11203.patch
>
>
> In the initial commit of the RegressionEvaluator the regression result 
> contains R which is the Pearson correlation for the data sets. But it does 
> not return R-square. This ticket adds RSquare to the regression result.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11651) Solr Resources web page should highlight the ref guide section

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11651.
--
Resolution: Fixed

I moved the Solr Ref Guide section to the top of the Documentation section. I 
also reworked what was "Release Documentation" to make it clear it's about the 
Javadocs. The bulk of that section was inaccurate (it said javadocs were in the 
release artifacts, when they aren't), so I mostly removed it.

> Solr Resources web page should highlight the ref guide section
> --
>
> Key: SOLR-11651
> URL: https://issues.apache.org/jira/browse/SOLR-11651
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: website
>Reporter: Varun Thacker
>Priority: Minor
>
> Under the Documentation section, we have a very small description of what the 
> ref guide is. We also list the Javadocs before this (which is very sparse).
> Let's highlight the ref guide, which we put a lot of effort into maintaining, 
> and encourage users to click on that over the Javadocs.
> Resources Page : http://lucene.apache.org/solr/resources.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11651) Solr Resources web page should highlight the ref guide section

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11651:
-
Component/s: website

> Solr Resources web page should highlight the ref guide section
> --
>
> Key: SOLR-11651
> URL: https://issues.apache.org/jira/browse/SOLR-11651
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: website
>Reporter: Varun Thacker
>Priority: Minor
>
> Under the Documentation section, we have a very small description of what the 
> ref guide is. We also list the Javadocs before this (which is very sparse).
> Let's highlight the ref guide, which we put a lot of effort into maintaining, 
> and encourage users to click on that over the Javadocs.
> Resources Page : http://lucene.apache.org/solr/resources.html



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11695) JSON FacetModule needs equivalents for StatsComponent's "count" and "missing" features

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11695?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11695:
-
Component/s: Facet Module

> JSON FacetModule needs equivalents for StatsComponent's "count" and "missing" 
> features
> --
>
> Key: SOLR-11695
> URL: https://issues.apache.org/jira/browse/SOLR-11695
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Hoss Man
>Priority: Major
>
> StatsComponent supports stats named "count" and "missing":
> * count: for the set of documents we're computing stats over, "how many 
> _non-distinct_ values exist in those documents in the specified field?" (or 
> in the case of an arbitrary function: "in how many of these documents does 
> true==ValueSource.exist()" ?)
> ** not to be confused with the number of _unique_ values (approx. "cardinality" 
> or exact "countDistinct")
> * missing: for the set of documents we're computing stats over, "how many of 
> those documents do not have any value in the specified field?" (or in the 
> case of an arbitrary function: "in how many of thse documents does 
> false==ValueSource.exist()" ?)
> (NOTE: for a single valued field, these are essentially inverses of each 
> other, but for multivalued fields "count" actually returns the total number 
> of "value instances" not just the number of docs that have at least one value)
> AFAICT there is no equivalent functionality supported by the JSON 
> FacetModule, which will be a blocker preventing some users from migrating 
> from using stats.field (or facet.pivot+stats.field) to json.facet.
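A small worked example (hypothetical documents, multi-valued field {{f}}) may make the two statistics concrete:

{code}
doc1: f = [a, b]     (2 value instances)
doc2: f = [c]        (1 value instance)
doc3: f absent       (0 value instances)

StatsComponent over these 3 documents:
  count   = 3   (2 + 1 + 0 value instances, not distinct values)
  missing = 1   (only doc3 has no value in f)
{code}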



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11686) Make collection creation logging less verbose

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11686:
-
Component/s: SolrCloud

> Make collection creation logging less verbose
> -
>
> Key: SOLR-11686
> URL: https://issues.apache.org/jira/browse/SOLR-11686
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Major
>
> Here's roughly what gets logged:
> {code}
> INFO  - 2017-11-27 21:55:09.685; [   ] 
> org.apache.solr.handler.admin.CollectionsHandler; Invoked Collection Action 
> :create with params 
> replicationFactor=1=-1=test=test=CREATE=1=json
>  and sendToOCPQueue=true
> INFO  - 2017-11-27 21:55:09.719; [   ] 
> org.apache.solr.cloud.CreateCollectionCmd; Create collection test
> INFO  - 2017-11-27 21:55:09.928; [   ] 
> org.apache.solr.cloud.overseer.SliceMutator; createReplica() {
>   "operation":"ADDREPLICA",
>   "collection":"test",
>   "shard":"shard1",
>   "core":"test_shard1_replica_n1",
>   "state":"down",
>   "base_url":"http://172.16.0.83:8983/solr",
>   "type":"NRT"} 
> INFO  - 2017-11-27 21:55:10.187; [   ] 
> org.apache.solr.handler.admin.CoreAdminOperation; core create command 
> qt=/admin/cores=core_node2=test=true=test_shard1_replica_n1=CREATE=1=test=shard1=javabin=2=NRT
> INFO  - 2017-11-27 21:55:10.190; [   ] 
> org.apache.solr.core.TransientSolrCoreCacheDefault; Allocating transient 
> cache for 2147483647 transient cores
> INFO  - 2017-11-27 21:55:10.357; [   ] 
> org.apache.solr.common.cloud.ZkStateReader$StateWatcher; A cluster state 
> change: [WatchedEvent state:SyncConnected type:NodeDataChanged 
> path:/collections/test/state.json] for collection [test] has occurred - 
> updating... (live nodes size: [1])
> INFO  - 2017-11-27 21:55:10.357; [   ] 
> org.apache.solr.common.cloud.ZkStateReader$StateWatcher; A cluster state 
> change: [WatchedEvent state:SyncConnected type:NodeDataChanged 
> path:/collections/test/state.json] for collection [test] has occurred - 
> updating... (live nodes size: [1])
> INFO  - 2017-11-27 21:55:11.256; [   ] org.apache.solr.core.RequestParams; 
> conf resource params.json loaded . version : 0 
> INFO  - 2017-11-27 21:55:11.257; [   ] org.apache.solr.core.RequestParams; 
> request params refreshed to version 0
> INFO  - 2017-11-27 21:55:11.272; [   ] 
> org.apache.solr.core.SolrResourceLoader; [test_shard1_replica_n1] Added 53 
> libs to classloader, from paths: 
> [/Users/varunthacker/solr-7.1.0/contrib/clustering/lib, 
> /Users/varunthacker/solr-7.1.0/contrib/extraction/lib, 
> /Users/varunthacker/solr-7.1.0/contrib/langid/lib, 
> /Users/varunthacker/solr-7.1.0/contrib/velocity/lib, 
> /Users/varunthacker/solr-7.1.0/dist]
> INFO  - 2017-11-27 21:55:11.479; [   ] org.apache.solr.core.SolrConfig; Using 
> Lucene MatchVersion: 7.1.0
> INFO  - 2017-11-27 21:55:11.616; [   ] org.apache.solr.schema.IndexSchema; 
> [test_shard1_replica_n1] Schema name=default-config
> INFO  - 2017-11-27 21:55:12.161; [   ] org.apache.solr.schema.IndexSchema; 
> Loaded schema default-config/1.6 with uniqueid field id
> INFO  - 2017-11-27 21:55:12.206; [   ] org.apache.solr.core.CoreContainer; 
> Creating SolrCore 'test_shard1_replica_n1' using configuration from 
> collection test, trusted=true
> INFO  - 2017-11-27 21:55:12.230; [   ] org.apache.solr.core.SolrCore; 
> solr.RecoveryStrategy.Builder
> INFO  - 2017-11-27 21:55:12.235; [   ] org.apache.solr.core.SolrCore; 
> [[test_shard1_replica_n1] ] Opening new SolrCore at 
> [/Users/varunthacker/solr-7.1.0/example/cloud/node1/solr/test_shard1_replica_n1],
>  
> dataDir=[/Users/varunthacker/solr-7.1.0/example/cloud/node1/solr/test_shard1_replica_n1/data/]
> INFO  - 2017-11-27 21:55:12.338; [   ] 
> org.apache.solr.response.XSLTResponseWriter; xsltCacheLifetimeSeconds=5
> INFO  - 2017-11-27 21:55:12.641; [   ] org.apache.solr.update.UpdateHandler; 
> Using UpdateLog implementation: org.apache.solr.update.UpdateLog
> INFO  - 2017-11-27 21:55:12.641; [   ] org.apache.solr.update.UpdateLog; 
> Initializing UpdateLog: dataDir= defaultSyncLevel=FLUSH numRecordsToKeep=100 
> maxNumLogsToKeep=10 numVersionBuckets=65536
> INFO  - 2017-11-27 21:55:12.649; [   ] org.apache.solr.update.CommitTracker; 
> Hard AutoCommit: if uncommited for 15000ms; 
> INFO  - 2017-11-27 21:55:12.649; [   ] org.apache.solr.update.CommitTracker; 
> Soft AutoCommit: disabled
> INFO  - 2017-11-27 21:55:12.668; [   ] 
> org.apache.solr.search.SolrIndexSearcher; Opening 
> [Searcher@70fa64c2[test_shard1_replica_n1] main]
> INFO  - 2017-11-27 21:55:12.681; [   ] 
> org.apache.solr.rest.ManagedResourceStorage$ZooKeeperStorageIO; Configured 
> ZooKeeperStorageIO with 

[jira] [Updated] (SOLR-11709) JSON "Stats" Facets should support directly specifying a domain change (for filters/blockjoin/etc...)

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11709:
-
Component/s: Facet Module

> JSON "Stats" Facets should support directly specifying a domain change (for 
> filters/blockjoin/etc...)
> -
>
> Key: SOLR-11709
> URL: https://issues.apache.org/jira/browse/SOLR-11709
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Hoss Man
>Priority: Major
>
> AFAICT, the simple string syntax of JSON Facet Modules "statistic facets" 
> (ex: {{foo:"min(fieldA)"}} ) means there is no way to request a statistic 
> with a domain change applied -- stats are always computed relative to its 
> immediate parent (ie: the baseset matching the {{q}} for a top level stat, or 
> the constrained set if a stat is a subfacet of something else)
> This means that things like the simple "fq exclusion" in StatsComponent have 
> no straightforward equivalent in JSON faceting. 
> The work around appears to be to use a {{type:"query", q:"*:*, domain:...}} 
> parent and specify the stats you are interested in as sub-facets...
> {code}
> $ curl 'http://localhost:8983/solr/techproducts/query' -d 
> 'q=*:*=true={!tag=boo}id:hoss=true={!max=true 
> ex=boo}popularity=0={
> bar: { type:"query", q:"*:*", domain:{excludeTags:boo}, facet: { 
> foo:"max(popularity)" } } }'
> {
>   "response":{"numFound":0,"start":0,"docs":[]
>   },
>   "facets":{
> "count":0,
> "bar":{
>   "count":32,
>   "foo":10}},
>   "stats":{
> "stats_fields":{
>   "popularity":{
> "max":10.0
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11717) Document Streaming Examples using SolrJ

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11717:
-
Component/s: streaming expressions
 documentation

> Document Streaming Examples using SolrJ
> ---
>
> Key: SOLR-11717
> URL: https://issues.apache.org/jira/browse/SOLR-11717
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation, streaming expressions
>Reporter: Varun Thacker
>Priority: Major
>
> Under http://lucene.apache.org/solr/guide/streaming-expressions.html it would 
> be nice if we had a page dedicated to how to use streaming with SolrJ
> Topics we could cover
> - How to correctly create a stream expression : 
> https://issues.apache.org/jira/browse/SOLR-11600?focusedCommentId=16261446=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16261446
> - What is a SolrClientCache 
> - Which Stream objects should be a singleton and reused 
> - How many http connections / connections_per_host are we limited with by 
> default
> - Example where the Solr query node becomes the co-ordinator for the 
> streaming expressions ( {{TupleStream solrStream = new 
> SolrStream(solr_query_node, paramsLoc);}} )
> - How to use an "empty" Solr node to fire queries against. This node should 
> not have any collections. We could reduce the jetty thread pool size to rate 
> limit number of parallel queries potentially? 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4419 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4419/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([BD0380BE22963D59:3557BF648C6A50A1]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:915)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-11820) Provide a configurable hook to filter config API calls

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11820:
-
Component/s: config-api

> Provide a configurable hook to filter config API calls
> --
>
> Key: SOLR-11820
> URL: https://issues.apache.org/jira/browse/SOLR-11820
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api
>Reporter: Anshum Gupta
>Priority: Major
>
> It would be good to have a mechanism to hook in custom 
> whitelisting/blacklisting logic for config updates via the API.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11848) User guide update on working with XML updates

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11848.
--
   Resolution: Fixed
Fix Version/s: 7.3

Thanks [~dwoj...@gmail.com]! If your PR doesn't close automatically, would you 
go in and close it?

> User guide update on working with XML updates
> -
>
> Key: SOLR-11848
> URL: https://issues.apache.org/jira/browse/SOLR-11848
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.2
>Reporter: Dariusz Wojtas
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.3
>
>
> I have added some additional information to the user guide in repository:
>   https://github.com/apache/lucene-solr/pull/303
> this covers info about:
> # grouping XML commands with the <update> element
> # more effective way of working with curl for large files - the original 
> approach described in the guide with _--data-binary_ results in curl out of 
> memory errors for large files being uploaded



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #303: Guide update: Grouping XML commands and curl ...

2018-02-01 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/303


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11848) User guide update on working with XML updates

2018-02-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349297#comment-16349297
 ] 

ASF subversion and git services commented on SOLR-11848:


Commit a4320aab8daac8994e78a28fdab4c4db6145d15a in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a4320aa ]

SOLR-11848: Ref Guide: include info on grouping operations and using curl for 
large files. This closes #303.


> User guide update on working with XML updates
> -
>
> Key: SOLR-11848
> URL: https://issues.apache.org/jira/browse/SOLR-11848
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.2
>Reporter: Dariusz Wojtas
>Assignee: Cassandra Targett
>Priority: Minor
>
> I have added some additional information to the user guide in repository:
>   https://github.com/apache/lucene-solr/pull/303
> this covers info about:
> # grouping XML commands with the <update> element
> # more effective way of working with curl for large files - the original 
> approach described in the guide with _--data-binary_ results in curl out of 
> memory errors for large files being uploaded



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11848) User guide update on working with XML updates

2018-02-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349296#comment-16349296
 ] 

ASF subversion and git services commented on SOLR-11848:


Commit abedf4170e54abc72efb099c75a2f40829e78e6e in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=abedf41 ]

SOLR-11848: Ref Guide: include info on grouping operations and using curl for 
large files. This closes #303.


> User guide update on working with XML updates
> -
>
> Key: SOLR-11848
> URL: https://issues.apache.org/jira/browse/SOLR-11848
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.2
>Reporter: Dariusz Wojtas
>Assignee: Cassandra Targett
>Priority: Minor
>
> I have added some additional information to the user guide in repository:
>   https://github.com/apache/lucene-solr/pull/303
> this covers info about:
> # grouping XML commands with the <update> element
> # more effective way of working with curl for large files - the original 
> approach described in the guide with _--data-binary_ results in curl out of 
> memory errors for large files being uploaded



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-11848) User guide update on working with XML updates

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett reassigned SOLR-11848:


Assignee: Cassandra Targett

> User guide update on working with XML updates
> -
>
> Key: SOLR-11848
> URL: https://issues.apache.org/jira/browse/SOLR-11848
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Affects Versions: 7.2
>Reporter: Dariusz Wojtas
>Assignee: Cassandra Targett
>Priority: Minor
>
> I have added some additional information to the user guide in repository:
>   https://github.com/apache/lucene-solr/pull/303
> this covers info about:
> # grouping XML commands with the <update> element
> # more effective way of working with curl for large files - the original 
> approach described in the guide with _--data-binary_ results in curl out of 
> memory errors for large files being uploaded



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11017) Add support for unique metric to facet streaming function

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11017:
-
Issue Type: Improvement  (was: Bug)

> Add support for unique metric to facet streaming function
> -
>
> Key: SOLR-11017
> URL: https://issues.apache.org/jira/browse/SOLR-11017
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.6
>Reporter: Susheel Kumar
>Priority: Major
>  Labels: patch, stream, streaming, streaming-api
>
> Add support for a Unique metric to the facet function, which under the covers 
> utilizes the JSON Facet API.
> The challenge is to come up with a keyword which can be used for 
> UniqueMetric. Currently "unique" is used for UniqueStream and can't be 
> utilized.  
> Does "uniq" make sense? 
> ...
> ...
>   StreamFactory factory = new StreamFactory()
> .withCollectionZkHost (...) 
>.withFunctionName("facet", FacetStream.class)
> .withFunctionName("sum", SumMetric.class)
> .withFunctionName("unique", UniqueStream.class)
> .withFunctionName("unique", UniqueMetric.class)
> ...
> ...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr issue #317: LUCENE-8145: OffsetsEnum is now unitary

2018-02-01 Thread dsmiley
Github user dsmiley commented on the issue:

https://github.com/apache/lucene-solr/pull/317
  
Totally agree on morphing PassageScorer as you describe; I was thinking the 
same thing.  Perhaps a separate issue?  I'm not sure how back-compat we should 
be for that... it's all @lucene.experimental after all but still.


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11124) MoveReplicaCmd should skip deleting old replica in case of its node is not live

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11124:
-
Component/s: SolrCloud

> MoveReplicaCmd should skip deleting old replica in case of its node is not 
> live
> ---
>
> Key: SOLR-11124
> URL: https://issues.apache.org/jira/browse/SOLR-11124
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.1
>
> Attachments: SOLR-11124.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11775) json.facet can use inconsistent Long/Integer for "count" depending on shard count

2018-02-01 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16349265#comment-16349265
 ] 

Yonik Seeley commented on SOLR-11775:
-

bq. It doesn't feel fair to pass the buck and say "client code should not rely 
on getting what we're giving you"

I was just pointing out that we already do this in other places - it's not 
unique to json facets.
The parts of Solr that consume code like this typically use 
((Number)value).longValue()

I'm not opposed to changing the behavior, but I disagree that this is a bug.
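For anyone hitting this from client code, the defensive pattern mentioned above looks roughly like this (a sketch; {{bucket}} stands for whatever NamedList/Map your response parsing hands you):

{code:java}
import java.util.Map;

public class FacetCountSketch {
  // Read a facet "count" without assuming whether the server sent Integer or Long.
  static long countOf(Map<String, Object> bucket) {
    Object value = bucket.get("count");
    return ((Number) value).longValue();
  }
}
{code}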

> json.facet can use inconsistent Long/Integer for "count" depending on shard 
> count
> -
>
> Key: SOLR-11775
> URL: https://issues.apache.org/jira/browse/SOLR-11775
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Facet Module
>Reporter: Hoss Man
>Priority: Major
>
> (NOTE: I noticed this while working on a test for {{type: range}} but it's 
> possible other facet types may be affected as well)
> When dealing with a single core request -- either standalone or a collection 
> with only one shard -- json.facet seems to use "Integer" objects to return 
> the "count" of facet buckets, however if the shard count is increased then 
> the end client gets a "Long" object for the "count"
> (This isn't noticeable when using {{wt=json}} but can be very problematic when 
> trying to write client code using {{wt=xml}} or SolrJ.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11124) MoveReplicaCmd should skip deleting old replica in case of its node is not live

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11124.
--
   Resolution: Fixed
Fix Version/s: 7.1

Resolving this since I found it in CHANGES for 7.1, so I assume it's complete.

> MoveReplicaCmd should skip deleting old replica in case of its node is not 
> live
> ---
>
> Key: SOLR-11124
> URL: https://issues.apache.org/jira/browse/SOLR-11124
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: 7.1
>
> Attachments: SOLR-11124.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11137) LTR: misleading error message when loading a model

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11137:
-
Component/s: contrib - LTR

> LTR: misleading error message when loading a model
> --
>
> Key: SOLR-11137
> URL: https://issues.apache.org/jira/browse/SOLR-11137
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Reporter: Diego Ceccarelli
>Priority: Minor
>
> Loading a model can fail for several reasons when calling the model 
> constructor, but the error message always reports that the Model type does 
> not exist.
> https://github.com/apache/lucene-solr/blob/master/solr/contrib/ltr/src/java/org/apache/solr/ltr/model/LTRScoringModel.java#L103



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11139) FreeTextSuggester exits on first error while building suggester index

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11139:
-
Priority: Major  (was: Blocker)

> FreeTextSuggester exits on first error while building suggester index
> -
>
> Key: SOLR-11139
> URL: https://issues.apache.org/jira/browse/SOLR-11139
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Suggester
>Affects Versions: 5.5.4
>Reporter: Angel Todorov
>Priority: Major
>
> If an exception is thrown below, it will exit the while loop, therefore 
> stopping building of the suggester index. In fact, it will not build 
> anything, because the exception is not handled in any way. This is highly 
> unacceptable, because in an enterprise environment there will always be data 
> that does not conform to the expected rules. 
>   while (true) {
> BytesRef term = termsEnum.next();
> if (term == null) {
>   break;
> }
> int ngramCount = countGrams(term);
> if (ngramCount > grams) {
>   throw new IllegalArgumentException("tokens must not contain 
> separator byte; got token=" + term + " but gramCount=" + ngramCount + ", 
> which is greater than expected max ngram size=" + grams);
> }
> if (ngramCount == 1) {
>   totTokens += termsEnum.totalTermFreq();
> }
> builder.add(Util.toIntsRef(term, scratchInts), 
> encodeWeight(termsEnum.totalTermFreq()));
>   }



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_144) - Build # 21378 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21378/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseSerialGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest: 1) Thread[id=1053, 
name=qtp2115051602-1053, state=TIMED_WAITING, group=TGRP-CloudSolrClientTest]   
  at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.impl.CloudSolrClientTest: 
   1) Thread[id=1053, name=qtp2115051602-1053, state=TIMED_WAITING, 
group=TGRP-CloudSolrClientTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([51F421AB33ED2C1E]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=1053, name=qtp2115051602-1053, state=TIMED_WAITING, 
group=TGRP-CloudSolrClientTest] at sun.misc.Unsafe.park(Native Method)  
   at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=1053, name=qtp2115051602-1053, state=TIMED_WAITING, 
group=TGRP-CloudSolrClientTest]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2163)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([51F421AB33ED2C1E]:0)


FAILED:  org.apache.solr.cloud.TestUtilizeNode.test

Error Message:
no replica should be present in  127.0.0.1:41265_solr

Stack Trace:
java.lang.AssertionError: no replica should be present in  127.0.0.1:41265_solr
at 
__randomizedtesting.SeedInfo.seed([B5AD3F2801E32AB2:3DF900F2AF1F474A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.TestUtilizeNode.test(TestUtilizeNode.java:99)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 

[jira] [Resolved] (SOLR-11156) Edismax query parser fails to parse q=(*:*)

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11156.
--
Resolution: Cannot Reproduce

I just tried to reproduce this with 7.2 and 6.4.2 and in both cases I see:
 
{code:json}
"debug": {
"rawquerystring": "*:*",
"querystring": "*:*",
"parsedquery": "(+MatchAllDocsQuery(*:*))/no_coord",
"parsedquery_toString": "+*:*",
"QParser": "ExtendedDismaxQParser",
...
{code}

I think unless we get some additional information, this isn't a bug.

> Edismax query parser fails to parse q=(*:*)
> ---
>
> Key: SOLR-11156
> URL: https://issues.apache.org/jira/browse/SOLR-11156
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 5.5.1
> Environment: Solr 5.5.1
>Reporter: Alfonso Noriega Meneses
>Priority: Major
>  Labels: edismax, query-parser, solr
>
> The Edismax query parser parses the query {{(\*:\*)}} as an exact text 
> match instead of a match-all-documents query.
> Setting the debugQuery param to true and the query  {{\*:\*}}  we get this 
> json:
> {code}
> "debug": {
>"rawquerystring": "*:*",
>"querystring": "*:*",
>"parsedquery": "(+MatchAllDocsQuery(*:*))/no_coord",
>"parsedquery_toString": "+*:*",
>...
> }
> {code}
> But with the query {{(\*:\*)}} the Edismax query parser returns a 
> {{DisjunctionMaxQuery}} like shown in the following json:
> {code}
> "debug": {
>"rawquerystring": "(*:*)",
>"querystring": "(*:*)",
>"parsedquery": "(+DisjunctionMaxQuery((text:*\\:*)))/no_coord",
>"parsedquery_toString": "+(text:*\\:*)",
>"explain": {},
>"QParser": "ExtendedDismaxQParser",
>...
> }
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11937) stricter -Xlint for contrib/ltr (code and tests)

2018-02-01 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-11937:
---
Description: Attached patch illustrates 'status quo' vs. 'goal' and cleanup 
could be done incrementally and/or by multiple people.

> stricter -Xlint for contrib/ltr (code and tests)
> 
>
> Key: SOLR-11937
> URL: https://issues.apache.org/jira/browse/SOLR-11937
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11937.patch
>
>
> Attached patch illustrates 'status quo' vs. 'goal' and cleanup could be done 
> incrementally and/or by multiple people.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11937) stricter -Xlint for contrib/ltr (code and tests)

2018-02-01 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11937?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-11937:
---
Attachment: SOLR-11937.patch

> stricter -Xlint for contrib/ltr (code and tests)
> 
>
> Key: SOLR-11937
> URL: https://issues.apache.org/jira/browse/SOLR-11937
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11937.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11937) stricter -Xlint for contrib/ltr (code and tests)

2018-02-01 Thread Christine Poerschke (JIRA)
Christine Poerschke created SOLR-11937:
--

 Summary: stricter -Xlint for contrib/ltr (code and tests)
 Key: SOLR-11937
 URL: https://issues.apache.org/jira/browse/SOLR-11937
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Christine Poerschke






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11203) Simple regression output should include R-square

2018-02-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349217#comment-16349217
 ] 

Cassandra Targett commented on SOLR-11203:
--

[~joel.bernstein], is this one that could be resolved?

> Simple regression output should include R-square
> 
>
> Key: SOLR-11203
> URL: https://issues.apache.org/jira/browse/SOLR-11203
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11203.patch
>
>
> In the initial commit of the RegressionEvaluator, the regression result 
> contains R, which is the Pearson correlation for the data sets, but it does 
> not return R-square. This ticket adds RSquare to the regression result.
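
For reference, a minimal sketch of the two values side by side, assuming the evaluator ultimately delegates to commons-math's SimpleRegression (which already exposes both):

{code:java}
import org.apache.commons.math3.stat.regression.SimpleRegression;

public class RSquareDemo {
  public static void main(String[] args) {
    SimpleRegression reg = new SimpleRegression();
    reg.addData(1, 2.1);
    reg.addData(2, 3.9);
    reg.addData(3, 6.2);
    reg.addData(4, 7.8);
    // R is Pearson's correlation; RSquare is the coefficient of determination.
    System.out.println("R       = " + reg.getR());
    System.out.println("RSquare = " + reg.getRSquare()); // equals R*R for simple regression
  }
}
{code}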



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11217) Mathematical notation not supported in Solr Ref Guide

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett resolved SOLR-11217.
--
Resolution: Unresolved

I'm going to resolve this since there's been no change to the state of things 
in the past several months. I think there will be a day when it's easier, but 
we'll file a new issue if that day arrives.

> Mathematical notation not supported in Solr Ref Guide
> -
>
> Key: SOLR-11217
> URL: https://issues.apache.org/jira/browse/SOLR-11217
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Houston Putman
>Priority: Minor
>
> The template used to build the Solr Ref Guide from the asciidoctor pages 
> removes the needed javascript for mathematical notation. 
> When building the webpage, asciidoctor puts a tag like the one below at the 
> bottom of the html
> {code:html}
> <script src="#{cdn_base}/mathjax/2.6.0/MathJax.js?config=TeX-MML-AM_HTMLorMML"></script>
> {code}
> and some other tags as well.
> However these are not included in the sections that are inserted into the 
> template, so they are left out and the mathematical notation is not converted 
> to MathJax that can be viewed in a browser.
> This can be tested by adding any stem notation in an asciidoctor 
> solr-ref-page, such as the following text:
> {code}
> asciimath:[sqrt(4) = 2].
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11266) V2 API returning wrong content-type

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11266:
-
Component/s: v2 API

> V2 API returning wrong content-type
> ---
>
> Key: SOLR-11266
> URL: https://issues.apache.org/jira/browse/SOLR-11266
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: v2 API
>Reporter: Ishan Chattopadhyaya
>Priority: Major
>
> The content-type of the returned value is wrong in many places. It should 
> return "application/json", but instead returns "text/plain".
> Here's an example:
> {code}
> [ishan@t430 ~] $ curl -v 
> "http://localhost:8983/api/collections/products/select?q=*:*=0;
> *   Trying 127.0.0.1...
> * TCP_NODELAY set
> * Connected to localhost (127.0.0.1) port 8983 (#0)
> > GET /api/collections/products/select?q=*:*&rows=0 HTTP/1.1
> > Host: localhost:8983
> > User-Agent: curl/7.51.0
> > Accept: */*
> > 
> < HTTP/1.1 200 OK
> < Content-Type: text/plain;charset=utf-8
> < Content-Length: 184
> < 
> {
>   "responseHeader":{
> "zkConnected":true,
> "status":0,
> "QTime":1,
> "params":{
>   "q":"*:*",
>   "rows":"0"}},
>   "response":{"numFound":260,"start":0,"docs":[]
>   }}
> * Curl_http_done: called premature == 0
> * Connection #0 to host localhost left intact
> {code}
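
A quick way to double-check the returned header from Java (the node URL and "products" collection are just the values from the example above):

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class ContentTypeCheck {
  public static void main(String[] args) throws Exception {
    URL url = new URL("http://localhost:8983/api/collections/products/select?q=*:*&rows=0");
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    try {
      // Expected: application/json; observed per this report: text/plain;charset=utf-8
      System.out.println(conn.getResponseCode() + " " + conn.getContentType());
    } finally {
      conn.disconnect();
    }
  }
}
{code}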



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 433 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/433/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestXPathEntityProcessor

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestXPathEntityProcessor_85672A1F806B2647-001\dih-properties-008:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestXPathEntityProcessor_85672A1F806B2647-001\dih-properties-008

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestXPathEntityProcessor_85672A1F806B2647-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestXPathEntityProcessor_85672A1F806B2647-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestXPathEntityProcessor_85672A1F806B2647-001\dih-properties-008:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestXPathEntityProcessor_85672A1F806B2647-001\dih-properties-008
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestXPathEntityProcessor_85672A1F806B2647-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestXPathEntityProcessor_85672A1F806B2647-001

at __randomizedtesting.SeedInfo.seed([85672A1F806B2647]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.ReplaceNodeNoTargetTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([ADBC9BE1D6D57877:25E8A43B7829158F]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.ReplaceNodeNoTargetTest.test(ReplaceNodeNoTargetTest.java:92)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 1276 - Still Unstable!

2018-02-01 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1276/
Java: 64bit/jdk-9.0.1 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.TestLBHttpSolrClient

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.TestLBHttpSolrClient: 1) Thread[id=257, 
name=qtp666474163-257, state=TIMED_WAITING, group=TGRP-TestLBHttpSolrClient]
 at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.TestLBHttpSolrClient: 
   1) Thread[id=257, name=qtp666474163-257, state=TIMED_WAITING, 
group=TGRP-TestLBHttpSolrClient]
at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([BCE90B92ED750BB9]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.TestLBHttpSolrClient

Error Message:
There are still zombie threads that couldn't be terminated:1) 
Thread[id=257, name=qtp666474163-257, state=TIMED_WAITING, 
group=TGRP-TestLBHttpSolrClient] at 
java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
 at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
 at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
 at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
 at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie 
threads that couldn't be terminated:
   1) Thread[id=257, name=qtp666474163-257, state=TIMED_WAITING, 
group=TGRP-TestLBHttpSolrClient]
at java.base@9.0.1/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@9.0.1/java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:234)
at 
java.base@9.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2192)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.reservedWait(ReservedThreadExecutor.java:308)
at 
app//org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:373)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708)
at 
app//org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626)
at java.base@9.0.1/java.lang.Thread.run(Thread.java:844)
at __randomizedtesting.SeedInfo.seed([BCE90B92ED750BB9]:0)


FAILED:  org.apache.solr.cloud.TestUtilizeNode.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([A100ED69420B6C37:2954D2B3ECF701CF]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
  

Re: Cleaning up the code

2018-02-01 Thread Erick Erickson
Great, thanks! Half the battle is often knowing it's possible.

On Thu, Feb 1, 2018 at 10:48 AM, Robert Muir  wrote:
> Yes it is important to do it incrementally.
>
> You can override ant properties in specific modules, by defining them
> to be something stricter before importing them from somewhere else.
> For example, analysis/icu has no warnings, so override the default
> javac.args at the very top, before it "imports" the rest of the build.
>
> diff --git a/lucene/analysis/icu/build.xml b/lucene/analysis/icu/build.xml
>
> +  
>
>
> Now the build will fail if someone introduces a warning into that
> module, and you can move on to the next one.
> Quick and dirty explanation but you get the idea...
>
>
> On Thu, Feb 1, 2018 at 1:34 PM, Erick Erickson  
> wrote:
>> You've seen a couple of JIRAs about cleaning up precommit lint
>> warnings, and Robert mentioned compiler warnings which I totally agree
>> with.
>>
>> It's just a _lot_ of code to wade through, and it'll be depressing to
>> fix up something and have similar stuff creep back in. It'll take
>> quite some time to clean them _all_ up, so there'll be quite a window
>> for backsliding.
>>
>> Is there a way to have precommit and build fail for specific parts of
>> the code? Let's claim we fix up all the precommit warnings in
>> .../solr/core. It'd feel a lot more like progress if we could then
>> have any _future_ precommit lint warnings in .../solr/core fail
>> precommit even if, say, .../solr/contrib still had a bunch.
>>
>> Ditto for compiler warnings etc. As you can tell, I'm looking for help
>> ;) Or even better "it's already in place if you'd just look"
>>
>> How this plays with different versions of Java I don't quite know. We
>> may run into some issues with Java 9 that weren't reported in Java 8
>> and the like, so I suspect turning failures on and off based on the JDK
>> might be necessary.
>>
>> Erick
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11916) new SortableTextField using docValues built from the original string input

2018-02-01 Thread Hoss Man (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hoss Man resolved SOLR-11916.
-
   Resolution: Fixed
Fix Version/s: 7.3
   master (8.0)

Thanks for the feedback, folks.

> new SortableTextField using docValues built from the original string input
> --
>
> Key: SOLR-11916
> URL: https://issues.apache.org/jira/browse/SOLR-11916
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11916.patch, SOLR-11916.patch
>
>
> I propose adding a new SortableTextField subclass that would functionally 
> work the same as TextField except:
>  * {{docValues="true|false"}} could be configured, with the default being 
> "true"
>  * The docValues would contain the original input values (just like StrField) 
> for sorting (or faceting)
>  ** By default, to protect users from excessively large docValues, only the 
> first 1024 characters of each field value would be used – but this could be 
> overridden with configuration.
> 
> Consider the following sample configuration:
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> 
>   
>...
>   
>   
>...
>   
> 
> {code}
> Given a document with a title of "Solr In Action"
> Users could:
>  * Search for individual (indexed) terms in the "title" field: 
> {{q=title:solr}}
>  * Sort documents by title ( {{sort=title asc}} ) such that this document's 
> sort value would be "Solr In Action"
> If another document had a "title" value that was longer than 1024 chars, then 
> the docValues would be built using only the first 1024 characters of the 
> value (unless the user modified the configuration)
> This would be functionally equivalent to the following existing configuration 
> - including the on disk index segments - except that the on disk DocValues 
> would refer directly to the "title" field, reducing the total number of 
> "field infos" in the index (which has a small impact on segment housekeeping 
> and merge times) and end users would not need to sort on an alternate 
> "title_string" field name - the original "title" field name would always be 
> used directly.
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> indexed="false" docValues="true" stored="false" multiValued="false"/>
> 
> {code}
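
For illustration, querying and sorting on such a field from SolrJ could look like the sketch below (client URL and collection name are assumptions):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class SortableTitleQuery {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/books").build()) {
      SolrQuery q = new SolrQuery("title:solr");  // matches individual analyzed terms
      q.addSort("title", SolrQuery.ORDER.asc);    // sorts on the docValues built from the original input
      System.out.println(client.query(q).getResults());
    }
  }
}
{code}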



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Cleaning up the code

2018-02-01 Thread Robert Muir
Yes it is important to do it incrementally.

You can override ant properties in specific modules, by defining them
to be something stricter before importing them from somewhere else.
For example, analysis/icu has no warnings, so override the default
javac.args at the very top, before it "imports" the rest of the build.

diff --git a/lucene/analysis/icu/build.xml b/lucene/analysis/icu/build.xml
   
+  
   

Now the build will fail if someone introduces a warning into that
module, and you can move on to the next one.
Quick and dirty explanation but you get the idea...


On Thu, Feb 1, 2018 at 1:34 PM, Erick Erickson  wrote:
> You've seen a couple of JIRAs about cleaning up precommit lint
> warnings, and Robert mentioned compiler warnings which I totally agree
> with.
>
> It's just a _lot_ of code to wade through, and it'll be depressing to
> fix up something and have similar stuff creep back in. It'll take
> quite some time to clean them _all_ up, so there'll be quite a window
> for backsliding.
>
> Is there a way to have precommit and build fail for specific parts of
> the code? Let's claim we fix up all the precommit warnings in
> .../solr/core. It'd feel a lot more like progress if we could then
> have any _future_ precommit lint warnings in .../solr/core fail
> precommit even if, say, .../solr/contrib still had a bunch.
>
> Ditto for compiler warnings etc. As you can tell, I'm looking for help
> ;) Or even better "it's already in place if you'd just look"
>
> How this plays with different versions of Java I don't quite know. We
> may run into some issues with Java 9 that weren't reported in Java 8
> and the like, so I suspect turning failures on and off based on the JDK
> might be necessary.
>
> Erick
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11916) new SortableTextField using docValues built from the original string input

2018-02-01 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349106#comment-16349106
 ] 

ASF subversion and git services commented on SOLR-11916:


Commit fb0e04e5bc0e79eb137e2e7944a2933a19163c35 in lucene-solr's branch 
refs/heads/branch_7x from Chris Hostetter
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fb0e04e ]

SOLR-11916: new SortableTextField which supports analysis/searching just like 
TextField, but also sorting/faceting just like StrField

(cherry picked from commit 95122e14481a4dd623e184ca261f8bf158fd3a7c)


> new SortableTextField using docValues built from the original string input
> --
>
> Key: SOLR-11916
> URL: https://issues.apache.org/jira/browse/SOLR-11916
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Reporter: Hoss Man
>Assignee: Hoss Man
>Priority: Major
> Attachments: SOLR-11916.patch, SOLR-11916.patch
>
>
> I propose adding a new SortableTextField subclass that would functionally 
> work the same as TextField except:
>  * {{docValues="true|false"}} could be configured, with the default being 
> "true"
>  * The docValues would contain the original input values (just like StrField) 
> for sorting (or faceting)
>  ** By default, to protect users from excessively large docValues, only the 
> first 1024 characters of each field value would be used – but this could be 
> overridden with configuration.
> 
> Consider the following sample configuration:
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> 
>   
>...
>   
>   
>...
>   
> 
> {code}
> Given a document with a title of "Solr In Action"
> Users could:
>  * Search for individual (indexed) terms in the "title" field: 
> {{q=title:solr}}
>  * Sort documents by title ( {{sort=title asc}} ) such that this document's 
> sort value would be "Solr In Action"
> If another document had a "title" value that was longer than 1024 chars, then 
> the docValues would be built using only the first 1024 characters of the 
> value (unless the user modified the configuration)
> This would be functionally equivalent to the following existing configuration 
> - including the on disk index segments - except that the on disk DocValues 
> would refer directly to the "title" field, reducing the total number of 
> "field infos" in the index (which has a small impact on segment housekeeping 
> and merge times) and end users would not need to sort on an alternate 
> "title_string" field name - the original "title" field name would always be 
> used directly.
> {code:java}
> indexed="true" docValues="true" stored="true" multiValued="false"/>
> indexed="false" docValues="true" stored="false" multiValued="false"/>
> 
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11344) Fix copyOfRange Stream Evaluator so the end index parameter can be equal to the length array

2018-02-01 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16349082#comment-16349082
 ] 

Cassandra Targett commented on SOLR-11344:
--

[~joel.bernstein], Maybe this one could also be resolved, or is there more to 
fix?

> Fix copyOfRange Stream Evaluator so the end index parameter can be equal to 
> the length array
> 
>
> Key: SOLR-11344
> URL: https://issues.apache.org/jira/browse/SOLR-11344
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Priority: Major
> Attachments: SOLR-11344.patch
>
>
> This was broken during the refactoring of the Stream Evaluators. The issue is 
> that bounds checking was added which requires the end index to be less than 
> the size of the array being copied. But since the copy is exclusive of the 
> end index, it must be allowed to equal the size of the array being copied, or 
> the last value in the array won't be copied.
> A test case will be added to cover this scenario.
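
For context, plain java.util.Arrays.copyOfRange already accepts an end index equal to the array length, which is the behavior the fixed bounds check should mirror:

{code:java}
import java.util.Arrays;

public class CopyOfRangeDemo {
  public static void main(String[] args) {
    double[] data = {1.0, 2.0, 3.0, 4.0};
    // An end index equal to the array length is legal and copies through the last element.
    double[] tail = Arrays.copyOfRange(data, 1, data.length);
    System.out.println(Arrays.toString(tail)); // [2.0, 3.0, 4.0]
  }
}
{code}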



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11364) Fields with useDocValuesAsStored=false never be returned in case of pattern matching

2018-02-01 Thread Cassandra Targett (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-11364:
-
Component/s: Schema and Analysis

> Fields with useDocValuesAsStored=false never be returned in case of pattern 
> matching
> 
>
> Key: SOLR-11364
> URL: https://issues.apache.org/jira/browse/SOLR-11364
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Schema and Analysis
>Reporter: Cao Manh Dat
>Priority: Major
>
> As [~dsmiley] pointed out in SOLR-8344
> bq. in calcDocValueFieldsForReturn. If fl=foo*,dvField and dvField has 
> useDocValuesAsStored=false then the code won't return dvField even though 
> it's been explicitly mentioned.
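
To make the reported combination concrete, a request of this shape triggers it (collection and field names here are hypothetical):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class FlPatternPlusDvField {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/mycollection").build()) {
      SolrQuery q = new SolrQuery("*:*");
      // dvField has useDocValuesAsStored=false; per this report it is dropped from the
      // response even though it is listed explicitly alongside the foo* pattern.
      q.setFields("foo*", "dvField");
      System.out.println(client.query(q).getResults());
    }
  }
}
{code}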



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Cleaning up the code

2018-02-01 Thread Erick Erickson
You've seen a couple of JIRAs about cleaning up precommit lint
warnings, and Robert mentioned compiler warnings which I totally agree
with.

It's just a _lot_ of code to wade through, and it'll be depressing to
fix up something and have similar stuff creep back in. It'll take
quite some time to clean them _all_ up, so there'll be quite a window
for backsliding.

Is there a way to have precommit and build fail for specific parts of
the code? Let's claim we fix up all the precommit warnings in
.../solr/core. It'd feel a lot more like progress if we could then
have any _future_ precommit lint warnings in .../solr/core fail
precommit even if, say, .../solr/contrib still had a bunch.

Ditto for compiler warnings etc. As you can tell, I'm looking for help
;) Or even better "it's already in place if you'd just look"

How this plays with different versions of Java I don't quite know. We
may run into some issues with Java 9 that weren't reported in Java 8
and the like, so I suspect turning failures on and off based on the JDK
might be necessary.

Erick

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


