[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4871 - Still Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4871/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
        at __randomizedtesting.SeedInfo.seed([7371A4478FC1FB8A:4AFF1D07A03E3274]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.junit.Assert.assertTrue(Assert.java:43)
        at org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:316)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:564)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 12519 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
   [junit4]   2> 291931 INFO  (SUITE-IndexSizeTriggerTest-seed#[7371A4478FC1FB8A]-worker) [] o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 2893 - Still Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2893/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

22 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([FBFB3B825D94F7D6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
        at org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestCloudRecovery

Error Message:
Could not find collection:collection1

Stack Trace:
java.lang.AssertionError: Could not find collection:collection1
at __randomizedtesting.SeedInfo.seed([FBFB3B825D94F7D6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
        at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
        at org.apache.solr.cloud.TestCloudRecovery.setupCluster(TestCloudRecovery.java:70)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at 

[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224133917
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateBlockTest.java 
---
@@ -0,0 +1,370 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.handler.component.RealTimeGetComponent;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class AtomicUpdateBlockTest extends SolrTestCaseJ4 {
+
+  private final static String VERSION = "_version_";
+  private static String PREVIOUS_ENABLE_UPDATE_LOG_VALUE;
+
+  @BeforeClass
+  public static void beforeTests() throws Exception {
+PREVIOUS_ENABLE_UPDATE_LOG_VALUE = 
System.getProperty("enable.update.log");
+System.setProperty("enable.update.log", "true");
+initCore("solrconfig-block-atomic-update.xml", "schema-nest.xml"); // 
use "nest" schema
+  }
+
+  @AfterClass
+  public static void afterTests() throws Exception {
+// restore enable.update.log
+System.setProperty("enable.update.log", 
PREVIOUS_ENABLE_UPDATE_LOG_VALUE);
+  }
+
+  @Before
+  public void before() {
+clearIndex();
+assertU(commit());
+  }
+
+  @Test
+  public void testMergeChildDoc() throws Exception {
+SolrInputDocument doc = new SolrInputDocument();
+doc.setField("id", "1");
+doc.setField("cat_ss", new String[]{"aaa", "ccc"});
+doc.setField("child", Collections.singletonList(sdoc("id", "2", 
"cat_ss", "child")));
+addDoc(adoc(doc), "nested-rtg");
+
+BytesRef rootDocId = new BytesRef("1");
+SolrCore core = h.getCore();
+SolrInputDocument block = RealTimeGetComponent.getInputDocument(core, 
rootDocId, true);
+// assert block doc has child docs
+assertTrue(block.containsKey("child"));
+
+assertJQ(req("q","id:1")
+,"/response/numFound==0"
+);
+
+// commit the changes
+assertU(commit());
+
+SolrInputDocument newChildDoc = sdoc("id", "3", "cat_ss", "child");
+SolrInputDocument addedDoc = sdoc("id", "1",
+"cat_ss", Collections.singletonMap("add", "bbb"),
+"child", Collections.singletonMap("add", sdocs(newChildDoc)));
+block = RealTimeGetComponent.getInputDocument(core, rootDocId, true);
+block.removeField(VERSION);
+SolrInputDocument preMergeDoc = new SolrInputDocument(block);
+AtomicUpdateDocumentMerger docMerger = new 
AtomicUpdateDocumentMerger(req());
+docMerger.merge(addedDoc, block);
+assertEquals("merged document should have the same id", 
preMergeDoc.getFieldValue("id"), block.getFieldValue("id"));
+assertDocContainsSubset(preMergeDoc, block);
+assertDocContainsSubset(addedDoc, block);
+assertDocContainsSubset(newChildDoc, (SolrInputDocument) ((List) 
block.getFieldValues("child")).get(1));
+assertEquals(doc.getFieldValue("id"), block.getFieldValue("id"));
+  }
+
+  @Test
+  public void testBlockAtomicAdd() throws Exception {
+
+SolrInputDocument doc = sdoc("id", "1",
+"cat_ss", new String[] {"aaa", "ccc"},
+"child1", sdoc("id", "2", "cat_ss", "child")
+);
+json(doc);
+addDoc(adoc(doc), "nested-rtg");
--- End diff --

Maybe the default chain in this config should have those URPs, and therefore we wouldn't need to have the tests refer to the URP chain.


---


[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224127559
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateBlockTest.java 
---
@@ -0,0 +1,370 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.handler.component.RealTimeGetComponent;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class AtomicUpdateBlockTest extends SolrTestCaseJ4 {
+
+  private final static String VERSION = "_version_";
+  private static String PREVIOUS_ENABLE_UPDATE_LOG_VALUE;
+
+  @BeforeClass
+  public static void beforeTests() throws Exception {
+PREVIOUS_ENABLE_UPDATE_LOG_VALUE = 
System.getProperty("enable.update.log");
+System.setProperty("enable.update.log", "true");
--- End diff --

The `solrconfig-block-atomic-update.xml` file is not toggled by this system property (perhaps others are).  Why set this system property?


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r223747918
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/AtomicUpdateDocumentMerger.java
 ---
@@ -442,5 +442,58 @@ protected void doRemoveRegex(SolrInputDocument toDoc, 
SolrInputField sif, Object
 }
 return patterns;
   }
+
+  private Object getNativeFieldValue(String fieldName, Object val) {
+if(isChildDoc(val)) {
+  return val;
+}
+SchemaField sf = schema.getField(fieldName);
+return sf.getType().toNativeType(val);
+  }
+
+  private static boolean isChildDoc(Object obj) {
+if(!(obj instanceof Collection)) {
+  return obj instanceof SolrDocumentBase;
+}
+Collection objValues = (Collection) obj;
+if(objValues.size() == 0) {
+  return false;
+}
+return objValues.iterator().next() instanceof SolrDocumentBase;
+  }
+
+  private void removeObj(Collection original, Object toRemove, String 
fieldName) {
+if(isChildDoc(toRemove)) {
+  removeChildDoc(original, (SolrInputDocument) toRemove);
+} else {
+  original.remove(getNativeFieldValue(fieldName, toRemove));
+}
+  }
+
+  private static void removeChildDoc(Collection original, 
SolrInputDocument docToRemove) {
+for(SolrInputDocument doc: (Collection) original) {
+  if(isDerivedFromDoc(doc, docToRemove)) {
+original.remove(doc);
+return;
+  }
+}
+  }
+
+  /**
+   *
+   * @param fullDoc the document to be tested
+   * @param subDoc the sub document that should be a subset of fullDoc
+   * @return whether subDoc is a subset of fullDoc
+   */
+  private static boolean isDerivedFromDoc(SolrInputDocument fullDoc, 
SolrInputDocument subDoc) {
+for(SolrInputField subSif: subDoc) {
+  String fieldName = subSif.getName();
+  if(!fullDoc.containsKey(fieldName)) return false;
--- End diff --

This results in a double-lookup of the values together with the next line.  Remove this line and simply do a null check after the next one.
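
A minimal sketch of what I mean (relying on SolrInputDocument.getFieldValues returning null when the field is absent):

    // one lookup instead of containsKey followed by getFieldValues
    Collection<Object> fieldValues = fullDoc.getFieldValues(fieldName);
    if (fieldValues == null) return false;   // field absent from fullDoc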


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224306158
  
--- Diff: 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java 
---
@@ -639,12 +650,30 @@ public static SolrInputDocument 
getInputDocument(SolrCore core, BytesRef idBytes
   sid = new SolrInputDocument();
 } else {
   Document luceneDocument = docFetcher.doc(docid);
-  sid = toSolrInputDocument(luceneDocument, 
core.getLatestSchema());
+  sid = toSolrInputDocument(luceneDocument, schema);
 }
-if (onlyTheseNonStoredDVs != null) {
-  docFetcher.decorateDocValueFields(sid, docid, 
onlyTheseNonStoredDVs);
-} else {
-  docFetcher.decorateDocValueFields(sid, docid, 
docFetcher.getNonStoredDVsWithoutCopyTargets());
+ensureDocDecorated(onlyTheseNonStoredDVs, sid, docid, docFetcher, 
resolveBlock || schema.hasExplicitField(IndexSchema.NEST_PATH_FIELD_NAME));
+SolrInputField rootField;
+if(resolveBlock && schema.isUsableForChildDocs() && (rootField = 
sid.getField(IndexSchema.ROOT_FIELD_NAME))!=null) {
+  // doc is part of a nested structure
+  ModifiableSolrParams params = new ModifiableSolrParams()
+  .set("q", 
core.getLatestSchema().getUniqueKeyField().getName()+ ":" 
+rootField.getFirstValue())
--- End diff --

It seems the LocalSolrQueryRequest here is a dummy needed to satisfy some of the methods below.  This threw me; there should be comments and/or a choice of var names (e.g. dummyReq) to reflect this.  "q" isn't needed; just the "fl".  It seems we don't even need the "fl" here since that can be supplied as the first parameter to SolrReturnFields, which seems better if it works.
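
Roughly the shape I have in mind (a sketch, not tested; the dummyReq name, the "*" field list, and the SolrReturnFields(String, SolrQueryRequest) constructor are my assumptions):

    // make it obvious this request only exists to satisfy the APIs below
    SolrQueryRequest dummyReq = new LocalSolrQueryRequest(core, new ModifiableSolrParams());
    // no "q"/"fl" request params needed; pass the field list straight to SolrReturnFields
    ReturnFields returnFields = new SolrReturnFields("*", dummyReq);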


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224306892
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java
 ---
@@ -1360,24 +1385,47 @@ boolean getUpdatedDocument(AddUpdateCommand cmd, 
long versionOnUpdate) throws IO
 }
 
 // full (non-inplace) atomic update
+final boolean isNestedSchema = req.getSchema().isUsableForChildDocs();
 SolrInputDocument sdoc = cmd.getSolrInputDocument();
 BytesRef id = cmd.getIndexedId();
-SolrInputDocument oldDoc = 
RealTimeGetComponent.getInputDocument(cmd.getReq().getCore(), id);
+SolrInputDocument blockDoc = 
RealTimeGetComponent.getInputDocument(cmd.getReq().getCore(), id, null,
+false, null, true, true);
 
-if (oldDoc == null) {
-  // create a new doc by default if an old one wasn't found
-  if (versionOnUpdate <= 0) {
-oldDoc = new SolrInputDocument();
-  } else {
+if (blockDoc == null) {
+  if (versionOnUpdate > 0) {
 // could just let the optimistic locking throw the error
 throw new SolrException(ErrorCode.CONFLICT, "Document not found 
for update.  id=" + cmd.getPrintableId());
   }
 } else {
-  oldDoc.remove(CommonParams.VERSION_FIELD);
+  blockDoc.remove(CommonParams.VERSION_FIELD);
 }
 
 
-cmd.solrDoc = docMerger.merge(sdoc, oldDoc);
+SolrInputDocument mergedDoc;
+if(idField == null || blockDoc == null) {
+  // create a new doc by default if an old one wasn't found
+  mergedDoc = docMerger.merge(sdoc, new SolrInputDocument());
+} else {
+  if(isNestedSchema && 
req.getSchema().hasExplicitField(IndexSchema.NEST_PATH_FIELD_NAME) &&
+  blockDoc.containsKey(IndexSchema.ROOT_FIELD_NAME) && 
!id.utf8ToString().equals(blockDoc.getFieldValue(IndexSchema.ROOT_FIELD_NAME))) 
{
--- End diff --

I don't think we can assume id.utf8ToString() is correct.  I think we have to consult the corresponding FieldType to get the "external value".  Also, cast blockDoc.getFieldValue(...) to a String to make it clear we expect it to be one.
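
Roughly (a sketch; indexedToReadable is my assumption for the right conversion on the uniqueKey FieldType):

    SchemaField uniqueKey = req.getSchema().getUniqueKeyField();
    String externalId = uniqueKey.getType().indexedToReadable(id.utf8ToString());
    String rootId = (String) blockDoc.getFieldValue(IndexSchema.ROOT_FIELD_NAME); // expect a String
    boolean isChildDoc = !externalId.equals(rootId);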


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r223747455
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/AtomicUpdateDocumentMerger.java
 ---
@@ -442,5 +442,58 @@ protected void doRemoveRegex(SolrInputDocument toDoc, 
SolrInputField sif, Object
 }
 return patterns;
   }
+
+  private Object getNativeFieldValue(String fieldName, Object val) {
+if(isChildDoc(val)) {
+  return val;
+}
+SchemaField sf = schema.getField(fieldName);
+return sf.getType().toNativeType(val);
+  }
+
+  private static boolean isChildDoc(Object obj) {
+if(!(obj instanceof Collection)) {
+  return obj instanceof SolrDocumentBase;
+}
+Collection objValues = (Collection) obj;
+if(objValues.size() == 0) {
+  return false;
+}
+return objValues.iterator().next() instanceof SolrDocumentBase;
+  }
+
+  private void removeObj(Collection original, Object toRemove, String 
fieldName) {
+if(isChildDoc(toRemove)) {
+  removeChildDoc(original, (SolrInputDocument) toRemove);
+} else {
+  original.remove(getNativeFieldValue(fieldName, toRemove));
+}
+  }
+
+  private static void removeChildDoc(Collection original, 
SolrInputDocument docToRemove) {
+for(SolrInputDocument doc: (Collection) original) {
+  if(isDerivedFromDoc(doc, docToRemove)) {
+original.remove(doc);
+return;
+  }
+}
+  }
+
+  /**
+   *
+   * @param fullDoc the document to be tested
+   * @param subDoc the sub document that should be a subset of fullDoc
+   * @return whether subDoc is a subset of fullDoc
+   */
+  private static boolean isDerivedFromDoc(SolrInputDocument fullDoc, 
SolrInputDocument subDoc) {
+for(SolrInputField subSif: subDoc) {
+  String fieldName = subSif.getName();
+  if(!fullDoc.containsKey(fieldName)) return false;
+  Collection fieldValues = fullDoc.getFieldValues(fieldName);
+  if(fieldValues.size() < subSif.getValueCount()) return false;
+  
if(!fullDoc.getFieldValues(fieldName).containsAll(subSif.getValues())) return 
false;
--- End diff --

`fullDoc.getFieldValues(fieldName)` on this line can be replaced by `fieldValues`, since we already have the values.  And the count check on the previous line is unnecessary, since the containsAll check on this line would fail anyway.
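
i.e. roughly (untested sketch):

    Collection<Object> fieldValues = fullDoc.getFieldValues(fieldName);
    if (fieldValues == null) return false;                           // field absent
    if (!fieldValues.containsAll(subSif.getValues())) return false;  // also covers the count case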


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224307768
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java
 ---
@@ -1360,24 +1385,47 @@ boolean getUpdatedDocument(AddUpdateCommand cmd, 
long versionOnUpdate) throws IO
 }
 
 // full (non-inplace) atomic update
+final boolean isNestedSchema = req.getSchema().isUsableForChildDocs();
 SolrInputDocument sdoc = cmd.getSolrInputDocument();
 BytesRef id = cmd.getIndexedId();
-SolrInputDocument oldDoc = 
RealTimeGetComponent.getInputDocument(cmd.getReq().getCore(), id);
+SolrInputDocument blockDoc = 
RealTimeGetComponent.getInputDocument(cmd.getReq().getCore(), id, null,
+false, null, true, true);
 
-if (oldDoc == null) {
-  // create a new doc by default if an old one wasn't found
-  if (versionOnUpdate <= 0) {
-oldDoc = new SolrInputDocument();
-  } else {
+if (blockDoc == null) {
+  if (versionOnUpdate > 0) {
 // could just let the optimistic locking throw the error
 throw new SolrException(ErrorCode.CONFLICT, "Document not found 
for update.  id=" + cmd.getPrintableId());
   }
 } else {
-  oldDoc.remove(CommonParams.VERSION_FIELD);
+  blockDoc.remove(CommonParams.VERSION_FIELD);
 }
 
 
-cmd.solrDoc = docMerger.merge(sdoc, oldDoc);
+SolrInputDocument mergedDoc;
+if(idField == null || blockDoc == null) {
+  // create a new doc by default if an old one wasn't found
+  mergedDoc = docMerger.merge(sdoc, new SolrInputDocument());
+} else {
+  if(isNestedSchema && 
req.getSchema().hasExplicitField(IndexSchema.NEST_PATH_FIELD_NAME) &&
+  blockDoc.containsKey(IndexSchema.ROOT_FIELD_NAME) && 
!id.utf8ToString().equals(blockDoc.getFieldValue(IndexSchema.ROOT_FIELD_NAME))) 
{
+SolrInputDocument oldDoc = 
RealTimeGetComponent.getInputDocument(cmd.getReq().getCore(), id, null,
+false, null, true, false);
+mergedDoc = docMerger.merge(sdoc, oldDoc);
+String docPath = (String) 
mergedDoc.getFieldValue(IndexSchema.NEST_PATH_FIELD_NAME);
+List docPaths = StrUtils.splitSmart(docPath, 
PATH_SEP_CHAR);
+SolrInputField replaceDoc = 
blockDoc.getField(docPaths.remove(0).replaceAll(PATH_SEP_CHAR + "|" + 
NUM_SEP_CHAR, ""));
--- End diff --

The logic here (not just this line) is non-obvious to me, and there are no comments.  Please add comments and maybe refactor out a method.  The use of replaceAll with a regexp is suspicious to me.  None of the tests you added triggered a breakpoint within the docPaths loop below, so this needs more testing, or there may be an error.


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224306195
  
--- Diff: 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java 
---
@@ -661,6 +690,21 @@ public static SolrInputDocument 
getInputDocument(SolrCore core, BytesRef idBytes
 return sid;
   }
 
+  private static void ensureDocDecorated(Set 
onlyTheseNonStoredDVs, SolrDocumentBase doc, int docid, SolrDocumentFetcher 
docFetcher) throws IOException {
--- End diff --

I suggest renaming these methods `ensureDocDecorated` since it's what it calls.


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224209409
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateBlockTest.java 
---
@@ -0,0 +1,370 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.handler.component.RealTimeGetComponent;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class AtomicUpdateBlockTest extends SolrTestCaseJ4 {
+
+  private final static String VERSION = "_version_";
+  private static String PREVIOUS_ENABLE_UPDATE_LOG_VALUE;
+
+  @BeforeClass
+  public static void beforeTests() throws Exception {
+PREVIOUS_ENABLE_UPDATE_LOG_VALUE = 
System.getProperty("enable.update.log");
+System.setProperty("enable.update.log", "true");
+initCore("solrconfig-block-atomic-update.xml", "schema-nest.xml"); // 
use "nest" schema
+  }
+
+  @AfterClass
+  public static void afterTests() throws Exception {
+// restore enable.update.log
+System.setProperty("enable.update.log", 
PREVIOUS_ENABLE_UPDATE_LOG_VALUE);
+  }
+
+  @Before
+  public void before() {
+clearIndex();
+assertU(commit());
+  }
+
+  @Test
+  public void testMergeChildDoc() throws Exception {
+SolrInputDocument doc = new SolrInputDocument();
+doc.setField("id", "1");
+doc.setField("cat_ss", new String[]{"aaa", "ccc"});
+doc.setField("child", Collections.singletonList(sdoc("id", "2", 
"cat_ss", "child")));
+addDoc(adoc(doc), "nested-rtg");
+
+BytesRef rootDocId = new BytesRef("1");
+SolrCore core = h.getCore();
+SolrInputDocument block = RealTimeGetComponent.getInputDocument(core, 
rootDocId, true);
+// assert block doc has child docs
+assertTrue(block.containsKey("child"));
+
+assertJQ(req("q","id:1")
+,"/response/numFound==0"
+);
+
+// commit the changes
+assertU(commit());
+
+SolrInputDocument newChildDoc = sdoc("id", "3", "cat_ss", "child");
+SolrInputDocument addedDoc = sdoc("id", "1",
+"cat_ss", Collections.singletonMap("add", "bbb"),
+"child", Collections.singletonMap("add", sdocs(newChildDoc)));
+block = RealTimeGetComponent.getInputDocument(core, rootDocId, true);
+block.removeField(VERSION);
+SolrInputDocument preMergeDoc = new SolrInputDocument(block);
+AtomicUpdateDocumentMerger docMerger = new 
AtomicUpdateDocumentMerger(req());
+docMerger.merge(addedDoc, block);
+assertEquals("merged document should have the same id", 
preMergeDoc.getFieldValue("id"), block.getFieldValue("id"));
+assertDocContainsSubset(preMergeDoc, block);
+assertDocContainsSubset(addedDoc, block);
+assertDocContainsSubset(newChildDoc, (SolrInputDocument) ((List) 
block.getFieldValues("child")).get(1));
+assertEquals(doc.getFieldValue("id"), block.getFieldValue("id"));
+  }
+
+  @Test
+  public void testBlockAtomicAdd() throws Exception {
+
+SolrInputDocument doc = sdoc("id", "1",
+"cat_ss", new String[] {"aaa", "ccc"},
+"child1", sdoc("id", "2", "cat_ss", "child")
+);
+json(doc);
+addDoc(adoc(doc), "nested-rtg");
+
+BytesRef rootDocId = new BytesRef("1");
+SolrCore core = h.getCore();
+SolrInputDocument block = RealTimeGetComponent.getInputDocument(core, 
rootDocId, true);
+// assert block doc has 

[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224283996
  
--- Diff: 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java 
---
@@ -609,9 +618,10 @@ public static SolrInputDocument 
getInputDocument(SolrCore core, BytesRef idBytes
* @param resolveFullDocument In case the document is fetched from the 
tlog, it could only be a partial document if the last update
*  was an in-place update. In that case, should this 
partial document be resolved to a full document (by following
*  back prevPointer/prevVersion)?
+   * @param resolveBlock Check whether the document is part of a block. If 
so, return the whole block.
*/
   public static SolrInputDocument getInputDocument(SolrCore core, BytesRef 
idBytes, AtomicLong versionReturned, boolean avoidRetrievingStoredFields,
-  Set onlyTheseNonStoredDVs, boolean resolveFullDocument) 
throws IOException {
+  Set onlyTheseNonStoredDVs, boolean resolveFullDocument, 
boolean resolveBlock) throws IOException {
 SolrInputDocument sid = null;
--- End diff --

It would be helpful to add a javadoc comment to say that if the id refers to a nested document (which isn't known up-front), then it'll never be found in the tlog (at least if you follow the rules of nested docs).  Also, perhaps the parameter "resolveBlock" should be "resolveToRootDocument", since I think the "root" terminology is more widely used (it's even in the schema), whereas "block" is not so much.  If you disagree, a compromise may be to use both "root" and "Block" together -- "resolveRootBlock".


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224300831
  
--- Diff: 
solr/core/src/java/org/apache/solr/handler/component/RealTimeGetComponent.java 
---
@@ -639,12 +650,30 @@ public static SolrInputDocument 
getInputDocument(SolrCore core, BytesRef idBytes
   sid = new SolrInputDocument();
 } else {
   Document luceneDocument = docFetcher.doc(docid);
-  sid = toSolrInputDocument(luceneDocument, 
core.getLatestSchema());
+  sid = toSolrInputDocument(luceneDocument, schema);
 }
-if (onlyTheseNonStoredDVs != null) {
-  docFetcher.decorateDocValueFields(sid, docid, 
onlyTheseNonStoredDVs);
-} else {
-  docFetcher.decorateDocValueFields(sid, docid, 
docFetcher.getNonStoredDVsWithoutCopyTargets());
+ensureDocDecorated(onlyTheseNonStoredDVs, sid, docid, docFetcher, 
resolveBlock || schema.hasExplicitField(IndexSchema.NEST_PATH_FIELD_NAME));
+SolrInputField rootField;
--- End diff --

No big deal to simply initialize rootField up front.  You are doing it as an expression with a side effect below, which is needlessly awkward.
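
i.e. (sketch):

    SolrInputField rootField = sid.getField(IndexSchema.ROOT_FIELD_NAME);  // may be null
    if (resolveBlock && schema.isUsableForChildDocs() && rootField != null) {
      // doc is part of a nested structure; resolve the whole block as below
    }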


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224130315
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateBlockTest.java 
---
@@ -0,0 +1,370 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.handler.component.RealTimeGetComponent;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class AtomicUpdateBlockTest extends SolrTestCaseJ4 {
+
+  private final static String VERSION = "_version_";
+  private static String PREVIOUS_ENABLE_UPDATE_LOG_VALUE;
+
+  @BeforeClass
+  public static void beforeTests() throws Exception {
+PREVIOUS_ENABLE_UPDATE_LOG_VALUE = 
System.getProperty("enable.update.log");
+System.setProperty("enable.update.log", "true");
+initCore("solrconfig-block-atomic-update.xml", "schema-nest.xml"); // 
use "nest" schema
+  }
+
+  @AfterClass
+  public static void afterTests() throws Exception {
+// restore enable.update.log
+System.setProperty("enable.update.log", 
PREVIOUS_ENABLE_UPDATE_LOG_VALUE);
+  }
+
+  @Before
+  public void before() {
+clearIndex();
+assertU(commit());
+  }
+
+  @Test
+  public void testMergeChildDoc() throws Exception {
+SolrInputDocument doc = new SolrInputDocument();
+doc.setField("id", "1");
+doc.setField("cat_ss", new String[]{"aaa", "ccc"});
+doc.setField("child", Collections.singletonList(sdoc("id", "2", 
"cat_ss", "child")));
+addDoc(adoc(doc), "nested-rtg");
+
+BytesRef rootDocId = new BytesRef("1");
+SolrCore core = h.getCore();
+SolrInputDocument block = RealTimeGetComponent.getInputDocument(core, 
rootDocId, true);
+// assert block doc has child docs
+assertTrue(block.containsKey("child"));
+
+assertJQ(req("q","id:1")
+,"/response/numFound==0"
+);
+
+// commit the changes
+assertU(commit());
+
+SolrInputDocument newChildDoc = sdoc("id", "3", "cat_ss", "child");
+SolrInputDocument addedDoc = sdoc("id", "1",
+"cat_ss", Collections.singletonMap("add", "bbb"),
+"child", Collections.singletonMap("add", sdocs(newChildDoc)));
+block = RealTimeGetComponent.getInputDocument(core, rootDocId, true);
+block.removeField(VERSION);
+SolrInputDocument preMergeDoc = new SolrInputDocument(block);
+AtomicUpdateDocumentMerger docMerger = new 
AtomicUpdateDocumentMerger(req());
+docMerger.merge(addedDoc, block);
--- End diff --

It seems the point of this test is to test AtomicUpdateDocumentMerger.merge?  Then why even index anything at all (the first half of this test)?  Simply create the documents directly, call the merge method, and test the result.
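
Something along these lines (a sketch with made-up field values):

    SolrInputDocument existing = sdoc("id", "1",
        "cat_ss", new String[] {"aaa", "ccc"},
        "child", sdocs(sdoc("id", "2", "cat_ss", "child")));
    SolrInputDocument atomic = sdoc("id", "1",
        "cat_ss", Collections.singletonMap("add", "bbb"));
    AtomicUpdateDocumentMerger docMerger = new AtomicUpdateDocumentMerger(req());
    docMerger.merge(atomic, existing);   // merge in place; no indexing or RTG round-trip needed
    assertEquals("1", existing.getFieldValue("id"));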


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r223749656
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/DistributedUpdateProcessor.java
 ---
@@ -111,6 +114,8 @@
   public static final String DISTRIB_FROM = "distrib.from";
   public static final String DISTRIB_INPLACE_PREVVERSION = 
"distrib.inplace.prevversion";
   private static final String TEST_DISTRIB_SKIP_SERVERS = 
"test.distrib.skip.servers";
+  private static final char PATH_SEP_CHAR = '/';
--- End diff --

Please don't create frivolous constants like this.


---




[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224131263
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateBlockTest.java 
---
@@ -0,0 +1,370 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.handler.component.RealTimeGetComponent;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class AtomicUpdateBlockTest extends SolrTestCaseJ4 {
+
+  private final static String VERSION = "_version_";
+  private static String PREVIOUS_ENABLE_UPDATE_LOG_VALUE;
+
+  @BeforeClass
+  public static void beforeTests() throws Exception {
+PREVIOUS_ENABLE_UPDATE_LOG_VALUE = 
System.getProperty("enable.update.log");
+System.setProperty("enable.update.log", "true");
+initCore("solrconfig-block-atomic-update.xml", "schema-nest.xml"); // 
use "nest" schema
+  }
+
+  @AfterClass
+  public static void afterTests() throws Exception {
+// restore enable.update.log
+System.setProperty("enable.update.log", 
PREVIOUS_ENABLE_UPDATE_LOG_VALUE);
+  }
+
+  @Before
+  public void before() {
+clearIndex();
+assertU(commit());
+  }
+
+  @Test
+  public void testMergeChildDoc() throws Exception {
+SolrInputDocument doc = new SolrInputDocument();
+doc.setField("id", "1");
+doc.setField("cat_ss", new String[]{"aaa", "ccc"});
+doc.setField("child", Collections.singletonList(sdoc("id", "2", 
"cat_ss", "child")));
+addDoc(adoc(doc), "nested-rtg");
+
+BytesRef rootDocId = new BytesRef("1");
+SolrCore core = h.getCore();
+SolrInputDocument block = RealTimeGetComponent.getInputDocument(core, 
rootDocId, true);
+// assert block doc has child docs
+assertTrue(block.containsKey("child"));
+
+assertJQ(req("q","id:1")
+,"/response/numFound==0"
+);
+
+// commit the changes
+assertU(commit());
+
+SolrInputDocument newChildDoc = sdoc("id", "3", "cat_ss", "child");
+SolrInputDocument addedDoc = sdoc("id", "1",
+"cat_ss", Collections.singletonMap("add", "bbb"),
+"child", Collections.singletonMap("add", sdocs(newChildDoc)));
+block = RealTimeGetComponent.getInputDocument(core, rootDocId, true);
+block.removeField(VERSION);
+SolrInputDocument preMergeDoc = new SolrInputDocument(block);
+AtomicUpdateDocumentMerger docMerger = new 
AtomicUpdateDocumentMerger(req());
+docMerger.merge(addedDoc, block);
+assertEquals("merged document should have the same id", 
preMergeDoc.getFieldValue("id"), block.getFieldValue("id"));
+assertDocContainsSubset(preMergeDoc, block);
+assertDocContainsSubset(addedDoc, block);
+assertDocContainsSubset(newChildDoc, (SolrInputDocument) ((List) 
block.getFieldValues("child")).get(1));
+assertEquals(doc.getFieldValue("id"), block.getFieldValue("id"));
+  }
+
+  @Test
+  public void testBlockAtomicAdd() throws Exception {
+
+SolrInputDocument doc = sdoc("id", "1",
+"cat_ss", new String[] {"aaa", "ccc"},
+"child1", sdoc("id", "2", "cat_ss", "child")
--- End diff --

I think it'd be easier to comprehend tests involving nested documents if the ID for a nested document somehow looked different.  For example, for this child doc, use "1.1" to mean the first child doc of parent doc 1.  The second would be "1.2".  WDYT?


---


[GitHub] lucene-solr pull request #455: SOLR-12638

2018-10-10 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/455#discussion_r224131571
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateBlockTest.java 
---
@@ -0,0 +1,370 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.util.Arrays;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.List;
+
+import org.apache.lucene.util.BytesRef;
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.core.SolrCore;
+import org.apache.solr.handler.component.RealTimeGetComponent;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+public class AtomicUpdateBlockTest extends SolrTestCaseJ4 {
+
+  private final static String VERSION = "_version_";
+  private static String PREVIOUS_ENABLE_UPDATE_LOG_VALUE;
+
+  @BeforeClass
+  public static void beforeTests() throws Exception {
+PREVIOUS_ENABLE_UPDATE_LOG_VALUE = 
System.getProperty("enable.update.log");
+System.setProperty("enable.update.log", "true");
+initCore("solrconfig-block-atomic-update.xml", "schema-nest.xml"); // 
use "nest" schema
+  }
+
+  @AfterClass
+  public static void afterTests() throws Exception {
+// restore enable.update.log
+System.setProperty("enable.update.log", 
PREVIOUS_ENABLE_UPDATE_LOG_VALUE);
+  }
+
+  @Before
+  public void before() {
+clearIndex();
+assertU(commit());
+  }
+
+  @Test
+  public void testMergeChildDoc() throws Exception {
+SolrInputDocument doc = new SolrInputDocument();
+doc.setField("id", "1");
+doc.setField("cat_ss", new String[]{"aaa", "ccc"});
+doc.setField("child", Collections.singletonList(sdoc("id", "2", 
"cat_ss", "child")));
+addDoc(adoc(doc), "nested-rtg");
+
+BytesRef rootDocId = new BytesRef("1");
+SolrCore core = h.getCore();
+SolrInputDocument block = RealTimeGetComponent.getInputDocument(core, 
rootDocId, true);
+// assert block doc has child docs
+assertTrue(block.containsKey("child"));
+
+assertJQ(req("q","id:1")
+,"/response/numFound==0"
+);
+
+// commit the changes
+assertU(commit());
+
+SolrInputDocument newChildDoc = sdoc("id", "3", "cat_ss", "child");
+SolrInputDocument addedDoc = sdoc("id", "1",
+"cat_ss", Collections.singletonMap("add", "bbb"),
+"child", Collections.singletonMap("add", sdocs(newChildDoc)));
+block = RealTimeGetComponent.getInputDocument(core, rootDocId, true);
+block.removeField(VERSION);
+SolrInputDocument preMergeDoc = new SolrInputDocument(block);
+AtomicUpdateDocumentMerger docMerger = new 
AtomicUpdateDocumentMerger(req());
+docMerger.merge(addedDoc, block);
+assertEquals("merged document should have the same id", 
preMergeDoc.getFieldValue("id"), block.getFieldValue("id"));
+assertDocContainsSubset(preMergeDoc, block);
+assertDocContainsSubset(addedDoc, block);
+assertDocContainsSubset(newChildDoc, (SolrInputDocument) ((List) 
block.getFieldValues("child")).get(1));
+assertEquals(doc.getFieldValue("id"), block.getFieldValue("id"));
+  }
+
+  @Test
+  public void testBlockAtomicAdd() throws Exception {
+
+SolrInputDocument doc = sdoc("id", "1",
+"cat_ss", new String[] {"aaa", "ccc"},
+"child1", sdoc("id", "2", "cat_ss", "child")
+);
+json(doc);
--- End diff --

Accidentally added this json line?


---




Re: Funny constants in MaxMetric class

2018-10-10 Thread Gus Heck
Once I came back to actually check it out, I of course realized that it's the same as Long.MIN_VALUE ... just harder to read at a glance, and the -Long.MIN_VALUE version gets flagged by the IDE for numeric overflow. Why not just use Long.MIN_VALUE?
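
A tiny illustration of the overflow (plain Java, nothing Solr-specific):

    // Negating Long.MIN_VALUE overflows: |Long.MIN_VALUE| has no positive long representation,
    // so the result wraps back around to Long.MIN_VALUE itself.
    long sentinel = -Long.MIN_VALUE;
    System.out.println(sentinel == Long.MIN_VALUE);   // prints true
    // Writing Long.MIN_VALUE directly says the same thing without the IDE's overflow warning.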

On Tue, Oct 9, 2018 at 10:31 PM Gus Heck  wrote:

>
> https://github.com/apache/lucene-solr/blob/master/solr/solrj/src/java/org/apache/solr/client/solrj/io/stream/metrics/MaxMetric.java#L28
>
> Seems like the default value for longs in the MaxMetric is probably not
> right?
>
> --
> http://www.the111shift.com
>


-- 
http://www.the111shift.com


[jira] [Commented] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas

2018-10-10 Thread Shalin Shekhar Mangar (JIRA)


[ https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645914#comment-16645914 ]

Shalin Shekhar Mangar commented on SOLR-12739:
--

Thanks Steve. This one wasn't 100% reproducible. It breaks due to bad assumptions about the cluster layout. I changed the test to use the legacy assignment strategy because we don't necessarily need the new strategy to test the feature itself.

{code}
SOLR-12739: Use legacy assignment in AutoAddReplicasPlanActionTest

master: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/971a0e3f
branch_7x: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/d921fe50
{code}

> Make autoscaling policy based replica placement the default strategy for 
> placing replicas
> -
>
> Key: SOLR-12739
> URL: https://issues.apache.org/jira/browse/SOLR-12739
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, 
> SOLR-12739.patch, SOLR-12739.patch
>
>
> Today the default placement strategy is the same one used since Solr 4.x, 
> which is to select nodes in a round-robin fashion. I propose to make the 
> autoscaling policy based replica placement the default policy for placing 
> replicas.
> This is related to SOLR-12648 where even though we have default cluster 
> preferences, we don't use them unless a policy is also configured.







[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-12-ea+12) - Build # 830 - Failure!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/830/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard

Error Message:
Could not find collection : deleteshard_test

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : deleteshard_test
        at __randomizedtesting.SeedInfo.seed([8C6EDDB315F284EC:2C7496E8B182CFE0]:0)
        at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
        at org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
        at org.apache.solr.cloud.DeleteShardTest.testDirectoryCleanupAfterDeleteShard(DeleteShardTest.java:114)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
        at java.base/java.lang.Thread.run(Thread.java:835)




Build Log:
[...truncated 13892 lines...]
   [junit4] Suite: org.apache.solr.cloud.DeleteShardTest
   [junit4]   2> 2131464 INFO  

[jira] [Created] (SOLR-12850) SOLR Unaware When Index Has Corrupt Checksum

2018-10-10 Thread Stephen Lewis Bianamara (JIRA)
Stephen Lewis Bianamara created SOLR-12850:
--

 Summary: SOLR Unaware When Index Has Corrupt Checksum
 Key: SOLR-12850
 URL: https://issues.apache.org/jira/browse/SOLR-12850
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6.2
Reporter: Stephen Lewis Bianamara


If a SOLR node gets a corrupted checksum in its index, it will still report as 
"healthy", but all writes routed to that node will fail, leading to a total 
write outage that is completely silent on the server side.

 

I think this condition should be reported in the cluster status. "Down" seems 
appropriate, and it should trigger a recovery, though perhaps a new category 
such as "CorruptIndex" would be more appropriate. However it is surfaced, I 
believe the right response is to (a) release the leader lock if held, and (b) 
once a new leader is elected, copy that leader's index.
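
One way a node could detect this condition is a checksum-only pass with Lucene's 
CheckIndex. The sketch below is illustrative only; how and when Solr would run it, 
and how the result would feed into cluster state, is exactly the open question here:

{code:java}
import java.nio.file.Paths;

import org.apache.lucene.index.CheckIndex;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

// Standalone sketch: verify per-file checksums of a core's index directory.
// The trigger point and the wiring into cluster status are assumptions.
public class IndexChecksumProbe {
  public static void main(String[] args) throws Exception {
    try (Directory dir = FSDirectory.open(Paths.get(args[0]));
         CheckIndex checker = new CheckIndex(dir)) {
      checker.setChecksumsOnly(true);          // verify checksums only, skip the slower full checks
      CheckIndex.Status status = checker.checkIndex();
      if (!status.clean) {
        // Here a node could mark itself "Down" (or a new "CorruptIndex" state)
        // instead of silently failing writes.
        System.err.println("Index is corrupt, segments file: " + status.segmentsFileName);
        System.exit(1);
      }
      System.out.println("Index checksums OK");
    }
  }
}
{code}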

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23008 - Still Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23008/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

7 tests failed.
FAILED:  org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments

Error Message:
expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([6F3ABCFBA53E5C26:AAF3AA097D5E0691]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:33369_solr, 
127.0.0.1:34545_solr, 127.0.0.1:35015_solr] Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/12)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-11) - Build # 7562 - Still unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7562/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

11 tests failed.
FAILED:  org.apache.solr.cloud.TestPullReplica.testKillPullReplica

Error Message:
IOException occured when talking to server at: https://127.0.0.1:63891/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:63891/solr
at 
__randomizedtesting.SeedInfo.seed([B29B28EF8175D03:87D8AE1B58A6BC3B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.cloud.TestPullReplica.tearDown(TestPullReplica.java:116)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:993)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Comment Edited] (SOLR-12799) Allow Authentication Plugins to easily intercept internode requests

2018-10-10 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645800#comment-16645800
 ] 

Noble Paul edited comment on SOLR-12799 at 10/11/18 1:25 AM:
-

I find it strange that we have to explicitly pass the Principal along in 
{{SolrCmdDistributor}} and {{HttpShardHandler#submit()}}. It is supposed to be 
copied automatically for any thread pool using {{MDCAwareThreadPoolExecutor}}. 
I haven't done a full review, but this kind of stands out.


was (Author: noble.paul):
I'm doing it now

> Allow Authentication Plugins to easily intercept internode requests
> ---
>
> Key: SOLR-12799
> URL: https://issues.apache.org/jira/browse/SOLR-12799
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Solr security framework currently allows a plugin to declare statically by 
> implementing the {{HttpClientBuilderPlugin}} interface whether it will handle 
> internode requests. If it implements the interface, the plugin MUST handle 
> ALL internode requests, even requests originating from Solr itself. Likewise, 
> if a plugin does not implement the interface, ALL requests will be 
> authenticated by the built-in {{PKIAuthenticationPlugin}}.
> In some cases (such as SOLR-12121) there is a need to forward end-user 
> credentials on internode requests, but let PKI handle it for solr-originated 
> requests. This is currently not possible without a dirty hack where each 
> plugin duplicates some PKI logic and calls PKI plugin from its own 
> interceptor even if it is disabled.
> This Jira makes this use case officially supported by the framework by:
>  * Letting {{PKIAuthenticationPlugin}} be always enabled. PKI will now in its 
> interceptor on a per-request basis first give the authc plugin a chance to 
> handle the request
>  * Adding a protected method to abstract class {{AuthenticationPlugin}}
>{code:java}
> protected boolean interceptInternodeRequest(HttpRequest httpRequest, 
> HttpContext httpContext)
> {code}
> that can be overridden by plugins in order to easily intercept requests 
> without registering its own interceptor. Returning 'false' delegates to PKI.
> Existing Authc plugins do *not* need to change as a result of this, and they 
> will work exactly as before, i.e. either handle ALL or NONE internode auth.
> New plugins choosing to *override* the new {{interceptInternodeRequest}} 
> method will obtain per-request control over who will secure each request. The 
> first user of this feature will be JWT token based auth in SOLR-12121.
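
To make the proposed hook concrete, here is a rough, hypothetical sketch of a plugin 
overriding it. It is not from the patch: the class name, header, and context-attribute 
convention are invented for illustration, and the other required AuthenticationPlugin 
methods (init, doAuthenticate, ...) are omitted.

{code:java}
import org.apache.http.HttpRequest;
import org.apache.http.protocol.HttpContext;
import org.apache.solr.security.AuthenticationPlugin;

// Hypothetical fragment only -- the rest of the plugin is left out.
public class ForwardingAuthPlugin extends AuthenticationPlugin {

  @Override
  protected boolean interceptInternodeRequest(HttpRequest httpRequest, HttpContext httpContext) {
    // Assumed convention: an earlier stage stashed the end-user token in the context.
    Object token = httpContext.getAttribute("end-user-token");
    if (token != null) {
      // Forward the end-user credential on this internode request.
      httpRequest.setHeader("Authorization", "Bearer " + token);
      return true;   // this plugin secured the request
    }
    return false;    // delegate to PKIAuthenticationPlugin
  }
}
{code}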



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12799) Allow Authentication Plugins to easily intercept internode requests

2018-10-10 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645800#comment-16645800
 ] 

Noble Paul commented on SOLR-12799:
---

I'm doing it now

> Allow Authentication Plugins to easily intercept internode requests
> ---
>
> Key: SOLR-12799
> URL: https://issues.apache.org/jira/browse/SOLR-12799
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Solr security framework currently allows a plugin to declare statically by 
> implementing the {{HttpClientBuilderPlugin}} interface whether it will handle 
> internode requests. If it implements the interface, the plugin MUST handle 
> ALL internode requests, even requests originating from Solr itself. Likewise, 
> if a plugin does not implement the interface, ALL requests will be 
> authenticated by the built-in {{PKIAuthenticationPlugin}}.
> In some cases (such as SOLR-12121) there is a need to forward end-user 
> credentials on internode requests, but let PKI handle it for solr-originated 
> requests. This is currently not possible without a dirty hack where each 
> plugin duplicates some PKI logic and calls PKI plugin from its own 
> interceptor even if it is disabled.
> This Jira makes this use case officially supported by the framework by:
>  * Letting {{PKIAuthenticationPlugin}} be always enabled. PKI will now in its 
> interceptor on a per-request basis first give the authc plugin a chance to 
> handle the request
>  * Adding a protected method to abstract class {{AuthenticationPlugin}}
>{code:java}
> protected boolean interceptInternodeRequest(HttpRequest httpRequest, 
> HttpContext httpContext)
> {code}
> that can be overridden by plugins in order to easily intercept requests 
> without registering its own interceptor. Returning 'false' delegates to PKI.
> Existing Authc plugins do *not* need to change as a result of this, and they 
> will work exactly as before, i.e. either handle ALL or NONE internode auth.
> New plugins choosing to *override* the new {{interceptInternodeRequest}} 
> method will obtain per-request control over who will secure each request. The 
> first user of this feature will be JWT token based auth in SOLR-12121.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1663 - Unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1663/

[...truncated 39 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2863/consoleText

[repro] Revision: 095707d54717a745245fd2702779e02d8a46e9ce

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=testSplitAfterFailedSplit2 -Dtests.seed=62B5F353557AC082 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sq-AL 
-Dtests.timezone=America/Dominica -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.seed=62B5F353557AC082 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=sq-AL -Dtests.timezone=America/Dominica -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=raceConditionOnDeleteAndRegisterReplica 
-Dtests.seed=62B5F353557AC082 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ar-BH -Dtests.timezone=America/Buenos_Aires -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=deleteReplicaByCountForAllShards -Dtests.seed=62B5F353557AC082 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-BH 
-Dtests.timezone=America/Buenos_Aires -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=deleteLiveReplicaTest -Dtests.seed=62B5F353557AC082 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ar-BH 
-Dtests.timezone=America/Buenos_Aires -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testVersionsAreReturned -Dtests.seed=38C351BCA49A4339 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-BA 
-Dtests.timezone=America/Port_of_Spain -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
80011d669ad8883379535521acbcf9274473f8b9
[repro] git fetch
[repro] git checkout 095707d54717a745245fd2702779e02d8a46e9ce

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   ShardSplitTest
[repro]   DeleteReplicaTest
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 3424 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.ShardSplitTest|*.DeleteReplicaTest" -Dtests.showOutput=onerror 
 -Dtests.seed=62B5F353557AC082 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=sq-AL -Dtests.timezone=America/Dominica -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 6073 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 454 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=38C351BCA49A4339 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=sr-BA -Dtests.timezone=America/Port_of_Spain 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[...truncated 1040 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest
[repro]   1/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro]   1/5 failed: org.apache.solr.cloud.DeleteReplicaTest
[repro] git checkout 80011d669ad8883379535521acbcf9274473f8b9

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-8496) Explore selective dimension indexing in BKDReader/Writer

2018-10-10 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645761#comment-16645761
 ] 

Steve Rowe commented on LUCENE-8496:


FYI two other failing tests on branch_7x from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2891/] (before the 
commit was reverted):

{noformat}
ant test -Dtestcase=TestLucene60PointsFormat -Dtests.seed=B5A28E6677965A99 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=fr-CA 
-Dtests.timezone=Asia/Irkutsk -Dtests.asserts=true -Dtests.file.encoding=UTF-8
{noformat}

{noformat}
ant test -Dtestcase=TestAssertingPointsFormat -Dtests.seed=F280908F18AE1657 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=dz 
-Dtests.timezone=Etc/GMT-10 -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
{noformat}

> Explore selective dimension indexing in BKDReader/Writer
> 
>
> Key: LUCENE-8496
> URL: https://issues.apache.org/jira/browse/LUCENE-8496
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8496.patch, LUCENE-8496.patch, LUCENE-8496.patch, 
> LUCENE-8496.patch, LUCENE-8496.patch, LatLonShape_SelectiveEncoding.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This issue explores adding a new feature to BKDReader/Writer that lets users 
> select fewer dimensions for building the BKD index than the total number of 
> dimensions specified for field encoding. This is useful for encoding 
> dimensional data that is needed to interpret the encoded field data but is 
> unnecessary (or not efficient) for building the index structure. One such 
> example is {{LatLonShape}} encoding. The first 4 dimensions may be used to 
> efficiently search/index the triangle using its precomputed bounding box as a 
> 4D point, and the remaining dimensions can be used to encode the vertices of 
> the tessellated triangle. This causes BKD to act much like an R-Tree for 
> shape data, where search is distilled into a 4D point (instead of a more 
> expensive 6D point) and the triangle is encoded using a portion of the 
> remaining (non-indexed) dimensions. Fields that use all of their dimensions 
> for indexing are not impacted and behave as they normally would.
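
A library-free sketch of the encoding idea (dimension counts and values are made up; 
this is not Lucene code): the first four values would drive the BKD tree, while the 
remaining values ride along with the point as non-indexed payload.

{code:java}
// Conceptual sketch of selective dimension indexing: only the "indexed" values
// would feed the BKD tree; the "data-only" values are stored but never split on.
public class SelectiveDimsSketch {

  static final int BYTES_PER_DIM = Integer.BYTES;

  /** Packs indexed values followed by data-only values into one point. */
  static byte[] pack(int[] indexedDims, int[] dataOnlyDims) {
    byte[] packed = new byte[(indexedDims.length + dataOnlyDims.length) * BYTES_PER_DIM];
    int offset = 0;
    for (int v : indexedDims) {
      offset = writeInt(packed, offset, v);
    }
    for (int v : dataOnlyDims) {
      offset = writeInt(packed, offset, v);
    }
    return packed;
  }

  static int writeInt(byte[] dest, int offset, int v) {
    for (int shift = 24; shift >= 0; shift -= 8) {
      dest[offset++] = (byte) (v >>> shift);
    }
    return offset;
  }

  public static void main(String[] args) {
    // Hypothetical triangle: bounding box = 4 indexed dims, vertices = 6 data-only dims.
    int[] bbox = {0, 0, 10, 10};
    int[] vertices = {0, 0, 10, 0, 5, 10};
    byte[] point = pack(bbox, vertices);
    System.out.println("packed point: " + point.length + " bytes ("
        + bbox.length + " indexed dims + " + vertices.length + " data-only dims)");
  }
}
{code}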



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Does ConcurrentMergeScheduler actually do smaller merges first?

2018-10-10 Thread Shawn Heisey

On 10/10/2018 5:40 PM, Shawn Heisey wrote:
somebody who's intimately familiar with that code could decipher it a 
lot faster than I can. 


I went ahead and built a test class mirroring the sorting code I see in 
ConcurrentMergeScheduler (master branch), and it looks like the current code 
does indeed behave as advertised.  Here's the code I built (paste has a 
one month expiration time):


https://apaste.info/CFJT

The output of that class is this, exactly what I was hoping to see:

Index: 0, Size: 4725, Pause: true
Index: 1, Size: 3725, Pause: true
Index: 2, Size: 2725, Pause: true
Index: 3, Size: 1725, Pause: true
Index: 4, Size: 725, Pause: true
Index: 5, Size: 525, Pause: false
Index: 6, Size: 25, Pause: false

The code could use more comments documenting its operation, but it does 
look like it's correct, at least in the master branch.
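
In case the paste expires, here is a self-contained class that reproduces the same 
output, assuming the test used maxThreadCount=2: sort the merges largest-first and 
pause everything except the smallest maxThreadCount of them. This is an illustration 
of the observed behavior, not the actual ConcurrentMergeScheduler source.

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class MergePauseSketch {
  public static void main(String[] args) {
    List<Long> mergeSizes =
        new ArrayList<>(Arrays.asList(725L, 2725L, 25L, 4725L, 525L, 1725L, 3725L));
    int maxThreadCount = 2;   // assumed

    // Largest merges first.
    mergeSizes.sort(Collections.reverseOrder());

    for (int i = 0; i < mergeSizes.size(); i++) {
      // Only the smallest maxThreadCount merges keep running; the rest are paused.
      boolean pause = i < mergeSizes.size() - maxThreadCount;
      System.out.println("Index: " + i + ", Size: " + mergeSizes.get(i) + ", Pause: " + pause);
    }
  }
}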


Looking over the commit history for the file, nothing jumped out at me 
as being a change that might have reversed the sort order, but I can say 
that Solr 1.4.x (Lucene 2.9) is where I saw the problem, and I'm 
reasonably certain that some of the reports I handled on the mailing 
list were on version 4.x.  I cannot confirm the version on more recent 
reports without checking list history.


Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8528) Reproducing TestFSTs.testBasicFSA() failure

2018-10-10 Thread Steve Rowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated LUCENE-8528:
---
Component/s: core/FSTs

> Reproducing TestFSTs.testBasicFSA() failure
> ---
>
> Key: LUCENE-8528
> URL: https://issues.apache.org/jira/browse/LUCENE-8528
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/FSTs
>Reporter: Steve Rowe
>Priority: Major
>
> From 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/104/]:
> {noformat}
> Checking out Revision 8d205ecd1c6a133f7cb9a4352388ec30d00b4bdb 
> (refs/remotes/origin/master)
> [...]
>[junit4] Suite: org.apache.lucene.util.fst.TestFSTs
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestFSTs 
> -Dtests.method=testBasicFSA -Dtests.seed=82D30036E9484CE9 
> -Dtests.multiplier=3 -Dtests.slow=true -Dtests.badapples=true 
> -Dtests.locale=ckb-IR -Dtests.timezone=Africa/Malabo -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] FAILURE 0.18s J1 | TestFSTs.testBasicFSA <<<
>[junit4]> Throwable #1: java.lang.AssertionError: expected:<24> but 
> was:<22>
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([82D30036E9484CE9:5BAEBE18FD0445D5]:0)
>[junit4]>  at 
> org.apache.lucene.util.fst.TestFSTs.testBasicFSA(TestFSTs.java:166)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>[junit4]>  at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>[junit4]>  at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>[junit4]>  at 
> java.base/java.lang.reflect.Method.invoke(Method.java:564)
>[junit4]>  at java.base/java.lang.Thread.run(Thread.java:844)
> [...]
>[junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): 
> {date=PostingsFormat(name=Asserting), field=PostingsFormat(name=Asserting), 
> docid=PostingsFormat(name=LuceneFixedGap), 
> titleTokenized=BlockTreeOrds(blocksize=128), 
> id=PostingsFormat(name=LuceneFixedGap), body=PostingsFormat(name=Asserting), 
> title=PostingsFormat(name=LuceneVarGapFixedInterval)}, 
> docValues:{docid_intDV=DocValuesFormat(name=Lucene70), 
> titleDV=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=1415, 
> maxMBSortInHeap=5.567002115183062, 
> sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@54396619),
>  locale=ckb-IR, timezone=Africa/Malabo
>[junit4]   2> NOTE: Linux 4.15.0-36-generic amd64/Oracle Corporation 9.0.4 
> (64-bit)/cpus=8,threads=1,free=171269088,total=460849152
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8528) Reproducing TestFSTs.testBasicFSA() failure

2018-10-10 Thread Steve Rowe (JIRA)
Steve Rowe created LUCENE-8528:
--

 Summary: Reproducing TestFSTs.testBasicFSA() failure
 Key: LUCENE-8528
 URL: https://issues.apache.org/jira/browse/LUCENE-8528
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Steve Rowe


>From [https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/104/]:

{noformat}
Checking out Revision 8d205ecd1c6a133f7cb9a4352388ec30d00b4bdb 
(refs/remotes/origin/master)
[...]
   [junit4] Suite: org.apache.lucene.util.fst.TestFSTs
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestFSTs 
-Dtests.method=testBasicFSA -Dtests.seed=82D30036E9484CE9 -Dtests.multiplier=3 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ckb-IR 
-Dtests.timezone=Africa/Malabo -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.18s J1 | TestFSTs.testBasicFSA <<<
   [junit4]> Throwable #1: java.lang.AssertionError: expected:<24> but 
was:<22>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([82D30036E9484CE9:5BAEBE18FD0445D5]:0)
   [junit4]>at 
org.apache.lucene.util.fst.TestFSTs.testBasicFSA(TestFSTs.java:166)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)
   [junit4]>at java.base/java.lang.Thread.run(Thread.java:844)
[...]
   [junit4]   2> NOTE: test params are: codec=Asserting(Lucene80): 
{date=PostingsFormat(name=Asserting), field=PostingsFormat(name=Asserting), 
docid=PostingsFormat(name=LuceneFixedGap), 
titleTokenized=BlockTreeOrds(blocksize=128), 
id=PostingsFormat(name=LuceneFixedGap), body=PostingsFormat(name=Asserting), 
title=PostingsFormat(name=LuceneVarGapFixedInterval)}, 
docValues:{docid_intDV=DocValuesFormat(name=Lucene70), 
titleDV=DocValuesFormat(name=Asserting)}, maxPointsInLeafNode=1415, 
maxMBSortInHeap=5.567002115183062, 
sim=Asserting(org.apache.lucene.search.similarities.AssertingSimilarity@54396619),
 locale=ckb-IR, timezone=Africa/Malabo
   [junit4]   2> NOTE: Linux 4.15.0-36-generic amd64/Oracle Corporation 9.0.4 
(64-bit)/cpus=8,threads=1,free=171269088,total=460849152
{noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 177 - Still Unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/177/

8 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:35367_solr, 
127.0.0.1:36511_solr, 127.0.0.1:41091_solr] Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/13)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node4":{   "core":"raceDeleteReplica_true_shard1_replica_n2", 
  "base_url":"https://127.0.0.1:44860/solr;,   
"node_name":"127.0.0.1:44860_solr",   "state":"down",   
"type":"NRT",   "leader":"true"}, "core_node6":{   
"core":"raceDeleteReplica_true_shard1_replica_n5",   
"base_url":"https://127.0.0.1:44860/solr;,   
"node_name":"127.0.0.1:44860_solr",   "state":"down",   
"type":"NRT"}, "core_node3":{   
"core":"raceDeleteReplica_true_shard1_replica_n1",   
"base_url":"https://127.0.0.1:41091/solr;,   
"node_name":"127.0.0.1:41091_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:35367_solr, 127.0.0.1:36511_solr, 127.0.0.1:41091_solr]
Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/13)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node4":{
  "core":"raceDeleteReplica_true_shard1_replica_n2",
  "base_url":"https://127.0.0.1:44860/solr;,
  "node_name":"127.0.0.1:44860_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_true_shard1_replica_n5",
  "base_url":"https://127.0.0.1:44860/solr;,
  "node_name":"127.0.0.1:44860_solr",
  "state":"down",
  "type":"NRT"},
"core_node3":{
  "core":"raceDeleteReplica_true_shard1_replica_n1",
  "base_url":"https://127.0.0.1:41091/solr;,
  "node_name":"127.0.0.1:41091_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([B33F82ED7818218B:D929E33D10EA6B41]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:327)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:222)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-11) - Build # 2892 - Still Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2892/
Java: 64bit/jdk-11 -XX:-UseCompressedOops -XX:+UseG1GC

26 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([CA051163B06E1543]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.LargeVolumeBinaryJettyTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.embedded.LargeVolumeBinaryJettyTest: 1) 
Thread[id=1590, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-LargeVolumeBinaryJettyTest] at 
java.base@11/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11/java.lang.Thread.run(Thread.java:834)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.embedded.LargeVolumeBinaryJettyTest: 
   1) Thread[id=1590, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-LargeVolumeBinaryJettyTest]
at java.base@11/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11/java.lang.Thread.run(Thread.java:834)
at __randomizedtesting.SeedInfo.seed([ECB38001DF33E22]:0)


FAILED:  
org.apache.solr.client.solrj.embedded.LargeVolumeBinaryJettyTest.testMultiThreaded

Error Message:
Captured an uncaught exception in thread: Thread[id=1589, name=DocThread-0, 
state=RUNNABLE, group=TGRP-LargeVolumeBinaryJettyTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=1589, name=DocThread-0, state=RUNNABLE, 
group=TGRP-LargeVolumeBinaryJettyTest]
Caused by: java.lang.AssertionError: DocThread-0---IOException occured when 
talking to server at: https://127.0.0.1:35067/solr/collection1
at 

[jira] [Commented] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas

2018-10-10 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645725#comment-16645725
 ] 

Steve Rowe commented on SOLR-12739:
---

\@BadApple'd {{AutoAddReplicasPlanActionTest.testSimple()}} is also failing 
without a seed, and the first (consistently) failing master commit is 
{{dbed8ba}} on this issue. E.g. from 
[https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/104/]:

{noformat}
Checking out Revision 8d205ecd1c6a133f7cb9a4352388ec30d00b4bdb 
(refs/remotes/origin/master)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=AutoAddReplicasPlanActionTest -Dtests.method=testSimple 
-Dtests.seed=AEB09D2F3B1B1BA6 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=yo-BJ -Dtests.timezone=Etc/GMT-7 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] FAILURE 15.5s J1 | AutoAddReplicasPlanActionTest.testSimple <<<
   [junit4]> Throwable #1: org.junit.ComparisonFailure: Target node is not 
as expectation expected:<127.0.0.1:[37029]_solr> but 
was:<127.0.0.1:[40937]_solr>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([AEB09D2F3B1B1BA6:9603B9D11CE8CF77]:0)
   [junit4]>at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasPlanActionTest.assertOperations(AutoAddReplicasPlanActionTest.java:191)
   [junit4]>at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasPlanActionTest.testSimple(AutoAddReplicasPlanActionTest.java:123)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)
   [junit4]>at java.base/java.lang.Thread.run(Thread.java:844)
{noformat}

> Make autoscaling policy based replica placement the default strategy for 
> placing replicas
> -
>
> Key: SOLR-12739
> URL: https://issues.apache.org/jira/browse/SOLR-12739
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, 
> SOLR-12739.patch, SOLR-12739.patch
>
>
> Today the default placement strategy is the same one used since Solr 4.x, 
> which is to select nodes in a round-robin fashion. I propose making 
> autoscaling policy based replica placement the default strategy for placing 
> replicas.
> This is related to SOLR-12648, where even though we have default cluster 
> preferences, we don't use them unless a policy is also configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Does ConcurrentMergeScheduler actually do smaller merges first?

2018-10-10 Thread Shawn Heisey

On 10/10/2018 11:52 AM, Michael Sokolov wrote:
If maxMergeCount was 2, you could get into a situation with three 
large merges I think; the largest would be paused, but the others 
could still take > 10 mins to complete. Are you sure that your 
observation is at odds with what the document says the scheduler is doing?


I haven't done extremely comprehensive checking, and it has been a 
number of years now.  When I was looking, what appeared to be happening 
was that three merges were scheduled.  The smallest one I would expect to complete 
in seconds, or certainly within a few minutes. The largest one was 
probably at the merge policy's 5GB max segment size, and a merge of that 
size would definitely take longer than ten minutes.  I no longer have 
access to those indexes, so I can't investigate directly.


There are still new reports on solr-user of database connections failing 
while importing millions of rows, even recently.  I have NOT heard about 
anyone applying my fix (set maxMergeCount to 6) and still seeing 
failures, but I suppose that might have happened.


It is the recent reports of the problem that have prompted me to 
investigate deeper and start this thread.  I believe that the merge 
scheduler SHOULD handle smaller merges first, just like the javadocs 
indicate, but I have seen evidence (at least in the past) that it's not 
actually doing so.  My look at the code today seems to indicate that it 
is sorting large merges first, but somebody who's intimately familiar 
with that code could decipher it a lot faster than I can.


Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23007 - Still unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23007/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([771751E1CA9696F8]:0)


FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueue

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([771751E1CA9696F8]:0)




Build Log:
[...truncated 15454 lines...]
   [junit4] Suite: 
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue
   [junit4]   2> 2121370 INFO  
(SUITE-TestSimGenericDistributedQueue-seed#[771751E1CA9696F8]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue_771751E1CA9696F8-001/init-core-data-001
   [junit4]   2> 2121370 INFO  
(SUITE-TestSimGenericDistributedQueue-seed#[771751E1CA9696F8]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 2121371 INFO  
(SUITE-TestSimGenericDistributedQueue-seed#[771751E1CA9696F8]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (true) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 2121373 INFO  
(TEST-TestSimGenericDistributedQueue.testLocallyOffer-seed#[771751E1CA9696F8]) 
[] o.a.s.SolrTestCaseJ4 ###Starting testLocallyOffer
   [junit4]   2> 2121489 INFO  
(TEST-TestSimGenericDistributedQueue.testLocallyOffer-seed#[771751E1CA9696F8]) 
[] o.a.s.SolrTestCaseJ4 ###Ending testLocallyOffer
   [junit4]   2> 2121491 INFO  
(TEST-TestSimGenericDistributedQueue.testDistributedQueue-seed#[771751E1CA9696F8])
 [] o.a.s.SolrTestCaseJ4 ###Starting testDistributedQueue
   [junit4]   2> oct 11, 2018 1:32:02 AM 
com.carrotsearch.randomizedtesting.ThreadLeakControl$2 evaluate
   [junit4]   2> ADVERTENCIA: Suite execution timed out: 
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue
   [junit4]   2>  jstack at approximately timeout time 
   [junit4]   2> 
"TEST-TestSimGenericDistributedQueue.testDistributedQueue-seed#[771751E1CA9696F8]"
 ID=22527 TIMED_WAITING on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@2c2b8691
   [junit4]   2>at sun.misc.Unsafe.park(Native Method)
   [junit4]   2>- timed waiting on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@2c2b8691
   [junit4]   2>at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
   [junit4]   2>at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
   [junit4]   2>at 
org.apache.solr.cloud.autoscaling.sim.GenericDistributedQueue.peek(GenericDistributedQueue.java:194)
   [junit4]   2>at 
org.apache.solr.cloud.autoscaling.sim.GenericDistributedQueue.peek(GenericDistributedQueue.java:167)
   [junit4]   2>at 
org.apache.solr.cloud.autoscaling.sim.TestSimDistributedQueue.testDistributedQueue(TestSimDistributedQueue.java:74)
   [junit4]   2>at 
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueue(TestSimGenericDistributedQueue.java:36)
   [junit4]   2>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method)
   [junit4]   2>at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]   2>at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]   2>at java.lang.reflect.Method.invoke(Method.java:498)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
   [junit4]   2>at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
   [junit4]   2>at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
   [junit4]   2>

[jira] [Commented] (SOLR-12849) collection parameter referencing an alias being handled differently when sent as GET than when sent as POST

2018-10-10 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645531#comment-16645531
 ] 

Shawn Heisey commented on SOLR-12849:
-

Some additional info, not sure whether these are problems or expected.  I added 
documents with id values of "1" and "2" to the bar collection, and documents 
with id values of "3" and "4" to the bar2 collection.  The foo and foo2 
collections remain empty.

This command returns numFound=0 (though maybe it should be 2):
{noformat}
curl -XPOST -s localhost:8983/solr/f_m/select -d 'q=*:*&collection=b_s'
{noformat}

This command returns numFound=4:
{noformat}
curl -XPOST -s localhost:8983/solr/b_m/select -d 'q=*:*&collection=b_s'
{noformat}

Which tells me that even when the collection parameter doesn't cause an error, 
it is being ignored.


> collection parameter referencing an alias being handled differently when sent 
> as GET than when sent as POST
> ---
>
> Key: SOLR-12849
> URL: https://issues.apache.org/jira/browse/SOLR-12849
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Shawn Heisey
>Priority: Major
>
> This is a weird one.
> Fired up a cloud example, server built from branch_7x code (7.6.0-SNAPSHOT)
> Created four collections with the _default configset -- foo, foo2, bar, bar2.
> Created four aliases, here's the contents of aliases.json:
> {noformat}
> {"collection":{
> "b_s":"bar",
> "b_m":"bar2,bar",
> "f_s":"foo",
> "f_m":"foo2,foo"}}
> {noformat}
> This curl command will fail, with "Could not find collection : b_s" as the 
> message:
> {noformat}
> curl -XPOST -s localhost:8983/solr/b_s/select -d 'q=*:*&rows=0&collection=b_s'
> {noformat}
> That seems like a bug, but it's not the whole story.  Here's where things get 
> weird.  The following two commands will NOT fail:
> {noformat}
> curl -XPOST -s localhost:8983/solr/b_m/select -d 'q=*:*&rows=0&collection=b_m'
> curl -XGET -s "localhost:8983/solr/b_s/select?q=*:*&rows=0&collection=b_s"
> {noformat}
> The first one is very similar to the one that fails, except it uses an alias 
> pointing at multiple collections instead of an alias pointing at one 
> collection.  The second is effectively identical to the command that fails, 
> except it's a GET rather than a POST -- the parameters are part of the URL 
> rather than being specified in the request body.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12849) collection parameter referencing an alias being handled differently when sent as GET than when sent as POST

2018-10-10 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645524#comment-16645524
 ] 

Shawn Heisey commented on SOLR-12849:
-

If the collection parameter is removed entirely from the command that fails, it 
works.

> collection parameter referencing an alias being handled differently when sent 
> as GET than when sent as POST
> ---
>
> Key: SOLR-12849
> URL: https://issues.apache.org/jira/browse/SOLR-12849
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Shawn Heisey
>Priority: Major
>
> This is a weird one.
> Fired up a cloud example, server built from branch_7x code (7.6.0-SNAPSHOT)
> Created four collections with the _default configset -- foo, foo2, bar, bar2.
> Created four aliases, here's the contents of aliases.json:
> {noformat}
> {"collection":{
> "b_s":"bar",
> "b_m":"bar2,bar",
> "f_s":"foo",
> "f_m":"foo2,foo"}}
> {noformat}
> This curl command will fail, with "Could not find collection : b_s" as the 
> message:
> {noformat}
> curl -XPOST -s localhost:8983/solr/b_s/select -d 'q=*:*&rows=0&collection=b_s'
> {noformat}
> That seems like a bug, but it's not the whole story.  Here's where things get 
> weird.  The following two commands will NOT fail:
> {noformat}
> curl -XPOST -s localhost:8983/solr/b_m/select -d 'q=*:*&rows=0&collection=b_m'
> curl -XGET -s "localhost:8983/solr/b_s/select?q=*:*&rows=0&collection=b_s"
> {noformat}
> The first one is very similar to the one that fails, except it uses an alias 
> pointing at multiple collections instead of an alias pointing at one 
> collection.  The second is effectively identical to the command that fails, 
> except it's a GET rather than a POST -- the parameters are part of the URL 
> rather than being specified in the request body.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12849) collection parameter referencing an alias being handled differently when sent as GET than when sent as POST

2018-10-10 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12849?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12849:

Summary: collection parameter referencing an alias being handled 
differently when sent as GET than when sent as POST  (was: collection parameter 
being handled differently when sent as GET than when sent as POST)

> collection parameter referencing an alias being handled differently when sent 
> as GET than when sent as POST
> ---
>
> Key: SOLR-12849
> URL: https://issues.apache.org/jira/browse/SOLR-12849
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.5
>Reporter: Shawn Heisey
>Priority: Major
>
> This is a weird one.
> Fired up a cloud example, server built from branch_7x code (7.6.0-SNAPSHOT)
> Created four collections with the _default configset -- foo, foo2, bar, bar2.
> Created four aliases, here's the contents of aliases.json:
> {noformat}
> {"collection":{
> "b_s":"bar",
> "b_m":"bar2,bar",
> "f_s":"foo",
> "f_m":"foo2,foo"}}
> {noformat}
> This curl command will fail, with "Could not find collection : b_s" as the 
> message:
> {noformat}
> curl -XPOST -s localhost:8983/solr/b_s/select -d 'q=*:*&rows=0&collection=b_s'
> {noformat}
> That seems like a bug, but it's not the whole story.  Here's where things get 
> weird.  The following two commands will NOT fail:
> {noformat}
> curl -XPOST -s localhost:8983/solr/b_m/select -d 'q=*:*&rows=0&collection=b_m'
> curl -XGET -s "localhost:8983/solr/b_s/select?q=*:*&rows=0&collection=b_s"
> {noformat}
> The first one is very similar to the one that fails, except it uses an alias 
> pointing at multiple collections instead of an alias pointing at one 
> collection.  The second is effectively identical to the command that fails, 
> except it's a GET rather than a POST -- the parameters are part of the URL 
> rather than being specified in the request body.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12620) Remove the Cloud -> Graph (Radial) view

2018-10-10 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12620.

Resolution: Fixed

Done

> Remove the Cloud -> Graph (Radial) view
> ---
>
> Key: SOLR-12620
> URL: https://issues.apache.org/jira/browse/SOLR-12620
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-12620.patch, SOLR-12620.patch
>
>
> Spinoff from SOLR-8207
> The radial view does not scale well and should be removed in 8.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-12833) Use timed-out lock in DistributedUpdateProcessor

2018-10-10 Thread Mark Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller reassigned SOLR-12833:
--

Assignee: Mark Miller

> Use timed-out lock in DistributedUpdateProcessor
> 
>
> Key: SOLR-12833
> URL: https://issues.apache.org/jira/browse/SOLR-12833
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update, UpdateRequestProcessors
>Affects Versions: 7.5, master (8.0)
>Reporter: jefferyyuan
>Assignee: Mark Miller
>Priority: Minor
> Fix For: master (8.0)
>
>
> There is a synchronized block that blocks other update requests whose IDs fall 
> in the same hash bucket. An update waits forever until it gets the lock at 
> the synchronized block, which can be a problem in some cases.
>  
> Some add/update requests (for example updates with spatial/shape analysis) 
> may take a long time (30+ seconds or even more), which causes the request to 
> time out and fail.
> The client may then retry the same request multiple times over several 
> minutes, which makes things worse.
> The server side receives all the update requests, but all except one can do 
> nothing and have to wait. This wastes precious memory and CPU resources.
> We have seen cases where 2000+ threads are blocked at the synchronized lock 
> and only a few updates are making progress. Each thread takes 3+ MB of memory, 
> which causes OOM.
> Also, if the update can't get the lock within the expected time range, it's 
> better to fail fast.
>  
> We can add one configuration in solrconfig.xml: 
> updateHandler/versionLock/timeInMill, so users can specify how long they want 
> to wait for the version bucket lock.
> The default value can be -1, so it behaves the same as today - wait forever 
> until it gets the lock.
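
To illustrate the proposal (a minimal sketch only, not code from any attached patch; 
the class and method names here are hypothetical), a timed version-bucket lock could 
be built on {{ReentrantLock.tryLock}}, with a negative timeout preserving today's 
wait-forever behavior:
{code:java}
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch of the proposed behavior: a negative timeout keeps the
// current "wait forever" semantics, a positive timeout fails fast instead.
class VersionBucketLockSketch {
  private final ReentrantLock lock = new ReentrantLock();
  private final long timeoutMillis; // e.g. read from updateHandler/versionLock/timeInMill

  VersionBucketLockSketch(long timeoutMillis) {
    this.timeoutMillis = timeoutMillis;
  }

  void runLocked(Runnable update) throws InterruptedException {
    if (timeoutMillis < 0) {
      lock.lock(); // current behavior: block until the bucket lock is free
    } else if (!lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS)) {
      throw new IllegalStateException(
          "Could not acquire version bucket lock within " + timeoutMillis + " ms");
    }
    try {
      update.run();
    } finally {
      lock.unlock();
    }
  }
}
{code}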



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-7.x - Build # 938 - Still unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/938/

9 tests failed.
FAILED:  
org.apache.lucene.codecs.lucene60.TestLucene60PointsFormat.testRandomBinaryMedium

Error Message:
max packed value has incorrect length=15 vs expected=12 for docID=-1 
field="field"

Stack Trace:
java.lang.RuntimeException: max packed value has incorrect length=15 vs 
expected=12 for docID=-1 field="field"
at 
__randomizedtesting.SeedInfo.seed([F7C415E57CE326E2:80E992A5A84F5335]:0)
at 
org.apache.lucene.index.CheckIndex$VerifyPointsVisitor.checkPackedValue(CheckIndex.java:2001)
at 
org.apache.lucene.index.CheckIndex$VerifyPointsVisitor.compare(CheckIndex.java:1958)
at org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:746)
at org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:787)
at org.apache.lucene.util.bkd.BKDReader.intersect(BKDReader.java:533)
at org.apache.lucene.index.CheckIndex.testPoints(CheckIndex.java:1810)
at org.apache.lucene.index.CheckIndex.checkIndex(CheckIndex.java:735)
at org.apache.lucene.util.TestUtil.checkIndex(TestUtil.java:300)
at 
org.apache.lucene.store.MockDirectoryWrapper.close(MockDirectoryWrapper.java:862)
at 
org.apache.lucene.index.BasePointsFormatTestCase.switchIndex(BasePointsFormatTestCase.java:1002)
at 
org.apache.lucene.index.BasePointsFormatTestCase.verify(BasePointsFormatTestCase.java:726)
at 
org.apache.lucene.index.BasePointsFormatTestCase.verify(BasePointsFormatTestCase.java:551)
at 
org.apache.lucene.index.BasePointsFormatTestCase.doTestRandomBinary(BasePointsFormatTestCase.java:539)
at 
org.apache.lucene.index.BasePointsFormatTestCase.testRandomBinaryMedium(BasePointsFormatTestCase.java:512)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Updated] (SOLR-12620) Remove the Cloud -> Graph (Radial) view

2018-10-10 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12620:
---
Attachment: SOLR-12620.patch

> Remove the Cloud -> Graph (Radial) view
> ---
>
> Key: SOLR-12620
> URL: https://issues.apache.org/jira/browse/SOLR-12620
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-12620.patch, SOLR-12620.patch
>
>
> Spinoff from SOLR-8207
> The radial view does not scale well and should be removed in 8.0



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12799) Allow Authentication Plugins to easily intercept internode requests

2018-10-10 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645448#comment-16645448
 ] 

Jan Høydahl commented on SOLR-12799:


[~noble.paul] did you have a chance to QA this PR yet?

> Allow Authentication Plugins to easily intercept internode requests
> ---
>
> Key: SOLR-12799
> URL: https://issues.apache.org/jira/browse/SOLR-12799
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The Solr security framework currently lets a plugin declare statically, by 
> implementing the {{HttpClientBuilderPlugin}} interface, whether it will handle 
> internode requests. If it implements the interface, the plugin MUST handle 
> ALL internode requests, even requests originating from Solr itself. Likewise, 
> if a plugin does not implement the interface, ALL requests will be 
> authenticated by the built-in {{PKIAuthenticationPlugin}}.
> In some cases (such as SOLR-12121) there is a need to forward end-user 
> credentials on internode requests, but let PKI handle it for Solr-originated 
> requests. This is currently not possible without a dirty hack where each 
> plugin duplicates some PKI logic and calls the PKI plugin from its own 
> interceptor even if it is disabled.
> This Jira makes this use case officially supported by the framework by:
>  * Letting {{PKIAuthenticationPlugin}} always be enabled. PKI will now, in its 
> interceptor, first give the authc plugin a chance to handle each request on a 
> per-request basis
>  * Adding a protected method to abstract class {{AuthenticationPlugin}}
>{code:java}
> protected boolean interceptInternodeRequest(HttpRequest httpRequest, 
> HttpContext httpContext)
> {code}
> that can be overridden by plugins in order to easily intercept requests 
> without registering their own interceptors. Returning 'false' delegates to PKI.
> Existing authc plugins do *not* need to change as a result of this, and they 
> will work exactly as before, i.e. they handle either ALL or NONE of the 
> internode auth.
> New plugins choosing to *override* the new {{interceptInternodeRequest}} 
> method will obtain per-request control over who will secure each request. The 
> first user of this feature will be JWT token based auth in SOLR-12121.
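
As a rough sketch of how a plugin might use the new hook (hypothetical plugin and 
attribute name, not taken from the PR), returning false wherever it wants PKI to 
keep securing the request:
{code:java}
import java.io.IOException;
import java.util.Map;
import javax.servlet.FilterChain;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import org.apache.http.HttpRequest;
import org.apache.http.protocol.HttpContext;
import org.apache.solr.security.AuthenticationPlugin;

// Hypothetical authc plugin: forwards an end-user token on internode requests
// when one is present, otherwise delegates to the built-in PKI plugin.
public class TokenForwardingAuthPluginSketch extends AuthenticationPlugin {

  @Override
  public void init(Map<String, Object> pluginConfig) {}

  @Override
  public boolean doAuthenticate(ServletRequest request, ServletResponse response,
                                FilterChain filterChain) throws Exception {
    filterChain.doFilter(request, response); // pass-through for this sketch
    return true;
  }

  public void close() throws IOException {}

  @Override
  protected boolean interceptInternodeRequest(HttpRequest httpRequest, HttpContext httpContext) {
    Object token = httpContext.getAttribute("user-token"); // hypothetical attribute name
    if (token != null) {
      httpRequest.addHeader("Authorization", "Bearer " + token);
      return true;  // this plugin secured the request itself
    }
    return false;   // delegate to PKIAuthenticationPlugin
  }
}
{code}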



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2891 - Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2891/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseG1GC

31 tests failed.
FAILED:  
org.apache.lucene.codecs.asserting.TestAssertingPointsFormat.testRandomBinaryMedium

Error Message:
this writer hit an unrecoverable error; cannot complete forceMerge

Stack Trace:
java.lang.IllegalStateException: this writer hit an unrecoverable error; cannot 
complete forceMerge
at 
__randomizedtesting.SeedInfo.seed([F280908F18AE1657:85AD17CFCC026380]:0)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:2010)
at org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1954)
at 
org.apache.lucene.index.RandomIndexWriter.forceMerge(RandomIndexWriter.java:456)
at 
org.apache.lucene.index.BasePointsFormatTestCase.verify(BasePointsFormatTestCase.java:740)
at 
org.apache.lucene.index.BasePointsFormatTestCase.verify(BasePointsFormatTestCase.java:551)
at 
org.apache.lucene.index.BasePointsFormatTestCase.doTestRandomBinary(BasePointsFormatTestCase.java:539)
at 
org.apache.lucene.index.BasePointsFormatTestCase.testRandomBinaryMedium(BasePointsFormatTestCase.java:512)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.AssertionError: bytesPerDim=9 splitDim=0 numIndexDims=3 
numDataDims=8
  

Re: Does ConcurrentMergeScheduler actually do smaller merges first?

2018-10-10 Thread Michael Sokolov
If maxMergeCount were 2, I think you could get into a situation with three large
merges; the largest would be paused, but the others could still
take > 10 mins to complete. Are you sure that your observation is at odds
with what the documentation says the scheduler is doing?

On Wed, Oct 10, 2018 at 2:28 AM Shawn Heisey  wrote:

> Before I open an issue, I would like to double-check my sanity, see if
> an issue is needed.
>
> I have noticed that the javadoc for ConcurrentMergeScheduler says that
> it schedules smaller merges before larger merges.  In the past, I have
> seen evidence suggesting this is not actually the case, that it prefers
> larger merges first.
>
>  background 
>
> When importing millions of rows from a database using Solr's dataimport
> handler, the index will be merged quite frequently while that indexing
> occurs.  Eventually, it reaches a point where there are multiple merges
> scheduled simultaneously, so the ongoing indexing thread will be
> paused until the number of merges drops below maxMergeCount.
>
> If the smallest merge was being done first, then I don't think the
> observed behavior would be what happens.  What I would see happen in the
> past is that when a large merge gets scheduled, indexing is paused long
> enough for the database connection to time out and be disconnected, so
> when the import tries to resume indexing, it can't -- the source
> database connection is gone.  For MySQL databases, this timeout takes
> about ten minutes to happen. If the smallest merge had completed first,
> the count would have decreased long before the database connection could
> time out, and indexing would have resumed with no problems.
>
>  end background 
>
> The way that I have fixed this problem in the past is to increase
> maxMergeCount to 6.  When that's done, the incoming thread never gets
> paused, and the database connection doesn't time out.
>
> I can see that the default for maxMergeCount was changed from 2 to 6 in
> 2014 by LUCENE-6119.  So 5.0 and later probably won't have the
> problems I encountered as long as the scheduler is left at defaults ...
> but I suspect that the running order of merges goes larger to smaller,
> contrary to javadoc.  The code is pretty dense and I haven't completely
> deciphered it yet.
>
> Thanks,
> Shawn
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
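
For reference, a minimal sketch (assumed values, not taken from the setup described 
above) of raising maxMergeCount programmatically on a ConcurrentMergeScheduler; in 
Solr the equivalent knob lives in the mergeScheduler section of solrconfig.xml:
{code:java}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriterConfig;

public class MergeSchedulerSketch {
  public static IndexWriterConfig newConfig() {
    ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
    // Allow up to 6 queued merges before incoming indexing is stalled,
    // with a single merge thread (a reasonable choice for spinning disks).
    cms.setMaxMergesAndThreads(6, 1);
    return new IndexWriterConfig(new StandardAnalyzer()).setMergeScheduler(cms);
  }
}
{code}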


[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-9) - Build # 4870 - Still Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4870/
Java: 64bit/jdk-9 -XX:-UseCompressedOops -XX:+UseG1GC

9 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:55591_solr, 
127.0.0.1:55592_solr, 127.0.0.1:55593_solr] Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/14)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node4":{   "core":"raceDeleteReplica_true_shard1_replica_n2", 
  "base_url":"http://127.0.0.1:55594/solr;,   
"node_name":"127.0.0.1:55594_solr",   "state":"down",   
"type":"NRT",   "leader":"true"}, "core_node6":{   
"core":"raceDeleteReplica_true_shard1_replica_n5",   
"base_url":"http://127.0.0.1:55594/solr;,   
"node_name":"127.0.0.1:55594_solr",   "state":"down",   
"type":"NRT"}, "core_node3":{   
"core":"raceDeleteReplica_true_shard1_replica_n1",   
"base_url":"http://127.0.0.1:55592/solr;,   
"node_name":"127.0.0.1:55592_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:55591_solr, 127.0.0.1:55592_solr, 127.0.0.1:55593_solr]
Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/14)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node4":{
  "core":"raceDeleteReplica_true_shard1_replica_n2",
  "base_url":"http://127.0.0.1:55594/solr;,
  "node_name":"127.0.0.1:55594_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_true_shard1_replica_n5",
  "base_url":"http://127.0.0.1:55594/solr;,
  "node_name":"127.0.0.1:55594_solr",
  "state":"down",
  "type":"NRT"},
"core_node3":{
  "core":"raceDeleteReplica_true_shard1_replica_n1",
  "base_url":"http://127.0.0.1:55592/solr;,
  "node_name":"127.0.0.1:55592_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([B8038518269CFD83:D215E4C84E6EB749]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:327)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:222)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Commented] (SOLR-12791) Add Metrics reporting for AuthenticationPlugin

2018-10-10 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645288#comment-16645288
 ] 

Jan Høydahl commented on SOLR-12791:


I have incorporated some changes based on the first review. [~ichattopadhyaya] do 
you want to add your comments as well?

> Add Metrics reporting for AuthenticationPlugin
> --
>
> Key: SOLR-12791
> URL: https://issues.apache.org/jira/browse/SOLR-12791
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> Propose to add metrics support for all auth plugins. The abstract 
> {{AuthenticationPlugin}} base class will implement {{SolrMetricProducer}} and 
> keep counters such as:
>  * requests
>  * req authenticated
>  * req pass-through (no credentials and blockUnknown false)
>  * req with auth failures due to wrong or malformed credentials
>  * req auth failures due to missing credentials
>  * errors
>  * timeouts
>  * timing stats
> Each implementation still needs to increment the counters etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8496) Explore selective dimension indexing in BKDReader/Writer

2018-10-10 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645266#comment-16645266
 ] 

Nicholas Knize commented on LUCENE-8496:


I went ahead and reverted this feature from branch_7x until the backport can be 
cleaned up. Sorry for the noise.

> Explore selective dimension indexing in BKDReader/Writer
> 
>
> Key: LUCENE-8496
> URL: https://issues.apache.org/jira/browse/LUCENE-8496
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8496.patch, LUCENE-8496.patch, LUCENE-8496.patch, 
> LUCENE-8496.patch, LUCENE-8496.patch, LatLonShape_SelectiveEncoding.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This issue explores adding a new feature to BKDReader/Writer that enables 
> users to select fewer dimensions for creating the BKD index than the total 
> number of dimensions specified for field encoding. This 
> is useful for encoding dimensional data that is used for interpreting the 
> encoded field data but unnecessary (or not efficient) for creating the index 
> structure. One such example is {{LatLonShape}} encoding. The first 4 
> dimensions may be used to efficiently search/index the triangle using its 
> precomputed bounding box as a 4D point, and the remaining dimensions can be 
> used to encode the vertices of the tessellated triangle. This causes BKD to 
> act much like an R-Tree for shape data where search is distilled into a 4D 
> point (instead of a more expensive 6D point) and the triangle is encoded 
> using a portion of the remaining (non-indexed) dimensions. Fields that use 
> the full data range for indexing are not impacted and behave as they normally 
> would.
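
A purely illustrative sketch of what selective indexing could look like from the 
field-type side, assuming the three-argument {{FieldType.setDimensions}} variant 
this work adds on master (the signature and argument order are assumptions, and the 
dimension counts below are made up):
{code:java}
import org.apache.lucene.document.FieldType;

public class SelectiveDimsSketch {
  // Assumption: setDimensions(totalDims, indexDims, bytesPerDim) lets a field
  // carry 6 data dimensions while only the first 4 (a bounding box) are used
  // to build the BKD index; the rest are stored per point but not indexed.
  public static FieldType boundingBoxPlusVerticesType() {
    FieldType ft = new FieldType();
    ft.setDimensions(6, 4, Integer.BYTES);
    ft.freeze();
    return ft;
  }
}
{code}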



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1660 - Still Unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1660/

[...truncated 39 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/937/consoleText

[repro] Revision: 859559a383bf4834e7726ae48f05b04ad53ae242

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testRouting -Dtests.seed=49835F707737C5C -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=fr-CA -Dtests.timezone=America/Puerto_Rico 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
4cfa876d9d269ed11be8e6668d2de31690150915
[repro] git fetch
[repro] git checkout 859559a383bf4834e7726ae48f05b04ad53ae242

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2573 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=49835F707737C5C -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=fr-CA -Dtests.timezone=America/Puerto_Rico -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 910 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout 4cfa876d9d269ed11be8e6668d2de31690150915

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-repro - Build # 1659 - Unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1659/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1663/consoleText

[repro] Revision: 3629e760113d8faa4b544bffafa3e8b33d2eb404

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=HdfsCollectionsAPIDistributedZkTest 
-Dtests.method=moveReplicaTest -Dtests.seed=640F4B7793C9D7D3 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ko-KR -Dtests.timezone=MIT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsCollectionsAPIDistributedZkTest 
-Dtests.method=testCoresAreDistributedAcrossNodes -Dtests.seed=640F4B7793C9D7D3 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ko-KR -Dtests.timezone=MIT -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=raceConditionOnDeleteAndRegisterReplica 
-Dtests.seed=640F4B7793C9D7D3 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=fr-BE -Dtests.timezone=Mexico/BajaNorte -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=DeleteReplicaTest 
-Dtests.method=deleteReplicaByCountForAllShards -Dtests.seed=640F4B7793C9D7D3 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=fr-BE -Dtests.timezone=Mexico/BajaNorte -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=LIROnShardRestartTest 
-Dtests.method=testSeveralReplicasInLIR -Dtests.seed=640F4B7793C9D7D3 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=mt-MT -Dtests.timezone=Asia/Kathmandu -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=LIROnShardRestartTest 
-Dtests.method=testAllReplicasInLIR -Dtests.seed=640F4B7793C9D7D3 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=mt-MT -Dtests.timezone=Asia/Kathmandu -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=AutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=640F4B7793C9D7D3 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-OM -Dtests.timezone=America/Montserrat -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsAutoAddReplicasIntegrationTest 
-Dtests.method=testSimple -Dtests.seed=640F4B7793C9D7D3 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=de-GR -Dtests.timezone=Zulu -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testNonRetryableRequests -Dtests.seed=B3150F4488008B9A 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-PR -Dtests.timezone=Asia/Oral -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
4cfa876d9d269ed11be8e6668d2de31690150915
[repro] git fetch
[repro] git checkout 3629e760113d8faa4b544bffafa3e8b33d2eb404

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro]solr/core
[repro]   HdfsCollectionsAPIDistributedZkTest
[repro]   AutoAddReplicasIntegrationTest
[repro]   HdfsAutoAddReplicasIntegrationTest
[repro]   LIROnShardRestartTest
[repro]   DeleteReplicaTest
[repro] ant compile-test

[...truncated 2560 

[jira] [Commented] (LUCENE-8496) Explore selective dimension indexing in BKDReader/Writer

2018-10-10 Thread Nicholas Knize (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645246#comment-16645246
 ] 

Nicholas Knize commented on LUCENE-8496:


Failure on branch_7x: 
{{ant test  -Dtestcase=TestBKD -Dtests.seed=3A807E1398CE4499 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=sr-Latn-BA -Dtests.timezone=Africa/Malabo 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII}}

Muting the test until a fix is pushed.

> Explore selective dimension indexing in BKDReader/Writer
> 
>
> Key: LUCENE-8496
> URL: https://issues.apache.org/jira/browse/LUCENE-8496
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Nicholas Knize
>Priority: Major
> Attachments: LUCENE-8496.patch, LUCENE-8496.patch, LUCENE-8496.patch, 
> LUCENE-8496.patch, LUCENE-8496.patch, LatLonShape_SelectiveEncoding.patch
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> This issue explores adding a new feature to BKDReader/Writer that enables 
> users to select fewer dimensions for creating the BKD index than the total 
> number of dimensions specified for field encoding. This 
> is useful for encoding dimensional data that is used for interpreting the 
> encoded field data but unnecessary (or not efficient) for creating the index 
> structure. One such example is {{LatLonShape}} encoding. The first 4 
> dimensions may be used to efficiently search/index the triangle using its 
> precomputed bounding box as a 4D point, and the remaining dimensions can be 
> used to encode the vertices of the tessellated triangle. This causes BKD to 
> act much like an R-Tree for shape data where search is distilled into a 4D 
> point (instead of a more expensive 6D point) and the triangle is encoded 
> using a portion of the remaining (non-indexed) dimensions. Fields that use 
> the full data range for indexing are not impacted and behave as they normally 
> would.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 23006 - Still Failing!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23006/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

2 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:32799_solr, 
127.0.0.1:33319_solr, 127.0.0.1:34273_solr] Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/12)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_true_shard1_replica_n1", 
  "base_url":"http://127.0.0.1:39561/solr;,   
"node_name":"127.0.0.1:39561_solr",   "state":"down",   
"type":"NRT",   "leader":"true"}, "core_node6":{   
"core":"raceDeleteReplica_true_shard1_replica_n5",   
"base_url":"http://127.0.0.1:39561/solr;,   
"node_name":"127.0.0.1:39561_solr",   "state":"down",   
"type":"NRT"}, "core_node4":{   
"core":"raceDeleteReplica_true_shard1_replica_n2",   
"base_url":"http://127.0.0.1:32799/solr;,   
"node_name":"127.0.0.1:32799_solr",   "state":"down",   
"type":"NRT",   "router":{"name":"compositeId"},   "maxShardsPerNode":"1",  
 "autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:32799_solr, 127.0.0.1:33319_solr, 127.0.0.1:34273_solr]
Last available state: 
DocCollection(raceDeleteReplica_true//collections/raceDeleteReplica_true/state.json/12)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_true_shard1_replica_n1",
  "base_url":"http://127.0.0.1:39561/solr;,
  "node_name":"127.0.0.1:39561_solr",
  "state":"down",
  "type":"NRT",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_true_shard1_replica_n5",
  "base_url":"http://127.0.0.1:39561/solr;,
  "node_name":"127.0.0.1:39561_solr",
  "state":"down",
  "type":"NRT"},
"core_node4":{
  "core":"raceDeleteReplica_true_shard1_replica_n2",
  "base_url":"http://127.0.0.1:32799/solr;,
  "node_name":"127.0.0.1:32799_solr",
  "state":"down",
  "type":"NRT",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([FB0B3527AFA5093E:911D54F7C75743F4]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:327)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:222)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2863 - Still Unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2863/

7 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:35061_solr, 
127.0.0.1:36319_solr, 127.0.0.1:37692_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
   "base_url":"https://127.0.0.1:44688/solr;,   
"node_name":"127.0.0.1:44688_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"https://127.0.0.1:44688/solr;,   
"node_name":"127.0.0.1:44688_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:35061_solr, 127.0.0.1:36319_solr, 127.0.0.1:37692_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node3":{
  "core":"raceDeleteReplica_false_shard1_replica_n1",
  "base_url":"https://127.0.0.1:44688/solr;,
  "node_name":"127.0.0.1:44688_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"https://127.0.0.1:44688/solr;,
  "node_name":"127.0.0.1:44688_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([62B5F353557AC082:8A392833D888A48]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:327)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 

[jira] [Commented] (SOLR-8868) SolrCloud: if zookeeper loses and then regains a quorum, Solr nodes and SolrJ Client do not recover and need to be restarted

2018-10-10 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645188#comment-16645188
 ] 

Erick Erickson commented on SOLR-8868:
--

I'm not entirely sure upgrading ZooKeeper would fix this, but it's at least 
worth checking if we do upgrade.

 

I had some trouble when I tried upgrading ZK but I haven't had a chance to 
pursue it yet.

> SolrCloud: if zookeeper loses and then regains a quorum, Solr nodes and SolrJ 
> Client do not recover and need to be restarted
> 
>
> Key: SOLR-8868
> URL: https://issues.apache.org/jira/browse/SOLR-8868
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Affects Versions: 5.3.1
>Reporter: Frank J Kelly
>Priority: Major
>
> Tried mailing list on 3/15 and 3/16 to no avail. Hopefully I gave enough 
> details.
> 
> Just wondering if my observation of SolrCloud behavior after ZooKeeper loses 
> a quorum is normal or to-be-expected
> Version of Solr: 5.3.1
> Version of ZooKeeper: 3.4.7
> Using SolrCloud with external ZooKeeper
> Deployed on AWS
> Our Solr cluster has 3 nodes (m3.large)
> Our Zookeeper ensemble consists of three nodes (t2.small) with the same 
> config using DNS names e.g.
> {noformat}
> $ more ../conf/zoo.cfg
> tickTime=2000
> dataDir=/var/zookeeper
> dataLogDir=/var/log/zookeeper
> clientPort=2181
> initLimit=10
> syncLimit=5
> standaloneEnabled=false
> server.1=zookeeper1.qa.eu-west-1.mysearch.com:2888:3888
> server.2=zookeeper2.qa.eu-west-1.mysearch.com:2888:3888
> server.3=zookeeper3.qa.eu-west-1.mysearch.com:2888:3888
> {noformat}
> If we terminate one of the ZooKeeper nodes we get a ZK election and (I think) 
> a quorum is maintained.
> Operation continues OK; we detect the terminated instance and relaunch a 
> new ZK node, which comes up fine.
> If we terminate two of the ZK nodes we lose a quorum and then we observe the 
> following
> 1.1) Admin UI shows an error that it is unable to contact ZooKeeper “Could 
> not connect to ZooKeeper"
> 1.2) SolrJ returns the following
> {noformat}
> org.apache.solr.common.SolrException: Could not load collection from 
> ZK:qa_eu-west-1_public_index
> at 
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:850)
> at org.apache.solr.common.cloud.ZkStateReader$7.get(ZkStateReader.java:515)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1205)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:837)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:805)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:86)
> at 
> com.here.scbe.search.solr.SolrFacadeImpl.addToSearchIndex(SolrFacadeImpl.java:112)
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for 
> /collections/qa_eu-west-1_public_index/state.json
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
> at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
> at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)
> at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
> at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)
> at 
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:841)
> ... 24 more
> {noformat}
> This makes sense based on our understanding.
> When our AutoScale groups launch two new ZooKeeper nodes, initialize them, 
> fix the DNS etc. we regain a quorum but at this point
> 2.1) Admin UI shows the shards as “GONE” (all greyed out)
> 2.2) SolrJ returns the same error even though the ZooKeeper DNS names are now 
> bound to new IP addresses
> So at this point I restart the Solr nodes. At this point then
> 3.1) Admin UI shows the collections as OK (all shards are green) – yeah the 
> nodes are back!
> 3.2) SolrJ Client still shows the same error – namely
> {noformat}
> org.apache.solr.common.SolrException: Could not load collection from 
> ZK:qa_eu-west-1_here_account
> at 
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:850)
> at org.apache.solr.common.cloud.ZkStateReader$7.get(ZkStateReader.java:515)
> at 
> 

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 182 - Unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/182/

2 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at 
https://127.0.0.1:35430/solr/collection1_shard2_replica_n3: Expected mime type 
application/octet-stream but got text/html. Error 404 
Can not find: /solr/collection1_shard2_replica_n3/update  
HTTP ERROR 404 Problem accessing 
/solr/collection1_shard2_replica_n3/update. Reason: Can not find: 
/solr/collection1_shard2_replica_n3/update. Powered by Jetty:// 9.4.11.v20180605
  

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:35430/solr/collection1_shard2_replica_n3: Expected 
mime type application/octet-stream but got text/html. 


Error 404 Can not find: 
/solr/collection1_shard2_replica_n3/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n3/update. Reason:
Can not find: 
/solr/collection1_shard2_replica_n3/update. Powered by Jetty:// 9.4.11.v20180605




at 
__randomizedtesting.SeedInfo.seed([4E95CC860476DB72:8C22F0EE07362B0A]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

[jira] [Commented] (SOLR-10981) Allow update to load gzip files

2018-10-10 Thread Andrew Lundgren (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645177#comment-16645177
 ] 

Andrew Lundgren commented on SOLR-10981:


Any other thoughts or suggestions on this?  Anything blocking acceptance now?

> Allow update to load gzip files 
> 
>
> Key: SOLR-10981
> URL: https://issues.apache.org/jira/browse/SOLR-10981
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 6.6
>Reporter: Andrew Lundgren
>Priority: Major
>  Labels: patch
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch, 
> SOLR-10981.patch, SOLR-10981.patch, SOLR-10981.patch
>
>
> We currently import large CSV files. We store them in gzip files as they 
> compress at around 80%.
> To import them we must first gunzip them; after the import we no longer need 
> the decompressed files.
> This patch allows directly opening either URLs or local files that are 
> gzipped.
> For URLs, to determine whether the file is gzipped, it checks whether the 
> content encoding is "gzip" or the file name ends in ".gz".
> For local files, if the file name ends in ".gz" it assumes the file is gzipped.
> I have tested the patch with 4.10.4, 6.6.0, 7.0.1 and master from git.
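
A rough sketch of the detection logic the description outlines (this is not the 
attached patch; the class and method names here are made up): wrap the stream in a 
GZIPInputStream when the content encoding is gzip or the name ends in ".gz":
{code:java}
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URLConnection;
import java.util.zip.GZIPInputStream;

public class GzipSourceSketch {

  // Local file: the ".gz" suffix alone decides whether to decompress.
  static InputStream openFile(String path) throws IOException {
    InputStream in = new BufferedInputStream(new FileInputStream(path));
    return path.endsWith(".gz") ? new GZIPInputStream(in) : in;
  }

  // URL: check the Content-Encoding header first, then fall back to the suffix.
  static InputStream openUrl(URLConnection conn) throws IOException {
    InputStream in = conn.getInputStream();
    boolean gzipped = "gzip".equalsIgnoreCase(conn.getContentEncoding())
        || conn.getURL().getPath().endsWith(".gz");
    return gzipped ? new GZIPInputStream(in) : in;
  }
}
{code}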



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7996) Should we require positive scores?

2018-10-10 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645147#comment-16645147
 ] 

Yonik Seeley commented on LUCENE-7996:
--

bq. WAND and other optimizations were the reason why I opened this issue and 
moved it forward

I understand why we wouldn't want to produce negative scores by default, as 
that would complicate or prevent such optimizations by default.
What I don't understand is what we gain by prohibiting negative scores across 
the board.  We can only do these optimizations in certain cases anyway, so we 
don't gain anything by prohibiting a function query (for example) from 
producing negative values.  This would seem to limit the use cases without any 
corresponding gain in optimization opportunities.


> Should we require positive scores?
> --
>
> Key: LUCENE-7996
> URL: https://issues.apache.org/jira/browse/LUCENE-7996
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-7996.patch, LUCENE-7996.patch, LUCENE-7996.patch
>
>
> Having worked on MAXSCORE recently, things would be simpler if we required 
> that scores are positive. Practically, this would mean 
>  - forbidding/fixing similarities that may produce negative scores (we have 
> some of them)
>  - forbidding things like negative boosts
> So I'd be curious to have opinions whether this would be a sane requirement 
> or whether we need to be able to cope with negative scores eg. because some 
> similarities that we want to support produce negative scores by design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #457: SOLR-12791: Add Metrics reporting for Authent...

2018-10-10 Thread janhoy
Github user janhoy commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/457#discussion_r224121448
  
--- Diff: solr/core/src/java/org/apache/solr/security/AuthenticationPlugin.java ---
@@ -52,11 +80,73 @@
   public abstract boolean doAuthenticate(ServletRequest request, ServletResponse response,
       FilterChain filterChain) throws Exception;
 
+  /**
+   * This method is called by SolrDispatchFilter in order to initiate authentication.
+   * It does some standard metrics counting.
+   */
+  public final boolean authenticate(ServletRequest request, ServletResponse response, FilterChain filterChain) throws Exception {
+    Timer.Context timer = requestTimes.time();
+    requests.inc();
+    try {
+      return doAuthenticate(request, response, filterChain);
+    } catch(Exception e) {
+      numErrors.mark();
+      throw e;
+    } finally {
+      long elapsed = timer.stop();
+      totalTime.inc(elapsed);
+    }
+  }
 
   /**
    * Cleanup any per request  data
    */
   public void closeRequest() {
   }
 
+  @Override
+  public void initializeMetrics(SolrMetricManager manager, String registryName, String tag, final String scope) {
+    this.metricManager = manager;
+    this.registryName = registryName;
+    // Metrics
+    registry = manager.registry(registryName);
+    numErrors = manager.meter(this, registryName, "errors", getCategory().toString(), scope);
+    numTimeouts = manager.meter(this, registryName, "timeouts", getCategory().toString(), scope);
+    requests = manager.counter(this, registryName, "requests", getCategory().toString(), scope);
+    numAuthenticated = manager.counter(this, registryName, "authenticated", getCategory().toString(), scope);
+    numPassThrough = manager.counter(this, registryName, "passThrough", getCategory().toString(), scope);
+    numWrongCredentials = manager.counter(this, registryName, "failWrongCredentials", getCategory().toString(), scope);
+    numInvalidCredentials = manager.counter(this, registryName, "failInvalidCredentials", getCategory().toString(), scope);
+    numMissingCredentials = manager.counter(this, registryName, "failMissingCredentials", getCategory().toString(), scope);
+    requestTimes = manager.timer(this, registryName, "requestTimes", getCategory().toString(), scope);
+    totalTime = manager.counter(this, registryName, "totalTime", getCategory().toString(), scope);
+    metricNames.addAll(Arrays.asList("errors", "timeouts", "requests", "authenticated", "passThrough",
+        "failWrongCredentials", "failMissingCredentials", "failInvalidCredentials", "requestTimes", "totalTime"));
+  }
+
+  @Override
+  public String getName() {
+    return this.getClass().getName();
+  }
+
+  @Override
+  public String getDescription() {
+    return this.getClass().getName();
--- End diff --

Done


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #457: SOLR-12791: Add Metrics reporting for Authent...

2018-10-10 Thread janhoy
Github user janhoy commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/457#discussion_r224121574
  
--- Diff: solr/core/src/java/org/apache/solr/security/AuthenticationPlugin.java ---
@@ -52,11 +80,73 @@
   public abstract boolean doAuthenticate(ServletRequest request, ServletResponse response,
       FilterChain filterChain) throws Exception;
 
+  /**
+   * This method is called by SolrDispatchFilter in order to initiate authentication.
+   * It does some standard metrics counting.
+   */
+  public final boolean authenticate(ServletRequest request, ServletResponse response, FilterChain filterChain) throws Exception {
+    Timer.Context timer = requestTimes.time();
+    requests.inc();
+    try {
+      return doAuthenticate(request, response, filterChain);
+    } catch(Exception e) {
+      numErrors.mark();
+      throw e;
+    } finally {
+      long elapsed = timer.stop();
+      totalTime.inc(elapsed);
+    }
+  }
 
   /**
    * Cleanup any per request  data
    */
   public void closeRequest() {
   }
 
+  @Override
+  public void initializeMetrics(SolrMetricManager manager, String registryName, String tag, final String scope) {
+    this.metricManager = manager;
+    this.registryName = registryName;
+    // Metrics
+    registry = manager.registry(registryName);
+    numErrors = manager.meter(this, registryName, "errors", getCategory().toString(), scope);
+    numTimeouts = manager.meter(this, registryName, "timeouts", getCategory().toString(), scope);
+    requests = manager.counter(this, registryName, "requests", getCategory().toString(), scope);
+    numAuthenticated = manager.counter(this, registryName, "authenticated", getCategory().toString(), scope);
+    numPassThrough = manager.counter(this, registryName, "passThrough", getCategory().toString(), scope);
+    numWrongCredentials = manager.counter(this, registryName, "failWrongCredentials", getCategory().toString(), scope);
--- End diff --

Done


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10.0.1) - Build # 829 - Still Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/829/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

11 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testEventQueue

Error Message:
action wasn't interrupted

Stack Trace:
java.lang.AssertionError: action wasn't interrupted
at 
__randomizedtesting.SeedInfo.seed([1180252F246882B2:D83567812D0F4447]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testEventQueue(TestSimTriggerIntegration.java:683)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  
junit.framework.TestSuite.org.apache.solr.rest.TestManagedResourceStorage

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([1180252F246882B2]:0)
at 

[jira] [Commented] (SOLR-8868) SolrCloud: if zookeeper loses and then regains a quorum, Solr nodes and SolrJ Client do not recover and need to be restarted

2018-10-10 Thread Michael (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645105#comment-16645105
 ] 

Michael commented on SOLR-8868:
---

Same issue on SolrCloud 7.3.1 on Kubernetes.

> SolrCloud: if zookeeper loses and then regains a quorum, Solr nodes and SolrJ 
> Client do not recover and need to be restarted
> 
>
> Key: SOLR-8868
> URL: https://issues.apache.org/jira/browse/SOLR-8868
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud, SolrJ
>Affects Versions: 5.3.1
>Reporter: Frank J Kelly
>Priority: Major
>
> Tried mailing list on 3/15 and 3/16 to no avail. Hopefully I gave enough 
> details.
> 
> Just wondering if my observation of SolrCloud behavior after ZooKeeper loses 
> a quorum is normal or to-be-expected
> Version of Solr: 5.3.1
> Version of ZooKeeper: 3.4.7
> Using SolrCloud with external ZooKeeper
> Deployed on AWS
> Our Solr cluster has 3 nodes (m3.large)
> Our Zookeeper ensemble consists of three nodes (t2.small) with the same 
> config using DNS names e.g.
> {noformat}
> $ more ../conf/zoo.cfg
> tickTime=2000
> dataDir=/var/zookeeper
> dataLogDir=/var/log/zookeeper
> clientPort=2181
> initLimit=10
> syncLimit=5
> standaloneEnabled=false
> server.1=zookeeper1.qa.eu-west-1.mysearch.com:2888:3888
> server.2=zookeeper2.qa.eu-west-1.mysearch.com:2888:3888
> server.3=zookeeper3.qa.eu-west-1.mysearch.com:2888:3888
> {noformat}
> If we terminate one of the zookeeper nodes we get a ZK election (and I think) 
> a quorum is maintained.
> Operation continues OK and we detect the terminated instance and relaunch a 
> new ZK node which comes up fine
> If we terminate two of the ZK nodes we lose a quorum and then we observe the 
> following
> 1.1) Admin UI shows an error that it is unable to contact ZooKeeper “Could 
> not connect to ZooKeeper"
> 1.2) SolrJ returns the following
> {noformat}
> org.apache.solr.common.SolrException: Could not load collection from 
> ZK:qa_eu-west-1_public_index
> at 
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:850)
> at org.apache.solr.common.cloud.ZkStateReader$7.get(ZkStateReader.java:515)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1205)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:837)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:805)
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:107)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:72)
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:86)
> at 
> com.here.scbe.search.solr.SolrFacadeImpl.addToSearchIndex(SolrFacadeImpl.java:112)
> Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
> KeeperErrorCode = ConnectionLoss for 
> /collections/qa_eu-west-1_public_index/state.json
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
> at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
> at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:345)
> at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:342)
> at 
> org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
> at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:342)
> at 
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:841)
> ... 24 more
> {noformat}
> This makes sense based on our understanding.
> When our AutoScale groups launch two new ZooKeeper nodes, initialize them, 
> fix the DNS etc. we regain a quorum but at this point
> 2.1) Admin UI shows the shards as “GONE” (all greyed out)
> 2.2) SolrJ returns the same error even though the ZooKeeper DNS names are now 
> bound to new IP addresses
> So at this point I restart the Solr nodes. At this point then
> 3.1) Admin UI shows the collections as OK (all shards are green) – yeah the 
> nodes are back!
> 3.2) SolrJ Client still shows the same error – namely
> {noformat}
> org.apache.solr.common.SolrException: Could not load collection from 
> ZK:qa_eu-west-1_here_account
> at 
> org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:850)
> at org.apache.solr.common.cloud.ZkStateReader$7.get(ZkStateReader.java:515)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1205)
> at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:837)
> at 
> 

[DISCUSS] Moving from Ant build to Gradle

2018-10-10 Thread Đạt Cao Mạnh
Hi all,

Recently I wanted to create another module in Solr to group all the common
dependencies of the Server and SolrJ modules. It seems that doing this kind of
thing is very painful, requiring hacks and adding support for the different
IDEs and for Maven. Should we consider moving to Gradle, which seems better
and more standard nowadays?

Thanks!
Dat


[jira] [Updated] (SOLR-12848) SolrJ does not use HTTP proxy anymore

2018-10-10 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12848:

Attachment: SOLR-12848.patch

> SolrJ does not use HTTP proxy anymore
> -
>
> Key: SOLR-12848
> URL: https://issues.apache.org/jira/browse/SOLR-12848
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.5
>Reporter: Andreas Hubold
>Priority: Major
>  Labels: httpclient
> Attachments: SOLR-12848.patch, SOLR-12848.patch
>
>
> SolrJ's HttpSolrClient ignores the HTTP proxy configuration from system 
> properties http.proxyHost and http.proxyPort. This used to work with Solr 
> 6.6.5.
> Solr 6.6.5 used org.apache.http.impl.client.SystemDefaultHttpClient under the 
> hood, which took system properties for HTTP proxy config into account. The 
> deprecated SystemDefaultHttpClient class was replaced as part of SOLR-4509. 
> SolrJ now uses org.apache.http.impl.client.HttpClientBuilder#create to create 
> an HttpClient, but it does not call #useSystemProperties on the builder. 
> Because of that, the proxy configuration from system properties is ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12848) SolrJ does not use HTTP proxy anymore

2018-10-10 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645067#comment-16645067
 ] 

Shawn Heisey commented on SOLR-12848:
-

Oops.  I've made a mistake there.  Will attach a new patch.


> SolrJ does not use HTTP proxy anymore
> -
>
> Key: SOLR-12848
> URL: https://issues.apache.org/jira/browse/SOLR-12848
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.5
>Reporter: Andreas Hubold
>Priority: Major
>  Labels: httpclient
> Attachments: SOLR-12848.patch
>
>
> SolrJ's HttpSolrClient ignores the HTTP proxy configuration from system 
> properties http.proxyHost and http.proxyPort. This used to work with Solr 
> 6.6.5.
> Solr 6.6.5 used org.apache.http.impl.client.SystemDefaultHttpClient under the 
> hood, which took system properties for HTTP proxy config into account. The 
> deprecated SystemDefaultHttpClient class was replaced as part of SOLR-4509. 
> SolrJ now uses org.apache.http.impl.client.HttpClientBuilder#create to create 
> an HttpClient, but it does not call #useSystemProperties on the builder. 
> Because of that, the proxy configuration from system properties is ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12848) SolrJ does not use HTTP proxy anymore

2018-10-10 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645063#comment-16645063
 ] 

Shawn Heisey commented on SOLR-12848:
-

After a quick examination of the code, I think the attached patch would fix 
this problem.


> SolrJ does not use HTTP proxy anymore
> -
>
> Key: SOLR-12848
> URL: https://issues.apache.org/jira/browse/SOLR-12848
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.5
>Reporter: Andreas Hubold
>Priority: Major
>  Labels: httpclient
> Attachments: SOLR-12848.patch
>
>
> SolrJ's HttpSolrClient ignores the HTTP proxy configuration from system 
> properties http.proxyHost and http.proxyPort. This used to work with Solr 
> 6.6.5.
> Solr 6.6.5 used org.apache.http.impl.client.SystemDefaultHttpClient under the 
> hood, which took system properties for HTTP proxy config into account. The 
> deprecated SystemDefaultHttpClient class was replaced as part of SOLR-4509. 
> SolrJ now uses org.apache.http.impl.client.HttpClientBuilder#create to create 
> an HttpClient, but it does not call #useSystemProperties on the builder. 
> Because of that, the proxy configuration from system properties is ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12848) SolrJ does not use HTTP proxy anymore

2018-10-10 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12848:

Attachment: SOLR-12848.patch

> SolrJ does not use HTTP proxy anymore
> -
>
> Key: SOLR-12848
> URL: https://issues.apache.org/jira/browse/SOLR-12848
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.5
>Reporter: Andreas Hubold
>Priority: Major
>  Labels: httpclient
> Attachments: SOLR-12848.patch
>
>
> SolrJ's HttpSolrClient ignores the HTTP proxy configuration from system 
> properties http.proxyHost and http.proxyPort. This used to work with Solr 
> 6.6.5.
> Solr 6.6.5 used org.apache.http.impl.client.SystemDefaultHttpClient under the 
> hood, which took system properties for HTTP proxy config into account. The 
> deprecated SystemDefaultHttpClient class was replaced as part of SOLR-4509. 
> SolrJ now uses org.apache.http.impl.client.HttpClientBuilder#create to create 
> an HttpClient, but it does not call #useSystemProperties on the builder. 
> Because of that, the proxy configuration from system properties is ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7996) Should we require positive scores?

2018-10-10 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645052#comment-16645052
 ] 

Adrien Grand commented on LUCENE-7996:
--

WAND and other optimizations were the reason why I opened this issue and moved 
it forward, though Robert made a good point in his initial comment on this 
issue that negative scores are not only an implementation problem for WAND and 
other optimizations; they also hurt relevance.

Doug mentioned that preventing negative scores might be an issue for LTR, but 
I've come to think that these optimizations are actually interesting for 
learning to rank, since simple models could leverage them. If they use a 
reasonable number of features (using FeatureField) and combine scores via a 
linear combination, then the resulting query could be a boolean query that 
efficiently skips irrelevant documents.
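
A minimal sketch of the kind of query described above, assuming Lucene's 
FeatureField API (the field and feature names below are made up; this is not 
code from any of the attached patches):

{code}
import org.apache.lucene.document.Document;
import org.apache.lucene.document.FeatureField;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BooleanClause.Occur;
import org.apache.lucene.search.BooleanQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermQuery;

class FeatureScoringSketch {
  // Index time: store non-negative static signals in a "features" field.
  static Document exampleDoc() {
    Document doc = new Document();
    doc.add(new FeatureField("features", "pagerank", 3.5f));
    doc.add(new FeatureField("features", "recency", 0.8f));
    return doc;
  }

  // Query time: a simple linear-style combination of text relevance and
  // static features. Every clause scores >= 0, so MAXSCORE/WAND-style
  // skipping of non-competitive documents remains possible.
  static Query exampleQuery() {
    return new BooleanQuery.Builder()
        .add(new TermQuery(new Term("body", "lucene")), Occur.SHOULD)
        .add(FeatureField.newSaturationQuery("features", "pagerank"), Occur.SHOULD)
        .add(FeatureField.newSaturationQuery("features", "recency"), Occur.SHOULD)
        .build();
  }
}
{code}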

> Should we require positive scores?
> --
>
> Key: LUCENE-7996
> URL: https://issues.apache.org/jira/browse/LUCENE-7996
> Project: Lucene - Core
>  Issue Type: Wish
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: LUCENE-7996.patch, LUCENE-7996.patch, LUCENE-7996.patch
>
>
> Having worked on MAXSCORE recently, things would be simpler if we required 
> that scores are positive. Practically, this would mean 
>  - forbidding/fixing similarities that may produce negative scores (we have 
> some of them)
>  - forbidding things like negative boosts
> So I'd be curious to have opinions whether this would be a sane requirement 
> or whether we need to be able to cope with negative scores eg. because some 
> similarities that we want to support produce negative scores by design.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12848) SolrJ does not use HTTP proxy anymore

2018-10-10 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12848?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16645044#comment-16645044
 ] 

Shawn Heisey commented on SOLR-12848:
-

My thought is to have HttpClient always honor the appropriate system 
properties, like we did before when we used SystemDefaultHttpClient, rather 
than adding another configuration variable and method to our builders.  Any 
objections?
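
For reference, a minimal sketch of that approach with the HttpClient builder 
API (illustration only, not the attached patch):

{code}
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;

class ProxyAwareClientSketch {
  static CloseableHttpClient build() {
    // useSystemProperties() makes the built client honor http.proxyHost,
    // http.proxyPort and related settings, much like the old
    // SystemDefaultHttpClient did.
    return HttpClientBuilder.create()
        .useSystemProperties()
        .build();
  }
}
{code}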


> SolrJ does not use HTTP proxy anymore
> -
>
> Key: SOLR-12848
> URL: https://issues.apache.org/jira/browse/SOLR-12848
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.5
>Reporter: Andreas Hubold
>Priority: Major
>  Labels: httpclient
>
> SolrJ's HttpSolrClient ignores the HTTP proxy configuration from system 
> properties http.proxyHost and http.proxyPort. This used to work with Solr 
> 6.6.5.
> Solr 6.6.5 used org.apache.http.impl.client.SystemDefaultHttpClient under the 
> hood, which took system properties for HTTP proxy config into account. The 
> deprecated SystemDefaultHttpClient class was replaced as part of SOLR-4509. 
> SolrJ now uses org.apache.http.impl.client.HttpClientBuilder#create to create 
> an HttpClient, but it does not call #useSystemProperties on the builder. 
> Because of that, the proxy configuration from system properties is ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-MAVEN] Lucene-Solr-Maven-master #2378: POMs out of sync

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2378/

No tests ran.

Build Log:
[...truncated 17795 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:672: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:209: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/build.xml:404:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:650:
 Error deploying artifact 'org.apache.lucene:lucene-solr-grandparent:pom': 
Error deploying artifact: Failed to transfer file: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-solr-grandparent/8.0.0-SNAPSHOT/lucene-solr-grandparent-8.0.0-20181010.135252-239.pom.
 Return code is: 401

Total time: 7 minutes 13 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS-MAVEN] Lucene-Solr-Maven-7.x #327: POMs out of sync

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-7.x/327/

No tests ran.

Build Log:
[...truncated 17822 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:672: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/build.xml:209: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/build.xml:404: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-7.x/lucene/common-build.xml:650:
 Error deploying artifact 'org.apache.lucene:lucene-solr-grandparent:pom': 
Error deploying artifact: Failed to transfer file: 
https://repository.apache.org/content/repositories/snapshots/org/apache/lucene/lucene-solr-grandparent/7.6.0-SNAPSHOT/lucene-solr-grandparent-7.6.0-20181010.134519-20.pom.
 Return code is: 401

Total time: 7 minutes 24 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-repro - Build # 1657 - Still Unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1657/

[...truncated 37 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2862/consoleText

[repro] Revision: 8d205ecd1c6a133f7cb9a4352388ec30d00b4bdb

[repro] Repro line:  ant test  -Dtestcase=CloudSolrClientTest 
-Dtests.method=testNonRetryableRequests -Dtests.seed=4C103E0F35EA6A68 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=de-CH 
-Dtests.timezone=Europe/Samara -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
095707d54717a745245fd2702779e02d8a46e9ce
[repro] git fetch
[repro] git checkout 8d205ecd1c6a133f7cb9a4352388ec30d00b4bdb

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/solrj
[repro]   CloudSolrClientTest
[repro] ant compile-test

[...truncated 2560 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CloudSolrClientTest" -Dtests.showOutput=onerror  
-Dtests.seed=4C103E0F35EA6A68 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=de-CH -Dtests.timezone=Europe/Samara -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 977 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   4/5 failed: org.apache.solr.client.solrj.impl.CloudSolrClientTest
[repro] git checkout 095707d54717a745245fd2702779e02d8a46e9ce

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Windows (64bit/jdk1.8.0_172) - Build # 7561 - Still Failing!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7561/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue.testDistributedQueueBlocking

Error Message:


Stack Trace:
java.util.concurrent.TimeoutException
at 
__randomizedtesting.SeedInfo.seed([3AB9F99A5713A5C1:7F138BE3134119B5]:0)
at java.util.concurrent.FutureTask.get(FutureTask.java:205)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimDistributedQueue.testDistributedQueueBlocking(TestSimDistributedQueue.java:102)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.autoscaling.sim.TestSimGenericDistributedQueue: 1) 
Thread[id=11223, name=sdqtest--2894-thread-1, state=TIMED_WAITING, 

[jira] [Created] (SOLR-12848) SolrJ does not use HTTP proxy anymore

2018-10-10 Thread Andreas Hubold (JIRA)
Andreas Hubold created SOLR-12848:
-

 Summary: SolrJ does not use HTTP proxy anymore
 Key: SOLR-12848
 URL: https://issues.apache.org/jira/browse/SOLR-12848
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrJ
Affects Versions: 7.5
Reporter: Andreas Hubold


SolrJ's HttpSolrClient ignores the HTTP proxy configuration from system 
properties http.proxyHost and http.proxyPort. This used to work with Solr 6.6.5.

Solr 6.6.5 used org.apache.http.impl.client.SystemDefaultHttpClient under the 
hood, which took system properties for HTTP proxy config into account. The 
deprecated SystemDefaultHttpClient class was replaced as part of SOLR-4509. 
SolrJ now uses org.apache.http.impl.client.HttpClientBuilder#create to create 
an HttpClient, but it does not call #useSystemProperties on the builder. 
Because of that, the proxy configuration from system properties is ignored.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1656 - Unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1656/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/176/consoleText

[repro] Revision: 8d205ecd1c6a133f7cb9a4352388ec30d00b4bdb

[repro] Repro line:  ant test  -Dtestcase=CheckHdfsIndexTest 
-Dtests.method=testChecksumsOnlyVerbose -Dtests.seed=A2D9AE70B998423F 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=en-SG -Dtests.timezone=America/Puerto_Rico -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=CheckHdfsIndexTest 
-Dtests.seed=A2D9AE70B998423F -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=en-SG 
-Dtests.timezone=America/Puerto_Rico -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testNodeLostTriggerRestoreState -Dtests.seed=A2D9AE70B998423F 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-QA -Dtests.timezone=America/Rankin_Inlet -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestSolrConfigHandlerCloud 
-Dtests.method=test -Dtests.seed=A2D9AE70B998423F -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=id-ID 
-Dtests.timezone=Pacific/Norfolk -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=TestSimPolicyCloud 
-Dtests.method=testCreateCollectionAddReplica -Dtests.seed=A2D9AE70B998423F 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=es-UY -Dtests.timezone=Asia/Saigon -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
095707d54717a745245fd2702779e02d8a46e9ce
[repro] git fetch
[repro] git checkout 8d205ecd1c6a133f7cb9a4352388ec30d00b4bdb

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimPolicyCloud
[repro]   TestSimTriggerIntegration
[repro]   CheckHdfsIndexTest
[repro]   TestSolrConfigHandlerCloud
[repro] ant compile-test

[...truncated 3424 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.TestSimPolicyCloud|*.TestSimTriggerIntegration|*.CheckHdfsIndexTest|*.TestSolrConfigHandlerCloud"
 -Dtests.showOutput=onerror  -Dtests.seed=A2D9AE70B998423F -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=es-UY 
-Dtests.timezone=Asia/Saigon -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 10631 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud
[repro]   0/5 failed: org.apache.solr.handler.TestSolrConfigHandlerCloud
[repro]   0/5 failed: org.apache.solr.index.hdfs.CheckHdfsIndexTest
[repro]   3/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro] git checkout 095707d54717a745245fd2702779e02d8a46e9ce

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 7 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 23005 - Failure!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23005/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseParallelGC

5 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:36309_solr, 
127.0.0.1:38693_solr, 127.0.0.1:46293_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node3":{   "core":"raceDeleteReplica_false_shard1_replica_n1",
   "base_url":"https://127.0.0.1:46411/solr;,   
"node_name":"127.0.0.1:46411_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"https://127.0.0.1:46411/solr;,   
"node_name":"127.0.0.1:46411_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:36309_solr, 127.0.0.1:38693_solr, 127.0.0.1:46293_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"raceDeleteReplica_false_shard1_replica_n1",
          "base_url":"https://127.0.0.1:46411/solr",
          "node_name":"127.0.0.1:46411_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"},
        "core_node6":{
          "core":"raceDeleteReplica_false_shard1_replica_n5",
          "base_url":"https://127.0.0.1:46411/solr",
          "node_name":"127.0.0.1:46411_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([291C4CA0323940F0:430A2D705ACB0A3A]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:327)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 342 - Failure

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/342/

11 tests failed.
FAILED:  org.apache.lucene.document.TestLatLonLineShapeQueries.testRandomBig

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([1EECD2878E39E4FB]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.document.TestLatLonLineShapeQueries

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([1EECD2878E39E4FB]:0)


FAILED:  
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCoresAreDistributedAcrossNodes

Error Message:
[127.0.0.1:39864_solr, 127.0.0.1:33769_solr] expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: [127.0.0.1:39864_solr, 127.0.0.1:33769_solr] 
expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([F44FD0660F7C220D:900F050AB4B2AA1C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCoresAreDistributedAcrossNodes(CollectionsAPIDistributedZkTest.java:351)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 937 - Failure

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/937/

1 tests failed.
FAILED:  org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting

Error Message:
Error from server at 
https://127.0.0.1:36861/solr/collection1_shard2_replica_n2: Expected mime type 
application/octet-stream but got text/html. Error 404: Can not find: 
/solr/collection1_shard2_replica_n2/update. HTTP ERROR 404. Problem accessing 
/solr/collection1_shard2_replica_n2/update. Reason: Can not find: 
/solr/collection1_shard2_replica_n2/update (Powered by Jetty 9.4.11.v20180605)

Stack Trace:
org.apache.solr.client.solrj.impl.CloudSolrClient$RouteException: Error from 
server at https://127.0.0.1:36861/solr/collection1_shard2_replica_n2: Expected 
mime type application/octet-stream but got text/html.

Error 404: Can not find: /solr/collection1_shard2_replica_n2/update

HTTP ERROR 404
Problem accessing /solr/collection1_shard2_replica_n2/update. Reason:
    Can not find: /solr/collection1_shard2_replica_n2/update
Powered by Jetty 9.4.11.v20180605

at 
__randomizedtesting.SeedInfo.seed([49835F707737C5C:C62F099F04338C24]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:551)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1016)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:949)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.impl.CloudSolrClientTest.testRouting(CloudSolrClientTest.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)

[jira] [Assigned] (LUCENE-8523) Fix typo for JapaneseNumberFilterFactory usage

2018-10-10 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reassigned LUCENE-8523:
-

Assignee: Alan Woodward

> Fix typo for JapaneseNumberFilterFactory usage
> --
>
> Key: LUCENE-8523
> URL: https://issues.apache.org/jira/browse/LUCENE-8523
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: ankush jhalani
>Assignee: Alan Woodward
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Javadocs for JapaneseNumberFilterFactory have a typo - 
> [https://lucene.apache.org/core/7_5_0/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseNumberFilterFactory.html]
> Instead of 
> 
> We should have 
> 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8523) Fix typo for JapaneseNumberFilterFactory usage

2018-10-10 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644910#comment-16644910
 ] 

Alan Woodward commented on LUCENE-8523:
---

Thanks for raising the issue!
LGTM.  I'm travelling currently but will commit in a few days time.

> Fix typo for JapaneseNumberFilterFactory usage
> --
>
> Key: LUCENE-8523
> URL: https://issues.apache.org/jira/browse/LUCENE-8523
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: ankush jhalani
>Assignee: Alan Woodward
>Priority: Minor
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Javadocs for JapaneseNumberFilterFactory have a typo - 
> [https://lucene.apache.org/core/7_5_0/analyzers-kuromoji/org/apache/lucene/analysis/ja/JapaneseNumberFilterFactory.html]
> Instead of 
> 
> We should have 
> 
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas

2018-10-10 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644872#comment-16644872
 ] 

Shalin Shekhar Mangar commented on SOLR-12739:
--

{code}
SOLR-12739: Release the policy session as soon as we're done with the 
computation.

This fixes the 
CollectionsAPIDistributedZkTest.testCoresAreDistributedAcrossNodes test 
failures. Due to the various tests for exceptional conditions, there were times 
when the session was not released, causing stale data to remain in the policy 
session cache.


master: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/50d1c7b4
branch_7x: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/a4cc66bd
{code}
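
In other words, the fix boils down to an acquire/compute/release-in-finally pattern. 
A minimal, self-contained sketch of that pattern is below; the Session class and 
method names are hypothetical stand-ins, not the actual Solr classes touched by this 
commit.

{code}
// Minimal sketch: the policy session is released in a finally block as soon as the
// placement computation is done, so a computation that throws can no longer leave
// stale data cached in the session. "Session" is a hypothetical stand-in.
public class PolicySessionDemo {

  /** Hypothetical stand-in for Solr's cached policy session. */
  static final class Session {
    String computePlacements(String collection) {
      return "placements for " + collection;     // pretend computation
    }
    void release() {
      System.out.println("session released");    // returned to the session cache
    }
  }

  static String placeReplicas(Session session, String collection) {
    try {
      return session.computePlacements(collection);
    } finally {
      session.release();   // runs even if computePlacements throws
    }
  }

  public static void main(String[] args) {
    System.out.println(placeReplicas(new Session(), "test_collection"));
  }
}
{code}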

> Make autoscaling policy based replica placement the default strategy for 
> placing replicas
> -
>
> Key: SOLR-12739
> URL: https://issues.apache.org/jira/browse/SOLR-12739
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, 
> SOLR-12739.patch, SOLR-12739.patch
>
>
> Today the default placement strategy is the same one used since Solr 4.x, 
> which is to select nodes in a round-robin fashion. I propose to make 
> autoscaling policy based replica placement the default policy for placing 
> replicas.
> This is related to SOLR-12648, where even though we have default cluster 
> preferences, we don't use them unless a policy is also configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas

2018-10-10 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644870#comment-16644870
 ] 

Shalin Shekhar Mangar commented on SOLR-12739:
--

Found and fixed a bug as well.
{code}
SOLR-12739: Use cluster instead of collection as the key for using legacy 
assignment.

master: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/940a7303
branch_7x: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/859559a3
{code}
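
The bug, in other words, was a wrong lookup key: the flag that selects legacy 
assignment lives under the "cluster" defaults, not under "collection". A toy sketch 
of the lookup is below; the nested-map layout and the property name 
useLegacyAssignment are illustrative assumptions, not the exact code in the commit.

{code}
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the wrong-key bug: the legacy-assignment flag is nested
// under the "cluster" defaults, so looking it up under "collection" always misses
// it. The property name "useLegacyAssignment" is a stand-in, not necessarily real.
public class LegacyAssignKeyDemo {

  @SuppressWarnings("unchecked")
  static boolean useLegacyAssignment(Map<String, Object> clusterProps) {
    Map<String, Object> defaults = (Map<String, Object>)
        clusterProps.getOrDefault("defaults", Collections.emptyMap());
    // Before the fix (conceptually): defaults.get("collection") -- the wrong key.
    Map<String, Object> cluster = (Map<String, Object>)
        defaults.getOrDefault("cluster", Collections.emptyMap());
    return Boolean.TRUE.equals(cluster.get("useLegacyAssignment"));
  }

  public static void main(String[] args) {
    Map<String, Object> cluster = new HashMap<>();
    cluster.put("useLegacyAssignment", true);
    Map<String, Object> defaults = new HashMap<>();
    defaults.put("cluster", cluster);
    Map<String, Object> props = new HashMap<>();
    props.put("defaults", defaults);
    System.out.println(useLegacyAssignment(props)); // prints: true
  }
}
{code}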

> Make autoscaling policy based replica placement the default strategy for 
> placing replicas
> -
>
> Key: SOLR-12739
> URL: https://issues.apache.org/jira/browse/SOLR-12739
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, 
> SOLR-12739.patch, SOLR-12739.patch
>
>
> Today the default placement strategy is the same one used since Solr 4.x, 
> which is to select nodes in a round-robin fashion. I propose to make 
> autoscaling policy based replica placement the default policy for placing 
> replicas.
> This is related to SOLR-12648, where even though we have default cluster 
> preferences, we don't use them unless a policy is also configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas

2018-10-10 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644867#comment-16644867
 ] 

Shalin Shekhar Mangar commented on SOLR-12739:
--

Pushed fix for {{AutoAddReplicasIntegrationTest.testSimple()}}

{code}
SOLR-12739: Fix failures in AutoAddReplicasIntegrationTest and its sub-class.

This test too makes assumptions about how replicas are placed. In the legacy 
assignment strategy, the replicas of a given collection are spread equally 
across all nodes, but with the new policy based strategy, all cores across 
collections are spread out. Therefore the assumptions in this test were wrong. 
I've changed this test to use the legacy assignment policy because testing the 
autoAddReplicas feature doesn't have to depend on new replica assignment 
strategies. This change also fixes a bug in Assign which used the "collection" 
key instead of "cluster" to figure out which strategy to use.


master: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/9f34a7c7
branch_7x: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/d431c1b6
{code}
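
If a test genuinely depends on the old spread-evenly behaviour, one way to express 
that is to pin the corresponding cluster property before the test creates its 
collections. The SolrJ sketch below uses the CLUSTERPROP request only for 
illustration; the property name "useLegacyAssignment" and the idea of flipping it 
this way are placeholder assumptions, not necessarily the switch this commit flips.

{code}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class LegacyAssignmentForTests {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("localhost:9983"), Optional.empty()).build()) {
      // Hypothetical property name: pin the legacy round-robin assignment so the
      // test's assumptions about replica placement keep holding.
      CollectionAdminRequest.setClusterProperty("useLegacyAssignment", "true")
          .process(client);
    }
  }
}
{code}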

> Make autoscaling policy based replica placement the default strategy for 
> placing replicas
> -
>
> Key: SOLR-12739
> URL: https://issues.apache.org/jira/browse/SOLR-12739
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, 
> SOLR-12739.patch, SOLR-12739.patch
>
>
> Today the default placement strategy is the same one used since Solr 4.x, 
> which is to select nodes in a round-robin fashion. I propose to make 
> autoscaling policy based replica placement the default policy for placing 
> replicas.
> This is related to SOLR-12648, where even though we have default cluster 
> preferences, we don't use them unless a policy is also configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2889 - Still Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2889/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseParallelGC

42 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([6434BA305606D163]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:835)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([6434BA305606D163]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:875)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1663 - Unstable

2018-10-10 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1663/

9 tests failed.
FAILED:  
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica

Error Message:
Expected new active leader null Live Nodes: [127.0.0.1:39827_solr, 
127.0.0.1:40762_solr, 127.0.0.1:40809_solr] Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
   "pullReplicas":"0",   "replicationFactor":"2",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node4":{   "core":"raceDeleteReplica_false_shard1_replica_n2",
   "base_url":"https://127.0.0.1:46302/solr;,   
"node_name":"127.0.0.1:46302_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node6":{   
"core":"raceDeleteReplica_false_shard1_replica_n5",   
"base_url":"https://127.0.0.1:46302/solr;,   
"node_name":"127.0.0.1:46302_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected new active leader
null
Live Nodes: [127.0.0.1:39827_solr, 127.0.0.1:40762_solr, 127.0.0.1:40809_solr]
Last available state: 
DocCollection(raceDeleteReplica_false//collections/raceDeleteReplica_false/state.json/13)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{
"core_node4":{
  "core":"raceDeleteReplica_false_shard1_replica_n2",
  "base_url":"https://127.0.0.1:46302/solr;,
  "node_name":"127.0.0.1:46302_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node6":{
  "core":"raceDeleteReplica_false_shard1_replica_n5",
  "base_url":"https://127.0.0.1:46302/solr;,
  "node_name":"127.0.0.1:46302_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([640F4B7793C9D7D3:E192AA7FB3B9D19]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:327)
at 
org.apache.solr.cloud.DeleteReplicaTest.raceConditionOnDeleteAndRegisterReplica(DeleteReplicaTest.java:223)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 

[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_172) - Build # 23004 - Still Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23004/
Java: 32bit/jdk1.8.0_172 -server -XX:+UseParallelGC

9 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/25)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10004_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/25)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10004_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([7D03D2000AE577CF:FD23B72E1BA69F69]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2100 - Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2100/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

11 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple

Error Message:
Waiting for collection testSimple1 null Live Nodes: [127.0.0.1:33624_solr, 
127.0.0.1:55491_solr] Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/21)={   
"pullReplicas":"0",   "replicationFactor":"2",   "shards":{ "shard1":{  
 "range":"8000-",   "state":"active",   "replicas":{
 "core_node4":{   "core":"testSimple1_shard1_replica_n3",   
"base_url":"https://127.0.0.1:33624/solr;,   
"node_name":"127.0.0.1:33624_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node12":{   "core":"testSimple1_shard1_replica_n11",   
"base_url":"https://127.0.0.1:33624/solr;,   
"node_name":"127.0.0.1:33624_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false"}}}, "shard2":{   
"range":"0-7fff",   "state":"active",   "replicas":{ 
"core_node6":{   "core":"testSimple1_shard2_replica_n5",   
"base_url":"https://127.0.0.1:33624/solr;,   
"node_name":"127.0.0.1:33624_solr",   "state":"active",   
"type":"NRT",   "force_set_state":"false",   "leader":"true"},  
   "core_node10":{   "core":"testSimple1_shard2_replica_n9",
   "base_url":"https://127.0.0.1:38687/solr;,   
"node_name":"127.0.0.1:38687_solr",   "state":"down",   
"type":"NRT",   "force_set_state":"false",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"true",   "nrtReplicas":"2",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Waiting for collection testSimple1
null
Live Nodes: [127.0.0.1:33624_solr, 127.0.0.1:55491_solr]
Last available state: 
DocCollection(testSimple1//collections/testSimple1/state.json/21)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{
"shard1":{
  "range":"8000-",
  "state":"active",
  "replicas":{
"core_node4":{
  "core":"testSimple1_shard1_replica_n3",
  "base_url":"https://127.0.0.1:33624/solr;,
  "node_name":"127.0.0.1:33624_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node12":{
  "core":"testSimple1_shard1_replica_n11",
  "base_url":"https://127.0.0.1:33624/solr;,
  "node_name":"127.0.0.1:33624_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false"}}},
"shard2":{
  "range":"0-7fff",
  "state":"active",
  "replicas":{
"core_node6":{
  "core":"testSimple1_shard2_replica_n5",
  "base_url":"https://127.0.0.1:33624/solr;,
  "node_name":"127.0.0.1:33624_solr",
  "state":"active",
  "type":"NRT",
  "force_set_state":"false",
  "leader":"true"},
"core_node10":{
  "core":"testSimple1_shard2_replica_n9",
  "base_url":"https://127.0.0.1:38687/solr;,
  "node_name":"127.0.0.1:38687_solr",
  "state":"down",
  "type":"NRT",
  "force_set_state":"false",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"true",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([7584D7D24ABE3B7C:4D37F32C6D4DEFAD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:280)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasIntegrationTest.testSimple(AutoAddReplicasIntegrationTest.java:107)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 

[jira] [Commented] (SOLR-12739) Make autoscaling policy based replica placement the default strategy for placing replicas

2018-10-10 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644594#comment-16644594
 ] 

Shalin Shekhar Mangar commented on SOLR-12739:
--

Pushed fix for {{CloudSolrClientTest.testNonRetryableRequests()}}

master: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/a66a7f31
branch_7x: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/b958e1be

> Make autoscaling policy based replica placement the default strategy for 
> placing replicas
> -
>
> Key: SOLR-12739
> URL: https://issues.apache.org/jira/browse/SOLR-12739
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12739.patch, SOLR-12739.patch, SOLR-12739.patch, 
> SOLR-12739.patch, SOLR-12739.patch
>
>
> Today the default placement strategy is the same one used since Solr 4.x, 
> which is to select nodes in a round-robin fashion. I propose to make 
> autoscaling policy based replica placement the default policy for placing 
> replicas.
> This is related to SOLR-12648, where even though we have default cluster 
> preferences, we don't use them unless a policy is also configured.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5664) New meaning of equal sign in StandardQueryParser

2018-10-10 Thread Martin Blom (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-5664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644561#comment-16644561
 ] 

Martin Blom commented on LUCENE-5664:
-

OK, I realize this late reply is just ridiculous, but ...

Yes, it's exactly the same comparison as the other operators, i.e. _name="some 
value"_ behaves just like _name>="some value" AND name<="some value"_.

 

> New meaning of equal sign in StandardQueryParser
> 
>
> Key: LUCENE-5664
> URL: https://issues.apache.org/jira/browse/LUCENE-5664
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/queryparser
>Affects Versions: 4.5, 4.8
>Reporter: Martin Blom
>Priority: Major
> Attachments: LUCENE-5664.patch
>
>
> The StandardSyntaxParser.jj has (undocumented?) support for the <, <=, > and 
> => operators that generate a TermRangeQueryNode. The equal operator, however, 
> behaves just like the colon and produces a regular Term node instead of a 
> TermRangeQueryNode.
> I've been using the attached patch in a project where we had to be able to 
> query the exact value of a field and I'm hoping there is interest to apply it 
> upstream.
> (Note that the colon operator works just as before, producing TermQuery or 
> PhraseQuery nodes.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12845) Add a default cluster policy

2018-10-10 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644547#comment-16644547
 ] 

Shalin Shekhar Mangar commented on SOLR-12845:
--

Linking to SOLR-12739

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12845.patch
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collection's replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12845) Add a default cluster policy

2018-10-10 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-12845:
-
Attachment: SOLR-12845.patch

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Attachments: SOLR-12845.patch
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collection's replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12845) Add a default cluster policy

2018-10-10 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644537#comment-16644537
 ] 

Shalin Shekhar Mangar commented on SOLR-12845:
--

bq. Maybe this policy should not be strict=false but the other two policies 
could be?

Actually, this has to be non-strict because it effectively sets 
maxShardsPerNode=1 for everyone. I've opened SOLR-12847 to cut over 
maxShardsPerNode to a policy rule, but until then this has to be strict=false.

My original proposal (in the description) had strict=false only for the first 
rule, but I have changed the defaults to use strict=false for all rules; 
otherwise it prevents Solr from performing operations that were possible 
earlier due to violations.

As I expected, there are tons of test failures with the new defaults due to the 
assumptions on how replicas will be placed. I'm working through those test 
failures.
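
For readers following along, installing a cluster policy like the one proposed here 
is a single set-cluster-policy call to the autoscaling API. The SolrJ sketch below 
assumes the v2 /cluster/autoscaling endpoint and a locally running cluster; treat it 
as an illustrative sketch rather than the exact change being worked on in this issue.

{code}
import java.util.Collections;
import java.util.Optional;

import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.V2Request;

public class SetDefaultClusterPolicy {
  public static void main(String[] args) throws Exception {
    // Non-strict rules mirroring the proposal in this issue's description.
    String setClusterPolicy = "{"
        + " 'set-cluster-policy': ["
        + "   {'replica': '<2', 'shard': '#EACH', 'node': '#ANY', 'strict': false},"
        + "   {'replica': '#EQUAL', 'node': '#ANY', 'strict': false},"
        + "   {'cores': '#EQUAL', 'node': '#ANY', 'strict': false}"
        + " ]}";

    try (CloudSolrClient client = new CloudSolrClient.Builder(
            Collections.singletonList("localhost:9983"), Optional.empty()).build()) {
      // POST the command to the v2 autoscaling endpoint.
      new V2Request.Builder("/cluster/autoscaling")
          .withMethod(SolrRequest.METHOD.POST)
          .withPayload(setClusterPolicy)
          .build()
          .process(client);
    }
  }
}
{code}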

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Shalin Shekhar Mangar
>Priority: Major
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collection's replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 876 - Unstable!

2018-10-10 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/876/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCoresAreDistributedAcrossNodes

Error Message:
[127.0.0.1:62957_solr, 127.0.0.1:62956_solr] expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: [127.0.0.1:62957_solr, 127.0.0.1:62956_solr] 
expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([648F6C778F21C1F7:CFB91B34EF49E6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest.testCoresAreDistributedAcrossNodes(CollectionsAPIDistributedZkTest.java:351)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  

Does ConcurrentMergeScheduler actually do smaller merges first?

2018-10-10 Thread Shawn Heisey
Before I open an issue, I would like to double-check my sanity and see if 
an issue is needed.


I have noticed that the javadoc for ConcurrentMergeScheduler says that 
it schedules smaller merges before larger merges.  In the past, I have 
seen evidence suggesting this is not actually the case, that it prefers 
larger merges first.


 background 

When importing millions of rows from a database using Solr's dataimport 
handler, the index will be merged quite frequently while that indexing 
occurs.  Eventually, it reaches a point where there are multiple merges 
scheduled simultaneously, so the ongoing indexing thread will be 
paused until the number of merges drops below maxMergeCount.


If the smallest merge were being done first, then I don't think the 
behavior I observed would happen.  What I would see happen in the 
past is that when a large merge gets scheduled, indexing is paused long 
enough for the database connection to time out and be disconnected, so 
when the import tries to resume indexing, it can't -- the source 
database connection is gone.  For MySQL databases, this timeout takes 
about ten minutes to happen. If the smallest merge had completed first, 
the count would have decreased long before the database connection could 
time out, and indexing would have resumed with no problems.


 end background 

The way that I have fixed this problem in the past is to increase 
maxMergeCount to 6.  When that's done, the incoming thread never gets 
paused, and the database connection doesn't time out.
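
For reference, the same knob can be set directly on the scheduler in plain Lucene 
(in Solr it normally lives in the mergeScheduler section of solrconfig.xml). A small 
sketch follows; the 6/1 values simply mirror the numbers mentioned above and say 
nothing about which merge runs first.

{code}
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.ConcurrentMergeScheduler;
import org.apache.lucene.index.IndexWriterConfig;

public class MergeSchedulerConfigDemo {
  public static void main(String[] args) {
    ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
    // Allow up to 6 queued merges before incoming indexing threads are stalled,
    // while still running only 1 merge thread at a time.
    cms.setMaxMergesAndThreads(6, 1);

    IndexWriterConfig iwc = new IndexWriterConfig(new StandardAnalyzer());
    iwc.setMergeScheduler(cms);
    // ... open an IndexWriter with iwc and index as usual ...
  }
}
{code}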


I can see that the default for maxMergeCount was changed from 2 to 6 in 
2014 by LUCENE-6119.  So 5.0 and later might not have the 
problems I encountered as long as the scheduler is left at defaults ... 
but I suspect that the running order of merges goes from larger to smaller, 
contrary to the javadoc.  The code is pretty dense and I haven't completely 
deciphered it yet.


Thanks,
Shawn


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org