[jira] [Resolved] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-08 Thread Uwe Schindler (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-8389.
---
Resolution: Won't Fix

Hi,
this is not a Lucene issue. Please ask Atlassian Support for help.

This is a bug tracker not a support forum, so unless there is a bug in an 
up-to-date Lucene version, please do not reopen this bug report.

> Could not limit Lucene's memory consumption
> ---
>
> Key: LUCENE-8389
> URL: https://issues.apache.org/jira/browse/LUCENE-8389
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
> Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |Database JNDI address|mysql 
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |Database driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
> jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> |Version|7.6.1|
>Reporter: changchun huang
>Assignee: Uwe Schindler
>Priority: Major
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1
> We configured 16GB Jira heap on 64GB server
> However, each time we run a background re-index, the memory is exhausted 
> by Lucene and we cannot limit its memory consumption.
> This will definitely cause overall performance issues on a system with 
> heavy load.
> We have around 500 concurrent users and 400K issues.
> Could you please advise whether there is a workaround or fix for this?
> Thanks.
>  
> BTW: I searched a lot and found a blog post introducing the new behavior of 
> Lucene 3.3:
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>  
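The gist of that blog post is that MMapDirectory maps index files into the process's virtual address space, so the bytes live in the OS page cache rather than on the Java heap; "memory used" figures therefore mostly reflect virtual address space, not heap pressure. A minimal self-contained sketch of that idea (Python for brevity; names and sizes are illustrative, this is not Lucene code):

```python
import mmap
import os
import tempfile

# Write a 4 MiB stand-in for an index file.
path = os.path.join(tempfile.mkdtemp(), "segment.dat")
with open(path, "wb") as f:
    f.write(b"\x2a" * (4 * 1024 * 1024))

# Map the file instead of read()-ing it into a buffer: pages are served
# lazily from the OS page cache and can be evicted under memory pressure,
# so the process heap stays small. This is the core idea behind Lucene's
# MMapDirectory on 64-bit JVMs, as the blog post explains.
with open(path, "rb") as f, mmap.mmap(f.fileno(), 0,
                                      access=mmap.ACCESS_READ) as m:
    size = len(m)      # the whole file is addressable...
    byte = m[123456]   # ...but pages fault in only on access

print("mapped", size, "bytes; probed byte =", byte)
```

The JVM analogue is that mapped index data does not count against `-Xmx`, which is why capping the JIRA heap does not cap what the OS reports for the process.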



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Closed] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-08 Thread Uwe Schindler (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler closed LUCENE-8389.
-




[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22426 - Failure!

2018-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22426/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseSerialGC

9 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.api.collections.CollectionsAPIDistributedZkTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([DA12665BCBF46041]:0)


FAILED:  junit.framework.TestSuite.org.apache.solr.handler.component.InfixSuggestersTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([DA12665BCBF46041]:0)


FAILED:  org.apache.solr.cloud.TestPullReplica.testRemoveAllWriterReplicas

Error Message:
Replica core_node4 not up to date after 10 seconds expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Replica core_node4 not up to date after 10 seconds expected:<1> but was:<0>
at __randomizedtesting.SeedInfo.seed([DA12665BCBF46041:491CFA60DF0A89E8]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:542)
at org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:490)
at org.apache.solr.cloud.TestPullReplica.testRemoveAllWriterReplicas(TestPullReplica.java:303)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-11694) Remove extremely outdated UIMA contrib module

2018-07-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536524#comment-16536524
 ] 

David Smiley commented on SOLR-11694:
-

Thanks Alexandre!

> Remove extremely outdated UIMA contrib module
> -
>
> Key: SOLR-11694
> URL: https://issues.apache.org/jira/browse/SOLR-11694
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - UIMA
>Reporter: Cassandra Targett
>Assignee: Alexandre Rafalovitch
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11694.patch
>
>
> A user on the [solr-user mailing list back in 
> June|https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201706.mbox/%3CCANsk%2BC_PvZJ38AQ2VfzKRYSQn6c8b33kGvaXxR3qNS3GQ4VUKA%40mail.gmail.com%3E]
>  brought up the fact that IBM has bought Alchemy and keys are no longer 
> available to use Solr's UIMA contrib.
> Someone recently made a [similar 
> comment|https://lucene.apache.org/solr/guide/7_1/uima-integration.html#comment_7174]
>  on the Solr Ref Guide page, asking for a patch.
> I know next to nothing about UIMA, but figured it's worth an issue to 
> determine what to do here. I think folks are saying it's no longer usable? Or 
> maybe only usable by people who already have keys (which will possibly expire 
> at some point)?
> Anyone have an idea what needs to be done here? It seems we should have some 
> kind of answer, but if it's no longer usable perhaps we should retire the 
> contrib.






[jira] [Commented] (LUCENE-8386) Maybe a DocIdSetIterator may implement Bits?

2018-07-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536515#comment-16536515
 ] 

David Smiley commented on LUCENE-8386:
--

I think I see what you're getting at, given my first mention of Bits vs BitSet; 
maybe we confused each other ;) 

I'm pointing out that ConjunctionDISI makes a special optimization for any of its 
input DISIs of type BitSetIterator, and I think that's a shame, since someone 
might have a similar DISI that is not precisely a BitSetIterator.  
From that observation, I hypothesized that if a DISI could expose an optional Bits 
somehow, then ConjunctionDISI could apply its optimization more generically.
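The generic form of that optimization can be sketched outside Lucene: intersect one sorted doc-ID iterator with clauses that offer random access via a Bits-like `get(doc)`, instead of requiring each such clause to be a BitSetIterator specifically. This is only an illustrative Python sketch; `intersect`, `BitsFromSet`, and `get` are made-up names, not Lucene's API.

```python
def intersect(lead, bits_clauses):
    """lead: sorted doc-id iterator; bits_clauses: objects with get(doc)."""
    out = []
    for doc in lead:
        # Probe each random-access clause in O(1) rather than advancing
        # a second iterator -- the ConjunctionDISI-style shortcut.
        if all(b.get(doc) for b in bits_clauses):
            out.append(doc)
    return out

class BitsFromSet:
    """Any structure answering get(doc) in O(1) qualifies; it need not be
    a FixedBitSet/BitSetIterator, which is the point of the proposal."""
    def __init__(self, docs):
        self._docs = frozenset(docs)

    def get(self, doc):
        return doc in self._docs

print(intersect([1, 3, 5, 8, 13], [BitsFromSet({3, 5, 21})]))  # -> [3, 5]
```

With an optional `Bits` view on DISIs, the same shortcut would apply to `DocIdSetIterator.all(...)` and custom implementations alike.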

> Maybe a DocIdSetIterator may implement Bits?
> 
>
> Key: LUCENE-8386
> URL: https://issues.apache.org/jira/browse/LUCENE-8386
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: David Smiley
>Priority: Minor
>
> I was looking at ConjunctionDISI and noted the special-case logic for DISIs 
> of type BitSetIterator. It seems to only need the more minimal Bits interface, 
> though it makes references to BitSet specifically.   BitSetIterator is a 
> concrete class; it would be nice if a DISI could either implement an optional 
> interface to expose a Bits or implement Bits directly.  This would 
> allow other/custom DISIs to provide a Bits view quickly without being 
> forced to use BitSetIterator specifically.  Even DocIdSetIterator.all(...) 
> could implement this.






[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-07-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536510#comment-16536510
 ] 

David Smiley commented on SOLR-12441:
-

I'm still not clear why the right-hand side pound is necessary; I think the 
left-hand (leading) pound is sufficient. e.g. this is fine:
 {{child#0/grandchild#0}}
{quote}Do you mean to strip childNum off the _NEST_PATH field?
{quote}
Yes – of the _indexed_ form *but not* the _stored_ form. The indexed form would 
look like {{child/grandchild}}. For an exact match (not all ancestors or all 
descendants), we can index using KeywordTokenizerFactory. In your Peyton 
Manning example, this would mean your child filter would be {{_NEST_PATH_:from 
AND name:Peyton*}}. See PathHierarchyTokenizerFactoryTest for the descendants 
vs. ancestors distinction, handled via two differently indexed fields, if we 
have use-cases involving descendants and ancestors.  With some tricks we could 
use one field if we need all 3 (exact, descendants, ancestors).
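The stored-vs-indexed distinction being proposed can be sketched as a one-liner: the stored path keeps the child position (`child#0/grandchild#0`) while the indexed form strips the `#<num>` suffixes so an exact-path query matches regardless of position. Illustrative Python only; `indexed_nest_path` is a made-up name, not Solr code.

```python
import re

def indexed_nest_path(stored_path: str) -> str:
    # Drop every "#<num>" child-position suffix from the stored
    # _NEST_PATH_ value to get the form that would be indexed.
    return re.sub(r"#\d+", "", stored_path)

print(indexed_nest_path("child#0/grandchild#0"))  # -> child/grandchild
```

An exact-match query against the indexed form (via KeywordTokenizerFactory) would then be `_NEST_PATH_:child/grandchild`.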

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 6h 50m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be a URP that adds metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, as an int value. It can be used for the parentFilter, eliminating 
> the need to provide one explicitly; it will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char, e.g. '.',
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}






[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-07-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r200871822
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/TestNestedUpdateProcessor.java ---
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update;
+
+import java.util.List;
+
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.update.processor.NestedUpdateProcessorFactory;
+import org.apache.solr.update.processor.UpdateRequestProcessor;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+public class TestNestedUpdateProcessor extends SolrTestCaseJ4 {
+
+  private static final char PATH_SEP_CHAR = '/';
+  private static final String[] childrenIds = { "2", "3" };
+  private static final String grandChildId = "4";
+  private static final String jDoc = "{\n" +
+  "\"add\": {\n" +
+  "\"doc\": {\n" +
+  "\"id\": \"1\",\n" +
+  "\"children\": [\n" +
+  "{\n" +
+  "\"id\": \"2\",\n" +
+  "\"foo_s\": \"Yaz\"\n" +
+  "\"grandChild\": \n" +
+  "  {\n" +
+  " \"id\": \""+ grandChildId + "\",\n" +
+  " \"foo_s\": \"Jazz\"\n" +
+  "  },\n" +
+  "},\n" +
+  "{\n" +
+  "\"id\": \"3\",\n" +
+  "\"foo_s\": \"Bar\"\n" +
+  "}\n" +
+  "]\n" +
+  "}\n" +
+  "}\n" +
+  "}";
+
+  private static final String noIdChildren = "{\n" +
+  "\"add\": {\n" +
+  "\"doc\": {\n" +
+  "\"id\": \"1\",\n" +
+  "\"children\": [\n" +
+  "{\n" +
+  "\"foo_s\": \"Yaz\"\n" +
+  "\"grandChild\": \n" +
+  "  {\n" +
+  " \"foo_s\": \"Jazz\"\n" +
+  "  },\n" +
+  "},\n" +
+  "{\n" +
+  "\"foo_s\": \"Bar\"\n" +
+  "}\n" +
+  "]\n" +
+  "}\n" +
+  "}\n" +
+  "}";
+
+  private static final String errDoc = "{\n" +
+  "\"add\": {\n" +
+  "\"doc\": {\n" +
+  "\"id\": \"1\",\n" +
+  "\"children" + PATH_SEP_CHAR + "a\": [\n" +
+  "{\n" +
+  "\"id\": \"2\",\n" +
+  "\"foo_s\": \"Yaz\"\n" +
+  "\"grandChild\": \n" +
+  "  {\n" +
+  " \"id\": \""+ grandChildId + "\",\n" +
+  " \"foo_s\": \"Jazz\"\n" +
+  "  },\n" +
+  "},\n" +
+  "{\n" +
+  "\"id\": \"3\",\n" +
+  "\"foo_s\": \"Bar\"\n" +
+  "}\n" +
+  "]\n" +
+  "}\n" +
+  "}\n" +
+  "}";
+
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+initCore("solrconfig-update-processor-chains.xml", "schema15.xml");
+  }
+
+  @Before
+  public void 

[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-07-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r200871182
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/NestedUpdateProcessorFactory.java
 ---
@@ -0,0 +1,126 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.io.IOException;
+import java.util.Collection;
+
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.update.AddUpdateCommand;
+
+public class NestedUpdateProcessorFactory extends 
UpdateRequestProcessorFactory {
+
+  public UpdateRequestProcessor getInstance(SolrQueryRequest req, 
SolrQueryResponse rsp, UpdateRequestProcessor next ) {
+boolean storeParent = shouldStoreDocParent(req.getSchema());
+boolean storePath = shouldStoreDocPath(req.getSchema());
+if(!(storeParent || storePath)) {
+  return next;
+}
+return new NestedUpdateProcessor(req, rsp, 
shouldStoreDocParent(req.getSchema()), shouldStoreDocPath(req.getSchema()), 
next);
+  }
+
+  private static boolean shouldStoreDocParent(IndexSchema schema) {
+return 
schema.getFields().containsKey(IndexSchema.NEST_PARENT_FIELD_NAME);
+  }
+
+  private static boolean shouldStoreDocPath(IndexSchema schema) {
+return 
schema.getFields().containsKey(IndexSchema.NEST_PATH_FIELD_NAME);
+  }
+}
+
+class NestedUpdateProcessor extends UpdateRequestProcessor {
+  private static final String PATH_SEP_CHAR = "/";
+  private static final String NUM_SEP_CHAR = "#";
+  private static final String SINGULAR_VALUE_CHAR = " ";
+  private boolean storePath;
+  private boolean storeParent;
+  private String uniqueKeyFieldName;
+
+
+  protected NestedUpdateProcessor(SolrQueryRequest req, SolrQueryResponse 
rsp, boolean storeParent, boolean storePath, UpdateRequestProcessor next) {
+super(next);
+this.storeParent = storeParent;
+this.storePath = storePath;
+this.uniqueKeyFieldName = 
req.getSchema().getUniqueKeyField().getName();
+  }
+
+  @Override
+  public void processAdd(AddUpdateCommand cmd) throws IOException {
+SolrInputDocument doc = cmd.getSolrInputDocument();
+processDocChildren(doc, null);
+super.processAdd(cmd);
+  }
+
+  private void processDocChildren(SolrInputDocument doc, String fullPath) {
+for(SolrInputField field: doc.values()) {
+  int childNum = 0;
+  boolean isSingleVal = !(field.getValue() instanceof Collection);
+  for(Object val: field) {
+if(!(val instanceof SolrInputDocument)) {
+  // either all collection items are child docs or none are.
+  break;
+}
+final String fieldName = field.getName();
+
+if(fieldName.contains(PATH_SEP_CHAR)) {
+  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, 
"Field name: '" + fieldName
+  + "' contains: '" + PATH_SEP_CHAR + "' , which is reserved 
for the nested URP");
+}
+final String sChildNum = isSingleVal ? SINGULAR_VALUE_CHAR : 
String.valueOf(childNum);
+SolrInputDocument cDoc = (SolrInputDocument) val;
+if(!cDoc.containsKey(uniqueKeyFieldName)) {
+  String parentDocId = 
doc.getField(uniqueKeyFieldName).getFirstValue().toString();
+  cDoc.setField(uniqueKeyFieldName, 
generateChildUniqueId(parentDocId, fieldName, sChildNum));
+}
+final String lastPath = fieldName + NUM_SEP_CHAR + sChildNum + 
NUM_SEP_CHAR;
+final String jointPath = fullPath == null ? lastPath : fullPath + 
PATH_SEP_CHAR + lastPath;
+
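The path construction visible in the diff above can be restated as a standalone sketch (Python; `joint_path` is an illustrative name, and `generateChildUniqueId` is omitted because its body is not shown in the diff):

```python
# Constants mirror the diff: PATH_SEP_CHAR "/", NUM_SEP_CHAR "#",
# SINGULAR_VALUE_CHAR " " (used when the field is not a collection).
PATH_SEP, NUM_SEP, SINGULAR = "/", "#", " "

def joint_path(full_path, field_name, child_num, single_valued):
    # lastPath = fieldName + "#" + sChildNum + "#"
    s_child = SINGULAR if single_valued else str(child_num)
    last = field_name + NUM_SEP + s_child + NUM_SEP
    # jointPath = fullPath == null ? lastPath : fullPath + "/" + lastPath
    return last if full_path is None else full_path + PATH_SEP + last

# A multi-valued "children" field's first child, then its single-valued
# "grandChild":
p = joint_path(None, "children", 0, False)
print(joint_path(p, "grandChild", 0, True))  # -> children#0#/grandChild# #
```

This makes it easy to see why the URP rejects field names containing `/`: that character is reserved as the path separator.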

[jira] [Commented] (SOLR-12343) JSON Field Facet refinement can return incorrect counts/stats for sorted buckets

2018-07-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536494#comment-16536494
 ] 

Hoss Man commented on SOLR-12343:
-

Found one – it seems to be specific to the situation where {{overrequest==0}} 
and the facet is nested under another facet?

Playing with the values of {{top_over}} and {{top_refine}}, it doesn't seem to 
matter if the parent facet is refined, but the key is whether the top facet also 
uses {{overrequest:0}} (fails) or {{overrequest:999}} (passes)

 
{noformat}
   [junit4]   2> 9990 INFO  (qtp1276305453-48) [x:collection1] 
o.a.s.c.S.Request [collection1]  webapp=/solr path=/select 
params={df=text=false&_facet_={}=id=score=1048580=0=true=127.0.0.1:47372/solr/collection1=0=2=*:*={+all:{+type:terms,+field:all_ss,+limit:1,+refine:true,+overrequest:0+++,+facet:{+++cat_count:{+type:terms,+field:cat_s,+limit:3,+overrequest:0+++,+refine:true,+sort:'count+asc'+},+++cat_price:{+type:terms,+field:cat_s,+limit:3,+overrequest:0+++,+refine:true,+sort:'sum_p+asc',+facet:+{+sum_p:+'sum(price_i)'+}+}}+}+}=1531102182236=true=javabin}
 hits=9 status=0 QTime=17
   [junit4]   2> 9994 INFO  (qtp1276305453-49) [x:collection1] 
o.a.s.c.S.Request [collection1]  webapp=/solr path=/select 
params={df=text=false&_facet_={"refine":{"all":{"_p":[["z_all",{"cat_count":{"_l":["A","B","C"]},"cat_price":{"_l":["A","B","C"]}}]]}}}=2097152=127.0.0.1:47372/solr/collection1=0=2=*:*={+all:{+type:terms,+field:all_ss,+limit:1,+refine:true,+overrequest:0+++,+facet:{+++cat_count:{+type:terms,+field:cat_s,+limit:3,+overrequest:0+++,+refine:true,+sort:'count+asc'+},+++cat_price:{+type:terms,+field:cat_s,+limit:3,+overrequest:0+++,+refine:true,+sort:'sum_p+asc',+facet:+{+sum_p:+'sum(price_i)'+}+}}+}+}=1531102182236=true=false=javabin}
 hits=9 status=0 QTime=1
   [junit4]   2> 9996 INFO  (qtp1503674478-65) [x:collection1] 
o.a.s.c.S.Request [collection1]  webapp=/solr path=/select 
params={shards=127.0.0.1:54950/solr/collection1,127.0.0.1:47372/solr/collection1,127.0.0.1:52833/solr/collection1=debugQuery=true=*:*={+all:{+type:terms,+field:all_ss,+limit:1,+refine:true,+overrequest:0+++,+facet:{+++cat_count:{+type:terms,+field:cat_s,+limit:3,+overrequest:0+++,+refine:true,+sort:'count+asc'+},+++cat_price:{+type:terms,+field:cat_s,+limit:3,+overrequest:0+++,+refine:true,+sort:'sum_p+asc',+facet:+{+sum_p:+'sum(price_i)'+}+}}+}+}=true=0=json=2.2}
 hits=19 status=0 QTime=25
   [junit4]   2> 9997 ERROR 
(TEST-TestJsonFacetRefinement.testSortedFacetRefinementPushingNonRefinedBucketBackIntoTopN-seed#[775BF43EF8268D50])
 [] o.a.s.SolrTestCaseHS query failed JSON validation. error=mismatch: 
'X'!='C' @ facets/all/buckets/[0]/cat_count/buckets/[2]/val
   [junit4]   2>  expected =facets=={ count: 19,all:{ buckets:[   { val:z_all, 
count: 19,cat_count:{ buckets:[  {val:A,count:1},   
  {val:B,count:1}, {val:X,count:4},] },cat_price:{ 
buckets:[  {val:A,count:1,sum_p:1.0}, 
{val:B,count:1,sum_p:1.0}, {val:X,count:4,sum_p:4.0},] }} ] 
} }
   [junit4]   2>  response = {
   [junit4]   2>   "responseHeader":{
   [junit4]   2> "status":0,
   [junit4]   2> "QTime":25},
   [junit4]   2>   "response":{"numFound":19,"start":0,"maxScore":1.0,"docs":[]
   [junit4]   2>   },
   [junit4]   2>   "facets":{
   [junit4]   2> "count":19,
   [junit4]   2> "all":{
   [junit4]   2>   "buckets":[{
   [junit4]   2>   "val":"z_all",
   [junit4]   2>   "count":19,
   [junit4]   2>   "cat_price":{
   [junit4]   2> "buckets":[{
   [junit4]   2> "val":"A",
   [junit4]   2> "count":1,
   [junit4]   2> "sum_p":1.0},
   [junit4]   2>   {
   [junit4]   2> "val":"B",
   [junit4]   2> "count":1,
   [junit4]   2> "sum_p":1.0},
   [junit4]   2>   {
   [junit4]   2> "val":"C",
   [junit4]   2> "count":6,
   [junit4]   2> "sum_p":6.0}]},
   [junit4]   2>   "cat_count":{
   [junit4]   2> "buckets":[{
   [junit4]   2> "val":"A",
   [junit4]   2> "count":1},
   [junit4]   2>   {
   [junit4]   2> "val":"B",
   [junit4]   2> "count":1},
   [junit4]   2>   {
   [junit4]   2> "val":"C",
   [junit4]   2> "count":6}]}}]}}}
   [junit4]   2> 
   [junit4]   2> 1 INFO  
(TEST-TestJsonFacetRefinement.testSortedFacetRefinementPushingNonRefinedBucketBackIntoTopN-seed#[775BF43EF8268D50])
 [] o.a.s.SolrTestCaseJ4 ###Ending 
testSortedFacetRefinementPushingNonRefinedBucketBackIntoTopN

[jira] [Commented] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536491#comment-16536491
 ] 

ASF subversion and git services commented on SOLR-12412:


Commit fddf35cfebd3f612a5e5089e76aa02b105209e6d in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fddf35c ]

SOLR-12412: Leader should give up leadership when IndexWriter.tragedy occur


> Leader should give up leadership when IndexWriter.tragedy occur
> ---
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader hits some kind of unrecoverable exception (e.g. a tragedy such 
> as CorruptIndexException), the shard becomes read-only and a human has to 
> intervene. In that case, it would be best if the leader gives up its 
> leadership and lets another replica become the leader. 






[jira] [Commented] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-08 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536490#comment-16536490
 ] 

ASF subversion and git services commented on SOLR-12412:


Commit 119717611094c755b271db6e7a8614fe9406bb5e in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1197176 ]

SOLR-12412: Leader should give up leadership when IndexWriter.tragedy occur








[jira] [Commented] (SOLR-12343) JSON Field Facet refinement can return incorrect counts/stats for sorted buckets

2018-07-08 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536481#comment-16536481
 ] 

Hoss Man commented on SOLR-12343:
-

which assertion?  stacktrace? reproduce line? .. does the seed actually 
reproduce?

There's virtually no randomization in the test at all, except for the number of 
filler terms/overrequest.

If you're seeing a seed that reproduces, it makes me wonder if 
there is an edge case / off-by-one error based on the number of buckets ... if 
the seed doesn't reproduce (reliably), then it makes me wonder if it's an edge 
case that has to do with the order in which the shards respond (ie: how the 
merger initializes the datastructs that get merged)

> JSON Field Facet refinement can return incorrect counts/stats for sorted 
> buckets
> 
>
> Key: SOLR-12343
> URL: https://issues.apache.org/jira/browse/SOLR-12343
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Major
> Attachments: SOLR-12343.patch, SOLR-12343.patch, SOLR-12343.patch, 
> SOLR-12343.patch, SOLR-12343.patch
>
>
> The way JSON Facet's simple refinement "re-sorts" buckets after refinement 
> can cause _refined_ buckets to be "bumped out" of the topN based on the 
> refined counts/stats depending on the sort - causing _unrefined_ buckets 
> originally discounted in phase#2 to bubble up into the topN and be returned 
> to clients *with inaccurate counts/stats*
> The simplest way to demonstrate this bug (in some data sets) is with a 
> {{sort: 'count asc'}} facet:
>  * assume shard1 returns termX & termY in phase#1 because they have very low 
> shard1 counts
>  ** but *not* returned at all by shard2, because these terms both have very 
> high shard2 counts.
>  * Assume termX has a slightly lower shard1 count than termY, such that:
>  ** termX "makes the cut" for the limit=N topN buckets
>  ** termY does not make the cut, and is the "N+1" known bucket at the end of 
> phase#1
>  * termX then gets included in the phase#2 refinement request against shard2
>  ** termX now has a much higher _known_ total count than termY
>  ** the coordinator now sorts termX "worse" in the sorted list of buckets 
> than termY
>  ** which causes termY to bubble up into the topN
>  * termY is ultimately included in the final result _with incomplete 
> count/stat/sub-facet data_ instead of termX
>  ** this is all independent of the possibility that termY may actually have a 
> significantly higher total count than termX across the entire collection
>  ** the key problem is that all/most of the other terms returned to the 
> client have counts/stats that are the accumulation of all shards, but termY 
> only has the contributions from shard1
> Important Notes:
>  * This scenario can happen regardless of the amount of overrequest used. 
> Additional overrequest just increases the number of "extra" terms needed in 
> the index with "better" sort values than termX & termY in shard2
>  * {{sort: 'count asc'}} is not just an exceptional/pathological case:
>  ** any function sort where additional data provided by shards during refinement 
> can cause a bucket to "sort worse" can also cause this problem.
>  ** Examples: {{sum(price_i) asc}} , {{min(price_i) desc}} , {{avg(price_i) 
> asc|desc}} , etc...
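The termX/termY scenario above can be reproduced with a toy coordinator. This is a deliberately simplified model of the two-phase merge (not Solr's actual refinement code): phase #1 collects per-shard buckets, only the topN known bucket is refined against shard2, and the post-refinement re-sort lets the unrefined bucket bubble up:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Toy model of JSON Facet two-phase refinement with sort:"count asc".
// Simplified coordinator logic for illustration -- not Solr's real code.
public class RefinementBugDemo {
    public static void main(String[] args) {
        // Phase #1: shard1 returns its lowest-count terms; shard2's counts
        // for these terms are high, so shard2 does not return them at all.
        Map<String, Integer> shard1 = new LinkedHashMap<>();
        shard1.put("termX", 1);   // makes the limit=1 topN cut
        shard1.put("termY", 2);   // the "N+1" known bucket
        Map<String, Integer> shard2 = Map.of("termX", 100, "termY", 100);

        // Coordinator's known buckets, sorted by count asc.
        Map<String, Integer> merged = new LinkedHashMap<>(shard1);

        // Phase #2: only the topN bucket (termX) is refined against shard2;
        // termY is never refined, so its shard2 contribution stays unknown.
        merged.merge("termX", shard2.get("termX"), Integer::sum); // 1 + 100 = 101

        // Re-sort after refinement: termX now sorts "worse" than termY ...
        List<Map.Entry<String, Integer>> finalSort = new ArrayList<>(merged.entrySet());
        finalSort.sort(Map.Entry.comparingByValue());
        String returned = finalSort.get(0).getKey();
        int returnedCount = finalSort.get(0).getValue();

        // ... so the unrefined termY wins the topN with an inaccurate count
        // (missing shard2's contribution of 100).
        System.out.println(returned + "=" + returnedCount); // prints termY=2
    }
}
```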






[jira] [Updated] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-08 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12412:

Summary: Leader should give up leadership when IndexWriter.tragedy occur  
(was: Leader should give up leadership when IndexWriter.tragedy happen)

> Leader should give up leadership when IndexWriter.tragedy occur
> ---
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader hits some kind of unrecoverable exception (e.g. 
> CorruptIndexException), the shard goes into a read-only state and a human 
> has to intervene. In that case, it would be best if the leader gave up 
> its leadership and let another replica become the leader. 






[jira] [Updated] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy happen

2018-07-08 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12412:

Summary: Leader should give up leadership when IndexWriter.tragedy happen  
(was: Leader should give up leadership when meet some kind of exceptions)

> Leader should give up leadership when IndexWriter.tragedy happen
> 
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader hits some kind of unrecoverable exception (e.g. 
> CorruptIndexException), the shard goes into a read-only state and a human 
> has to intervene. In that case, it would be best if the leader gave up 
> its leadership and let another replica become the leader. 






[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 256 - Still Failing

2018-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/256/

No tests ran.

Build Log:
[...truncated 22998 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2220 links (1775 relative) to 2998 anchors in 229 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.5.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:

[jira] [Commented] (SOLR-12412) Leader should give up leadership when meet some kind of exceptions

2018-07-08 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536476#comment-16536476
 ] 

Cao Manh Dat commented on SOLR-12412:
-

Final patch with another test and precommit fix. I will commit it soon.

> Leader should give up leadership when meet some kind of exceptions
> --
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader hits some kind of unrecoverable exception (e.g. 
> CorruptIndexException), the shard goes into a read-only state and a human 
> has to intervene. In that case, it would be best if the leader gave up 
> its leadership and let another replica become the leader. 






[jira] [Comment Edited] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-08 Thread changchun huang (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536456#comment-16536456
 ] 

changchun huang edited comment on LUCENE-8389 at 7/8/18 11:46 PM:
--

Thanks for the quick reply.

I am definitely not talking about the Java heap.

When we trigger a background re-index from Jira, we can see that during the 
re-indexing the physical memory gets reserved; I think this is caused by 
Lucene. We have 16 GB heap and 64 GB physical memory allocated to the server, 
and we could see all of the physical memory get reserved during the 
re-indexing (Jira background re-index, single thread).

The problem is that we cannot set a memory limit for Lucene alone. In the 
typical situation, Lucene is not a standalone application; it is embedded in a 
Java application. So on a heavily loaded Java application server that really 
cares about performance and downtime, a re-index with only one thread still 
reserves all the free physical memory left, and this conflicts with the Java 
application even when we configure the same Xms and Xmx.

So I am asking for help, such as a workaround or suggestion. We have Java 1.8 
with G1GC; there is no OOME, but during re-index the frequency of "GC pause 
(G1 Evacuation Pause) (young) (to-space exhausted)" increased a lot. During 
that time, we were having performance issues.


was (Author: changchun):
Thanks for the quick reply.

I am definitely not talking about the Java heap.

When we trigger a background re-index from Jira, we can see that during the 
re-indexing the physical memory is reserved by Lucene. 16 GB heap, 64 GB 
physical memory allocated. We could see all of the physical memory get 
reserved during the re-indexing (Jira background re-index, single thread).

The problem is that we cannot set a memory limit for Lucene alone. In the 
typical situation, Lucene is not a standalone application; it is embedded in a 
Java application. So on a heavily loaded Java application server that really 
cares about performance and downtime, a re-index with only one thread still 
reserves all the free physical memory left, and this conflicts with the Java 
application even when we configure the same Xms and Xmx.

So I am asking for help, such as a workaround or suggestion. We have Java 1.8 
with G1GC; there is no OOME, but during re-index the frequency of "GC pause 
(G1 Evacuation Pause) (young) (to-space exhausted)" increased a lot. During 
that time, we were having performance issues.

> Could not limit Lucene's memory consumption
> ---
>
> Key: LUCENE-8389
> URL: https://issues.apache.org/jira/browse/LUCENE-8389
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
> Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |Database JNDI address|mysql 
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |Database driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
> jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> |Version|7.6.1|
>Reporter: changchun huang
>Assignee: Uwe Schindler
>Priority: Major
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1
> We configured 16GB Jira heap on 64GB server
> However, each time, when we run background re-index, the memory will be used 
> out by Lucene and we could not only limit its memory consumption.
> This definitely will cause overall performance issue on a system with heavy 
> load.
> We have around 500 concurrent users, 400K issues.
> Could you please help to advice if there were workaround  or fix for this?
> Thanks.
>  
> BTW: I did check a lot and found a blog introducing the new behavior of 
> Lucene 3.3
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>  






[jira] [Reopened] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-08 Thread changchun huang (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

changchun huang reopened LUCENE-8389:
-

Thanks for the quick reply.

I am definitely not talking about the Java heap.

When we trigger a background re-index from Jira, we can see that during the 
re-indexing the physical memory is reserved by Lucene. 16 GB heap, 64 GB 
physical memory allocated. We could see all of the physical memory get 
reserved during the re-indexing (Jira background re-index, single thread).

The problem is that we cannot set a memory limit for Lucene alone. In the 
typical situation, Lucene is not a standalone application; it is embedded in a 
Java application. So on a heavily loaded Java application server that really 
cares about performance and downtime, a re-index with only one thread still 
reserves all the free physical memory left, and this conflicts with the Java 
application even when we configure the same Xms and Xmx.

So I am asking for help, such as a workaround or suggestion. We have Java 1.8 
with G1GC; there is no OOME, but during re-index the frequency of "GC pause 
(G1 Evacuation Pause) (young) (to-space exhausted)" increased a lot. During 
that time, we were having performance issues.
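As the linked blog post explains, Lucene's MMapDirectory memory-maps index files, so the "reserved" physical memory here is OS page cache rather than Java heap, and the kernel can reclaim it under memory pressure. A small standalone sketch using plain NIO (not Lucene code) shows the effect: mapping and touching a 64 MB file barely grows the heap at all:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch: memory-mapping a file the way Lucene's MMapDirectory does.
// The mapped region lives in the OS page cache, not the Java heap, so the
// physical memory "reserved" during indexing is reclaimable cache, not
// memory permanently taken away from other processes.
public class MmapDemo {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("mmap-demo", ".bin");
        Files.write(file, new byte[64 * 1024 * 1024]); // a 64 MB "index" file

        long heapBefore = usedHeap();
        long sum = 0;
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            // Touch every page so the OS pulls the whole file into the page cache.
            for (int i = 0; i < map.limit(); i += 4096) sum += map.get(i);
        }
        long heapGrowth = usedHeap() - heapBefore;
        Files.deleteIfExists(file);

        // The file is resident in RAM, yet heap growth stays tiny compared
        // to the 64 MB that was mapped and read.
        System.out.println("heap growth bytes: " + heapGrowth + ", sum=" + sum);
    }

    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }
}
```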

> Could not limit Lucene's memory consumption
> ---
>
> Key: LUCENE-8389
> URL: https://issues.apache.org/jira/browse/LUCENE-8389
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 3.3
> Environment: |Java Version|1.8.0_102|
> |Operating System|Linux 3.12.48-52.27-default|
> |Application Server Container|Apache Tomcat/8.5.6|
> |Database JNDI address|mysql 
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> |Database version|5.6.27|
> |Database driver|MySQL Connector Java mysql-connector-java-5.1.34 ( Revision: 
> jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> |Version|7.6.1|
>Reporter: changchun huang
>Assignee: Uwe Schindler
>Priority: Major
>
> We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1
> We configured 16GB Jira heap on 64GB server
> However, each time, when we run background re-index, the memory will be used 
> out by Lucene and we could not only limit its memory consumption.
> This definitely will cause overall performance issue on a system with heavy 
> load.
> We have around 500 concurrent users, 400K issues.
> Could you please help to advice if there were workaround  or fix for this?
> Thanks.
>  
> BTW: I did check a lot and found a blog introducing the new behavior of 
> Lucene 3.3
> [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html]
>  






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+21) - Build # 2288 - Unstable!

2018-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2288/
Java: 64bit/jdk-11-ea+21 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

75 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.TestConfigSetsAPI

Error Message:
4 threads leaked from SUITE scope at org.apache.solr.cloud.TestConfigSetsAPI:   
  1) Thread[id=672, name=zkConnectionManagerCallback-203-thread-1, 
state=WAITING, group=TGRP-TestConfigSetsAPI] at 
java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)2) 
Thread[id=668, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestConfigSetsAPI] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@11-ea/java.lang.Thread.run(Thread.java:834)3) 
Thread[id=670, 
name=TEST-TestConfigSetsAPI.testUploadErrors-seed#[C64A58B984E55FF9]-SendThread(127.0.0.1:39549),
 state=TIMED_WAITING, group=TGRP-TestConfigSetsAPI] at 
java.base@11-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)4) 
Thread[id=671, 
name=TEST-TestConfigSetsAPI.testUploadErrors-seed#[C64A58B984E55FF9]-EventThread,
 state=WAITING, group=TGRP-TestConfigSetsAPI] at 
java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 4 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestConfigSetsAPI: 
   1) Thread[id=672, name=zkConnectionManagerCallback-203-thread-1, 
state=WAITING, group=TGRP-TestConfigSetsAPI]
at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@11-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
at 
java.base@11-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
at 
java.base@11-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base@11-ea/java.lang.Thread.run(Thread.java:834)
   2) Thread[id=668, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-TestConfigSetsAPI]
at java.base@11-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@11-ea/java.lang.Thread.run(Thread.java:834)
   3) Thread[id=670, 
name=TEST-TestConfigSetsAPI.testUploadErrors-seed#[C64A58B984E55FF9]-SendThread(127.0.0.1:39549),
 state=TIMED_WAITING, group=TGRP-TestConfigSetsAPI]
at java.base@11-ea/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)
   4) Thread[id=671, 
name=TEST-TestConfigSetsAPI.testUploadErrors-seed#[C64A58B984E55FF9]-EventThread,
 state=WAITING, group=TGRP-TestConfigSetsAPI]
at java.base@11-ea/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@11-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 668 - Unstable

2018-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/668/

1 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=25694, 
name=cdcr-replicator-7633-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=25694, name=cdcr-replicator-7633-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError: 1605459907740434432 != 1605459907734142976
at __randomizedtesting.SeedInfo.seed([4C647113E271E934]:0)
at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14219 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_4C647113E271E934-001/init-core-data-001
   [junit4]   2> 3436886 INFO  
(SUITE-CdcrBidirectionalTest-seed#[4C647113E271E934]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 3436888 INFO  
(SUITE-CdcrBidirectionalTest-seed#[4C647113E271E934]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 3436888 INFO  
(SUITE-CdcrBidirectionalTest-seed#[4C647113E271E934]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 3436900 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[4C647113E271E934]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 3436901 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[4C647113E271E934]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_4C647113E271E934-001/cdcr-cluster2-001
   [junit4]   2> 3436901 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[4C647113E271E934]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 3436901 INFO  (Thread-4051) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 3436901 INFO  (Thread-4051) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 3436917 ERROR (Thread-4051) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 3437001 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[4C647113E271E934]) [] 
o.a.s.c.ZkTestServer start zk server on port:39595
   [junit4]   2> 3437005 INFO  (zkConnectionManagerCallback-8524-thread-1) [
] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 3437010 INFO  (jetty-launcher-8521-thread-1) [] 
o.e.j.s.Server jetty-9.4.11.v20180605; built: 2018-06-05T18:24:03.829Z; git: 
d5fc0523cfa96bfebfbda19606cad384d772f04c; jvm 1.8.0_172-b11
   [junit4]   2> 3437011 INFO  (jetty-launcher-8521-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 3437011 INFO  (jetty-launcher-8521-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 3437011 INFO  (jetty-launcher-8521-thread-1) [] 
o.e.j.s.session node0 Scavenging every 60ms
   [junit4]   2> 3437011 INFO  (jetty-launcher-8521-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@468d449a{/solr,null,AVAILABLE}
   [junit4]   2> 3437013 INFO  (jetty-launcher-8521-thread-1) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@788df29f{HTTP/1.1,[http/1.1]}{127.0.0.1:42532}
   [junit4]   2> 3437013 INFO  (jetty-launcher-8521-thread-1) [] 
o.e.j.s.Server Started @3437061ms
   [junit4]   2> 3437014 INFO  (jetty-launcher-8521-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=42532}
   [junit4]   2> 3437014 ERROR (jetty-launcher-8521-thread-1) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   2> 3437014 

[JENKINS] Lucene-Solr-repro - Build # 941 - Unstable

2018-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/941/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-NightlyTests-master/18/consoleText

[repro] Revision: 6d6e67140b44dfb45bd8aadc58e3b8bfb79f5016

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
-Dtests.method=test -Dtests.seed=AED86BF7D13E1839 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ca-ES -Dtests.timezone=Australia/LHI -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=HdfsRestartWhileUpdatingTest 
-Dtests.seed=AED86BF7D13E1839 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ca-ES -Dtests.timezone=Australia/LHI -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
5f5e5dbfb558134ee33ec749be581c0c24a39b23
[repro] git fetch
[repro] git checkout 6d6e67140b44dfb45bd8aadc58e3b8bfb79f5016

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   HdfsRestartWhileUpdatingTest
[repro] ant compile-test

[...truncated 3300 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.HdfsRestartWhileUpdatingTest" -Dtests.showOutput=onerror 
-Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=AED86BF7D13E1839 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true -Dtests.badapples=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-BadApples-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ca-ES -Dtests.timezone=Australia/LHI -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 38283 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   3/5 failed: org.apache.solr.cloud.hdfs.HdfsRestartWhileUpdatingTest
[repro] git checkout 5f5e5dbfb558134ee33ec749be581c0c24a39b23

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1959 - Unstable!

2018-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1959/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:63510 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:63510 within 3 ms
at __randomizedtesting.SeedInfo.seed([CA5BE89CBD6C3F6B]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.&lt;init&gt;(SolrZkClient.java:184)
at 
org.apache.solr.common.cloud.SolrZkClient.&lt;init&gt;(SolrZkClient.java:121)
at 
org.apache.solr.common.cloud.SolrZkClient.&lt;init&gt;(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.&lt;init&gt;(SolrZkClient.java:103)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForAllNodes(MiniSolrCloudCluster.java:269)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.&lt;init&gt;(MiniSolrCloudCluster.java:263)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:198)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:190)
at 
org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth.setupClass(TestImpersonationWithHadoopAuth.java:83)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:63510 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:232)
at 
org.apache.solr.common.cloud.SolrZkClient.&lt;init&gt;(SolrZkClient.java:176)
... 32 more


FAILED:  
junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth

Error Message:
23 threads leaked from SUITE scope at 
org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) 
Thread[id=5126, name=ProcessThread(sid:0 cport:63510):, state=WAITING, 
group=TGRP-TestImpersonationWithHadoopAuth] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
 at 
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442) 
at 
org.apache.zookeeper.server.PrepRequestProcessor.run(PrepRequestProcessor.java:123)
2) 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22424 - Unstable!

2018-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22424/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest

Error Message:
4 threads leaked from SUITE scope at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 1) Thread[id=10190, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[4485B60CA0050AF8]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2075)
 at 
java.base@10/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)2) 
Thread[id=10188, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@10/java.lang.Thread.run(Thread.java:844)3) 
Thread[id=10189, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[4485B60CA0050AF8]-SendThread(127.0.0.1:35287),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1054)4) 
Thread[id=10191, name=zkConnectionManagerCallback-4685-thread-1, state=WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest] at 
java.base@10/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@10/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)  
   at 
java.base@10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2075)
 at 
java.base@10/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
 at 
java.base@10/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
 at java.base@10/java.lang.Thread.run(Thread.java:844)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 4 threads leaked from SUITE 
scope at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest: 
   1) Thread[id=10190, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[4485B60CA0050AF8]-EventThread,
 state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest]
at java.base@10/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2075)
at 
java.base@10/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)
   2) Thread[id=10188, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-ChaosMonkeyNothingIsSafeTest]
at java.base@10/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.base@10/java.lang.Thread.run(Thread.java:844)
   3) Thread[id=10189, 
name=TEST-ChaosMonkeyNothingIsSafeTest.test-seed#[4485B60CA0050AF8]-SendThread(127.0.0.1:35287),
 state=TIMED_WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest]
at java.base@10/java.lang.Thread.sleep(Native Method)
at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1054)
   4) Thread[id=10191, name=zkConnectionManagerCallback-4685-thread-1, 
state=WAITING, group=TGRP-ChaosMonkeyNothingIsSafeTest]
at java.base@10/jdk.internal.misc.Unsafe.park(Native Method)
at 
java.base@10/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
at 
java.base@10/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2075)
at 
java.base@10/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:435)
at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1061)
at 
java.base@10/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1121)
at 
java.base@10/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base@10/java.lang.Thread.run(Thread.java:844)
at 

[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 97 - Still Unstable

2018-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/97/

10 tests failed.
FAILED:  org.apache.solr.cloud.api.collections.ShardSplitTest.test

Error Message:
Wrong doc count on shard1_0. See SOLR-5309 expected:<176> but was:<177>

Stack Trace:
java.lang.AssertionError: Wrong doc count on shard1_0. See SOLR-5309 
expected:<176> but was:<177>
at 
__randomizedtesting.SeedInfo.seed([FD5E959775AA7D79:750AAA4DDB561081]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.checkDocCountsAndShardStates(ShardSplitTest.java:968)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.splitByUniqueKeyTest(ShardSplitTest.java:751)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.test(ShardSplitTest.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-8386) Maybe a DocIdSetIterator may implement Bits?

2018-07-08 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16536402#comment-16536402
 ] 

Adrien Grand commented on LUCENE-8386:
--

Sorry for introducing some confusion: when I mentioned Bits-based iterators, I 
was thinking of a DocIdSetIterator over a Bits instance that would check bits 
one by one to find the next match, rather than an alternative to BitSetIterator.

> Maybe a DocIdSetIterator may implement Bits?
> 
>
> Key: LUCENE-8386
> URL: https://issues.apache.org/jira/browse/LUCENE-8386
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Reporter: David Smiley
>Priority: Minor
>
> I was looking at ConjunctionDISI and noted the special-case logic for DISIs 
> of type BitSetIterator. It seems to need only the more minimal Bits 
> interface, though it references BitSet specifically. BitSetIterator is a 
> concrete class; it would be nice if a DISI could either implement an optional 
> interface to expose a Bits, or perhaps implement Bits directly. This would 
> allow other/custom DISIs to implement a Bits quickly without being forced to 
> use BitSetIterator specifically. Even DocIdSetIterator.all(...) could 
> implement this.
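For illustration only, here is a minimal, self-contained sketch of the 
iterator Adrien describes: a DocIdSetIterator-style cursor driven by a Bits 
instance, probing bits one by one to find the next match. The Bits interface 
and the BitsBackedIterator class below are simplified stand-ins, not Lucene's 
actual org.apache.lucene.util.Bits or DocIdSetIterator APIs.

```java
/**
 * Illustrative sketch: a doc-id cursor over a Bits instance that checks
 * bits one by one for the next set bit. Simplified stand-ins for Lucene's
 * Bits/DocIdSetIterator; not the real classes.
 */
class BitsBackedIterator {
    /** Simplified stand-in for Lucene's Bits: random access to set bits. */
    interface Bits {
        boolean get(int index);
        int length();
    }

    /** Sentinel returned when iteration is exhausted (Lucene uses the same value). */
    static final int NO_MORE_DOCS = Integer.MAX_VALUE;

    private final Bits bits;
    private int doc = -1; // current doc id; -1 before iteration starts

    BitsBackedIterator(Bits bits) {
        this.bits = bits;
    }

    int docID() {
        return doc;
    }

    /** Linear scan: check each subsequent bit until one is set. */
    int nextDoc() {
        return advance(doc + 1);
    }

    /** Advance to the first set bit at or after target. */
    int advance(int target) {
        for (int i = target; i < bits.length(); i++) {
            if (bits.get(i)) {
                return doc = i;
            }
        }
        return doc = NO_MORE_DOCS;
    }
}
```

In Lucene itself this would be written against the real DocIdSetIterator 
contract (nextDoc/advance); the point of the sketch is only the linear 
bit-by-bit probe, as opposed to BitSetIterator's use of a concrete BitSet.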



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[JENKINS] Lucene-Solr-repro - Build # 938 - Unstable

2018-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/938/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/92/consoleText

[repro] Revision: b7d14c50fbae3d11b32b9331287636c98730987a

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testSearchRate -Dtests.seed=39F0A905F9864303 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=pl 
-Dtests.timezone=Pacific/Pohnpei -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testFailedMove -Dtests.seed=39F0A905F9864303 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=fr 
-Dtests.timezone=America/Phoenix -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=39F0A905F9864303 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=hu-HU -Dtests.timezone=Europe/Vienna -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestGenericDistributedQueue 
-Dtests.method=testDistributedQueue -Dtests.seed=39F0A905F9864303 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=bg 
-Dtests.timezone=Pacific/Niue -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestGenericDistributedQueue 
-Dtests.seed=39F0A905F9864303 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=bg -Dtests.timezone=Pacific/Niue 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
5f5e5dbfb558134ee33ec749be581c0c24a39b23
[repro] git fetch
[repro] git checkout b7d14c50fbae3d11b32b9331287636c98730987a

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   MoveReplicaHDFSTest
[repro]   TestGenericDistributedQueue
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 3300 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.IndexSizeTriggerTest|*.MoveReplicaHDFSTest|*.TestGenericDistributedQueue|*.TestTriggerIntegration"
 -Dtests.showOutput=onerror  -Dtests.seed=39F0A905F9864303 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=hu-HU 
-Dtests.timezone=Europe/Vienna -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 21526 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestGenericDistributedQueue
[repro]   4/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of master
[repro] git fetch
[repro] git checkout master

[...truncated 4 lines...]
[repro] git merge --ff-only

[...truncated 329 lines...]
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3300 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=39F0A905F9864303 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=hu-HU -Dtests.timezone=Europe/Vienna 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 34434 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 5f5e5dbfb558134ee33ec749be581c0c24a39b23

[...truncated 8 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]


[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1064 - Still Failing

2018-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1064/

No tests ran.

Build Log:
[...truncated 22952 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2233 links (1787 relative) to 3129 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

[...repeated resolve / ivy-availability-check / ivy-configure blocks truncated...]

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10.0.1) - Build # 2286 - Unstable!

2018-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2286/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

3 tests failed.
FAILED:  org.apache.solr.cloud.TestRebalanceLeaders.test

Error Message:
Timeout occured while waiting response from server at: 
http://127.0.0.1:45063/twdo

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: http://127.0.0.1:45063/twdo
at 
__randomizedtesting.SeedInfo.seed([E7DC1D7AF9C60A2B:6F8822A0573A67D3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:654)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1677)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createCollection(AbstractFullDistribZkTestBase.java:1704)
at 
org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:71)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Resolved] (SOLR-12370) NullPointerException on MoreLikeThisComponent

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-12370.
--
Resolution: Information Provided

> NullPointerException on MoreLikeThisComponent
> -
>
> Key: SOLR-12370
> URL: https://issues.apache.org/jira/browse/SOLR-12370
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: MoreLikeThis
>Affects Versions: 7.3.1
>Reporter: Gilles Bodart
>Priority: Major
>
> I'm trying to use the MoreLikeThis component under a suggest call, but I 
> receive an NPE every time (here's the stack trace):
> {code:java}
> java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.MoreLikeThisComponent.process(MoreLikeThisComponent.java:127)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:295)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> ...{code}
> and here's the config of my requestHandlers:
> {code:java}
> 
> 
> true
> 10
> default
> true
> default
> wordbreak
> true
> true
> 10
> true
> true
> 5
> 5
> 10
> 5
> true
> _text_
> on
> content description title
> true
> html
> b
> /b
> 
> 
> suggest
> spellcheck
> mlt
> highlight
> 
> 
> 
> {code}
> I also tried with 
> {code:java}
> on{code}
> When I call
> {code:java}
> /mlt?df=_text_=pann=_text_
> {code}
>  it works fine but with
> {code:java}
> /suggest?df=_text_=pann=_text_
> {code}
> I get the NPE.
>  
>  






[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-11-ea+21) - Build # 679 - Unstable!

2018-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/679/
Java: 64bit/jdk-11-ea+21 -XX:+UseCompressedOops -XX:+UseSerialGC

97 tests failed.
FAILED:  org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence

Error Message:
Software caused connection abort: recv failed

Stack Trace:
javax.net.ssl.SSLException: Software caused connection abort: recv failed
at __randomizedtesting.SeedInfo.seed([A0F314D84D658696:825270A5E809E0D3]:0)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:125)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:325)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:268)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:263)
at java.base/sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1114)
at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:621)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.solr.util.RestTestHarness.getResponse(RestTestHarness.java:215)
at org.apache.solr.util.RestTestHarness.query(RestTestHarness.java:107)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:226)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
at org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence(TestModelManagerPersistence.java:168)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at

[jira] [Commented] (SOLR-12300) Unpopulated SolrDocument using Custom DocTransformer

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536339#comment-16536339
 ] 

Alexandre Rafalovitch commented on SOLR-12300:
--

Jira formatting is complaining. Basically, return an array containing the field 
name.

> Unpopulated SolrDocument using Custom DocTransformer
> 
>
> Key: SOLR-12300
> URL: https://issues.apache.org/jira/browse/SOLR-12300
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2, 7.3
> Environment: Microsoft Windows 10 Enterprise
> Version 10.0.14393 Build 14393
>Reporter: Landon Petzoldt
>Priority: Major
>
> When attempting to tag more than 2 fields with transformers, the documents' 
> fields are not populated except for the id field. This seems to be specific 
> to Solr 7+, as this was not an issue in Solr 6.4.2. The issue only seems to 
> be present for custom transformers; the default transformers seem to 
> populate correctly.
> Steps for Reproduction in Solr 7.2 or 7.3
>  # Create Java project and import {{*solr-core*}} and {{*solr-solrj*}} 
> library jars.
>  # Create classes {{*BlankTransformerFactory*}} and {{*BlankTransformer*}} 
> [see gist|https://gist.github.com/clurdish/8683e56ea1b93978f7027844537a0232]
>  # Build project and put resulting jar into {{*solr\contrib\plugins*}}
>  # Create sample solr core {{*testcore*}}
>  # Add configuration to the core's {{*solrconfig.xml*}} (see below)
>  # Load sample documents into core (see below)
>  # (2 fields) Navigate to 
> {{http://localhost:8983/solr/testcore/select?q=*:*=true=Author:[blanktrans],Title:[blanktrans],id,layer}}
>  *_all documents are returned correctly_*
> # (3 fields) Navigate to 
> {{http://localhost:8983/solr/testcore/select?q=*:*=true=Author:[blanktrans],Title:[blanktrans],id,layer:[blanktrans]}}
>  *_only id field is returned_*
> *{{solrconfig.xml}}*
> ...
> {{ />}}
> ...
> {{ class="debug.solr.plugins.transformers.BlankTransformerFactory" />}}
> ...
> *{{sample_data.json}}*
> {
>   "id": "1",
>   "Title": ["The Big Tree"],
>   "layer": ["fims"]
> },
> {
>   "id": "2",
>   "Title": ["Far Far Away"],
>   "layer": ["buildings"]
> },
> {
>   "id": "3",
>   "Title": ["Way Too Long"],
>   "Author": ["Slim Jim McGee"],
>   "layer": ["fims"]
> },
> {
>   "id": "4",
>   "Author": ["Rumplestiltskin"],
>   "layer": ["tasks"]
> }






[jira] [Commented] (SOLR-12300) Unpopulated SolrDocument using Custom DocTransformer

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536338#comment-16536338
 ] 

Alexandre Rafalovitch commented on SOLR-12300:
--

I am unable to reproduce this on 7.4. Your custom transformer shadows the 
underlying fields, so they are not returned regardless of the transformer 
count. However, if you add an implementation of getExtraRequestFields, at 
least the pass-through works.
{code:java}
@Override
public String[] getExtraRequestFields() {
  return new String[]{field};
}
{code}
Could you review your test and/or my suggestion against Solr 7.4 and let me 
know if you still see any issues?
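To make the role of getExtraRequestFields concrete, here is a self-contained sketch. It deliberately does not use Solr's real DocTransformer API (no solr-core dependency); the tiny interface below is a simplified stand-in whose method names merely mirror the real ones, and the "blank" transformer is in the spirit of the reporter's gist. It shows why a transformer that targets a stored field must also declare that field as an extra request field — otherwise the stored value is never fetched for the transformer to pass through:

```java
import java.util.*;

public class TransformerSketch {
    // Simplified stand-in for Solr's DocTransformer contract (illustrative only)
    interface DocTransformer {
        String[] getExtraRequestFields();          // stored fields to fetch beyond fl
        void transform(Map<String, Object> doc);   // mutate the response doc in place
    }

    // A "blank" pass-through transformer that declares its target field
    static DocTransformer blankTransformer(String field) {
        return new DocTransformer() {
            public String[] getExtraRequestFields() { return new String[]{field}; }
            public void transform(Map<String, Object> doc) { /* pass-through */ }
        };
    }

    // Mimics response writing: only fields named in fl or declared as
    // extra request fields are copied from the stored document.
    static Map<String, Object> render(Map<String, Object> stored,
                                      List<String> fl,
                                      List<DocTransformer> transformers) {
        Set<String> fetch = new LinkedHashSet<>(fl);
        for (DocTransformer t : transformers)
            fetch.addAll(Arrays.asList(t.getExtraRequestFields()));
        Map<String, Object> out = new LinkedHashMap<>();
        for (String f : fetch)
            if (stored.containsKey(f)) out.put(f, stored.get(f));
        for (DocTransformer t : transformers) t.transform(out);
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> stored = new LinkedHashMap<>();
        stored.put("id", "1");
        stored.put("Title", "The Big Tree");

        // Transformer that declares its field: the stored value passes through.
        System.out.println(render(stored, List.of("id"),
                List.of(blankTransformer("Title"))));   // {id=1, Title=The Big Tree}

        // Transformer that declares nothing: "Title" is never fetched ("shadowed").
        DocTransformer silent = new DocTransformer() {
            public String[] getExtraRequestFields() { return new String[0]; }
            public void transform(Map<String, Object> doc) { }
        };
        System.out.println(render(stored, List.of("id"), List.of(silent))); // {id=1}
    }
}
```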







[jira] [Assigned] (SOLR-12403) CSVLoader cannot split fields that contain new lines

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch reassigned SOLR-12403:


Assignee: Alexandre Rafalovitch

> CSVLoader cannot split fields that contain new lines
> 
>
> Key: SOLR-12403
> URL: https://issues.apache.org/jira/browse/SOLR-12403
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Affects Versions: 7.3
>Reporter: Alexandre Rafalovitch
>Assignee: Alexandre Rafalovitch
>Priority: Minor
>
> It is possible to import CSV that contains newlines in the field content; it 
> just needs to be escaped.
> However, if that field is split, any content from the lines after the first is 
> lost. It does not matter whether the split character is a newline or anything 
> else, existing or not.
> Example
> {code:java}
> id,text1,text2
> 1,"t1.line1
> t1.line2
> t1.line3",t2
> 2,t1.oneline,t2.oneline
> {code}
> {code:java}
> // bin/solr create -c splittest
> // bin/post -c splittest test.csv (creates 
> "text1":["t1.line1\nt1.line2\nt1.line3"])
> // bin/post -c splittest -params "f.text1.split=true=^" 
> test.csv (creates "text1":["t1.line1"])
> {code}
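The mechanism behind the lost content can be sketched without any Solr code. The toy parser below (nothing here is Solr's actual CSVLoader; it also ignores doubled-quote escaping, which is enough for this demo) honors quotes so an embedded newline stays inside one field, and then shows what a component sees if it instead re-splits per physical line — only the first line of the quoted value survives, matching the reported behavior:

```java
import java.util.ArrayList;
import java.util.List;

public class CsvNewlineDemo {
    // Minimal quoted-CSV record reader: quotes keep embedded newlines
    // inside a single field. (No support for doubled-quote escapes.)
    static List<String> parseRecord(String csv) {
        List<String> fields = new ArrayList<>();
        StringBuilder cur = new StringBuilder();
        boolean inQuotes = false;
        for (int i = 0; i < csv.length(); i++) {
            char c = csv.charAt(i);
            if (inQuotes) {
                if (c == '"') inQuotes = false; else cur.append(c);
            } else if (c == '"') {
                inQuotes = true;
            } else if (c == ',') {
                fields.add(cur.toString());
                cur.setLength(0);
            } else {
                cur.append(c);
            }
        }
        fields.add(cur.toString());
        return fields;
    }

    public static void main(String[] args) {
        String record = "1,\"t1.line1\nt1.line2\nt1.line3\",t2";

        // A quote-aware parse keeps all three lines in field 1:
        System.out.println(parseRecord(record).get(1));
        // prints t1.line1 / t1.line2 / t1.line3 on three lines

        // A component that re-splits per *physical* line only ever
        // sees the first line of the quoted value:
        String firstPhysicalLine = record.split("\n")[0];
        System.out.println(parseRecord(firstPhysicalLine).get(1)); // t1.line1
    }
}
```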






[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+21) - Build # 7407 - Still Unstable!

2018-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7407/
Java: 64bit/jdk-11-ea+21 -XX:+UseCompressedOops -XX:+UseParallelGC

44 tests failed.
FAILED:  org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence

Error Message:
Software caused connection abort: recv failed

Stack Trace:
javax.net.ssl.SSLException: Software caused connection abort: recv failed
at __randomizedtesting.SeedInfo.seed([88D458BF32B2315C:AA753CC297DE5719]:0)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:125)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:325)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:268)
at java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:263)
at java.base/sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1114)
at java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:621)
at org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.solr.util.RestTestHarness.getResponse(RestTestHarness.java:215)
at org.apache.solr.util.RestTestHarness.query(RestTestHarness.java:107)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:226)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
at org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testFilePersistence(TestModelManagerPersistence.java:168)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at

[jira] [Updated] (SOLR-12412) Leader should give up leadership when meet some kind of exceptions

2018-07-08 Thread Cao Manh Dat (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cao Manh Dat updated SOLR-12412:

Attachment: SOLR-12412.patch

> Leader should give up leadership when meet some kind of exceptions
> --
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader meets some kind of unrecoverable exception (e.g. 
> CorruptedIndexException), the shard goes into a read-only state and a human 
> has to intervene. In that case, it would be best if the leader gave up its 
> leadership and let another replica become the leader.






[jira] [Resolved] (SOLR-3306) Possibility to specify proxy settings for http-transport

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-3306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-3306.
-
   Resolution: Won't Fix
Fix Version/s: (was: 3.5)

UIMA has been removed

> Possibility to specify proxy settings for http-transport
> 
>
> Key: SOLR-3306
> URL: https://issues.apache.org/jira/browse/SOLR-3306
> Project: Solr
>  Issue Type: New Feature
>  Components: Schema and Analysis
>Affects Versions: 3.5
> Environment: OS agnostic and version agnostic
>Reporter: Peter Litsegård
>Priority: Major
>  Labels: alchemy, proxy, solr, uima
>
> This is not an "issue" as such; rather, it's a proposal for an enhancement. 
> When using the Solr UIMA plugin I run into "connection timeout errors", as our 
> Solr instance is running behind a firewall and the UIMA plugin is unable to 
> connect to, say, the Alchemy API service. I've tried specifying JAVA_OPTS 
> settings to no avail. It would be great if proxy settings could be specified 
> in the solrconfig.xml file, making it possible to route HTTP calls through 
> that proxy.






[jira] [Resolved] (SOLR-4428) Update SolrUIMA wiki page

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-4428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-4428.
-
Resolution: Won't Fix

UIMA has been removed. The wiki was updated to reflect that.

> Update SolrUIMA wiki page
> -
>
> Key: SOLR-4428
> URL: https://issues.apache.org/jira/browse/SOLR-4428
> Project: Solr
>  Issue Type: Task
>Reporter: Tommaso Teofili
>Assignee: Tommaso Teofili
>Priority: Minor
>
> SolrUIMA wiki page (see http://wiki.apache.org/solr/SolrUIMA) is actually 
> outdated and needs to be updated on the following topics:
> * proper XML configuration
> * how to use existing UIMA analyzers
> * what's the default configuration
> * how to change the default configuration






[jira] [Resolved] (SOLR-10318) Make sure Solr UIMA example configuration is working

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-10318.
--
Resolution: Won't Fix

UIMA has been removed

> Make sure Solr UIMA example configuration is working
> 
>
> Key: SOLR-10318
> URL: https://issues.apache.org/jira/browse/SOLR-10318
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - UIMA, examples
>Reporter: Tommaso Teofili
>Priority: Major
>
> The current Solr UIMA example uses a configuration that involves outdated 
> annotators; it should be adjusted to avoid confusion for end users looking 
> at the documentation.






[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-07-08 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536139#comment-16536139
 ] 

mosh commented on SOLR-12441:
-

{quote}The childFilter {{from/name:Peyton*}} is not a valid query syntax due to 
the slash. Right?{quote}
It will only work if it is broken down by the transformer into two separate 
sub-queries.
{quote}NEST_PATH:from#*# wildcards in the middle of a string can be problematic 
as it may match across 'from' to some other child label. Whether that's an issue 
here or not I'm not sure yet.{quote}
It will only match the ones which are children of the key nest, inside the 
parent document.
{quote}It's quite plausible its indexed form ought to be stripped of the 
sibling IDs with PatternReplaceCharFilterFactory and then processed with 
PathHierarchyTokenizerFactory{quote}
Do you mean to strip childNum off the _NEST_PATH field?

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char, e.g. '.',
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}
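The proposed path/level metadata can be illustrated with a small stand-alone sketch. This is not the actual Solr URP — the Doc class and the annotate/walk helpers below are invented for illustration — but it shows how a walk over a nested document tree yields _nestLevel_ values and __nestPath__ strings like "second.third", using '.' as the reserved separator from the proposal:

```java
import java.util.*;

public class NestPathSketch {
    // Toy nested document: children are grouped by their nest key,
    // mirroring how child documents hang off a named relationship.
    static class Doc {
        final String id;
        final Map<String, List<Doc>> children = new LinkedHashMap<>();
        Doc(String id) { this.id = id; }
    }

    // Collect id -> {path, level} for every document in the tree.
    // The root gets a null path and level 0.
    static Map<String, Object[]> annotate(Doc root) {
        Map<String, Object[]> out = new LinkedHashMap<>();
        walk(root, "", 0, out);
        return out;
    }

    private static void walk(Doc doc, String path, int level,
                             Map<String, Object[]> out) {
        out.put(doc.id, new Object[]{path.isEmpty() ? null : path, level});
        for (Map.Entry<String, List<Doc>> e : doc.children.entrySet()) {
            // '.' is the reserved path separator suggested in the proposal
            String childPath = path.isEmpty() ? e.getKey()
                                              : path + "." + e.getKey();
            for (Doc child : e.getValue())
                walk(child, childPath, level + 1, out);
        }
    }

    public static void main(String[] args) {
        Doc root = new Doc("1");
        Doc second = new Doc("1.1");
        Doc third = new Doc("1.1.1");
        second.children.put("third", List.of(third));
        root.children.put("second", List.of(second));
        for (Map.Entry<String, Object[]> e : annotate(root).entrySet())
            System.out.println(e.getKey() + " path=" + e.getValue()[0]
                               + " level=" + e.getValue()[1]);
        // 1 path=null level=0
        // 1.1 path=second level=1
        // 1.1.1 path=second.third level=2
    }
}
```

With the level recorded this way, a default parentFilter of the form "_nestLevel_:queriedFieldLevel" follows naturally, as the proposal suggests.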






[jira] [Resolved] (SOLR-2501) [contrib/uima] Make it possible to load AE descriptors both from filesystem and classpath

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-2501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-2501.
-
Resolution: Won't Fix

UIMA has been removed

> [contrib/uima] Make it possible to load AE descriptors both from filesystem 
> and classpath
> -
>
> Key: SOLR-2501
> URL: https://issues.apache.org/jira/browse/SOLR-2501
> Project: Solr
>  Issue Type: Improvement
>Reporter: Tommaso Teofili
>Priority: Major
> Attachments: SOLR-2501.patch
>
>
> An AE can be loaded only from jars (via the classpath), while it would be good 
> to make it possible to load an AnalysisEngine (specified in the analysisEngine 
> element of the UIMA configuration inside solrconfig.xml) from the filesystem.






[jira] [Resolved] (SOLR-3049) UpdateRequestProcessorChain for UIMA : runtimeParameters: not all types supported

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-3049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-3049.
-
Resolution: Won't Fix

UIMA has been removed

> UpdateRequestProcessorChain for UIMA : runtimeParameters: not all types 
> supported
> -
>
> Key: SOLR-3049
> URL: https://issues.apache.org/jira/browse/SOLR-3049
> Project: Solr
>  Issue Type: Bug
>  Components: update
>Reporter: Harsh P
>Priority: Minor
>  Labels: uima, update_request_handler
> Attachments: SOLR-3049.patch
>
>
> solrconfig.xml file has an option to override certain UIMA runtime
> parameters in the UpdateRequestProcessorChain section.
> There are certain UIMA annotators like RegexAnnotator which define
> "runtimeParameters" value as an Array which is not currently supported
> in the Solr-UIMA interface.
> In java/org/apache/solr/uima/processor/ae/OverridingParamsAEProvider.java,
> private Object getRuntimeValue(AnalysisEngineDescription desc, String
> attributeName) function defines override for UIMA analysis engine
> runtimeParameters as they are passed to UIMA Analysis Engine.
> runtimeParameters which are currently supported in the Solr-UIMA interface 
> are:
>  String
>  Integer
>  Boolean
>  Float
> I have made a hack to fix this issue to add Array support. I would
> like to submit that as a patch if no one else is working on fixing
> this issue.






[jira] [Resolved] (SOLR-3736) UIMA requires commons-beanutils

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-3736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-3736.
-
Resolution: Won't Fix

UIMA has been removed.

> UIMA requires commons-beanutils
> ---
>
> Key: SOLR-3736
> URL: https://issues.apache.org/jira/browse/SOLR-3736
> Project: Solr
>  Issue Type: Bug
>  Components: contrib - UIMA
>Affects Versions: 4.0-BETA
>Reporter: Eric Pugh
>Priority: Major
> Attachments: uima_ivy.patch
>
>
> UIMA appears to require commons-beanutils, which is used by Velocity.  But if 
> you don't include/load velocity, then you don't get commons-beanutils, which 
> causes a stack trace:
> SEVERE: null:java.lang.RuntimeException: java.lang.NoClassDefFoundError: org/apache/commons/beanutils/DynaProperty
>   at org.apache.solr.servlet.SolrDispatchFilter.sendError(SolrDispatchFilter.java:468)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:296)
>   at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1337)
>   at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:484)
>   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:119)
>   at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:524)
>   at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:233)
>   at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1065)
>   at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:413)
>   at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:192)
>   at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:999)
>   at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:117)
>   at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:250)
>   at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:149)
>   at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:111)
>   at org.eclipse.jetty.server.Server.handle(Server.java:351)
>   at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:454)
>   at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:47)
>   at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:890)
>   at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:944)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:634)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:230)
>   at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:66)
>   at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:254)
>   at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:599)
>   at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:534)
>   at java.lang.Thread.run(Thread.java:680)
> Caused by: java.lang.NoClassDefFoundError: org/apache/commons/beanutils/DynaProperty
>   at org.apache.commons.digester.Digester.addBeanPropertySetter(Digester.java:2162)
>   at org.apache.uima.alchemy.digester.keyword.XMLTextKeywordExctractionDigester.parseAlchemyXML(XMLTextKeywordExctractionDigester.java:40)
>   at org.apache.uima.alchemy.annotator.AbstractAlchemyAnnotator.process(AbstractAlchemyAnnotator.java:124)
>   at org.apache.uima.analysis_component.JCasAnnotator_ImplBase.process(JCasAnnotator_ImplBase.java:48)
>   at org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.callAnalysisComponentProcess(PrimitiveAnalysisEngine_impl.java:377)
>   at org.apache.uima.analysis_engine.impl.PrimitiveAnalysisEngine_impl.processAndOutputNewCASes(PrimitiveAnalysisEngine_impl.java:295)






[jira] [Resolved] (SOLR-3014) Improve UIMA UpdateRequestProcessor performances by providing UIMA-AS support

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-3014?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch resolved SOLR-3014.
-
   Resolution: Won't Fix
Fix Version/s: (was: 6.0)
   (was: 4.9)

UIMA has been removed.

> Improve UIMA UpdateRequestProcessor performances by providing UIMA-AS support
> -
>
> Key: SOLR-3014
> URL: https://issues.apache.org/jira/browse/SOLR-3014
> Project: Solr
>  Issue Type: Improvement
>  Components: update
>Affects Versions: 3.5
>Reporter: Tommaso Teofili
>Priority: Minor
>  Labels: uima, update_request_handler
>
> The current implementation uses in memory mechanism for instantiating UIMA 
> pipelines.
> Allowing the use of the UIMA-AS based instantiation can help much on 
> improving performances and lower latency of text processing.






[jira] [Updated] (SOLR-11694) Remove extremely outdated UIMA contrib module

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch updated SOLR-11694:
-
Issue Type: Improvement  (was: Bug)

> Remove extremely outdated UIMA contrib module
> -
>
> Key: SOLR-11694
> URL: https://issues.apache.org/jira/browse/SOLR-11694
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - UIMA
>Reporter: Cassandra Targett
>Assignee: Alexandre Rafalovitch
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: SOLR-11694.patch
>
>
> A user on the [solr-user mailing list back in 
> June|https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201706.mbox/%3CCANsk%2BC_PvZJ38AQ2VfzKRYSQn6c8b33kGvaXxR3qNS3GQ4VUKA%40mail.gmail.com%3E]
>  brought up the fact that IBM has bought Alchemy and keys are no longer 
> available to use Solr's UIMA contrib.
> Someone recently made a [similar 
> comment|https://lucene.apache.org/solr/guide/7_1/uima-integration.html#comment_7174]
>  to the Solr Ref Guide page and asked for a patch.
> I know next to nothing about UIMA, but figured it's worth an issue to 
> determine what to do here. I think folks are saying it's no longer usable? Or 
> maybe only usable by people who already have keys (which will possibly expire 
> at some point)?
> Anyone have an idea what needs to be done here? It seems we should have some 
> kind of answer, but if it's no longer usable perhaps we should retire the 
> contrib.






[jira] [Updated] (SOLR-11694) Remove extremely outdated UIMA contrib module

2018-07-08 Thread Alexandre Rafalovitch (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexandre Rafalovitch updated SOLR-11694:
-
Fix Version/s: 7.5
   master (8.0)







[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-07-08 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r200841295
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/NestedUpdateProcessor.java 
---
@@ -0,0 +1,96 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update.processor;
+
+import java.io.IOException;
+import java.util.Objects;
+
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.common.SolrInputField;
+import org.apache.solr.request.SolrQueryRequest;
+import org.apache.solr.response.SolrQueryResponse;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.update.AddUpdateCommand;
+
+public class NestedUpdateProcessor extends UpdateRequestProcessor {
+  public static final String PATH_SEP_CHAR = "/";
+  private boolean storePath;
+  private boolean storeParent;
+  SolrQueryRequest req;
+
+
+  protected NestedUpdateProcessor(SolrQueryRequest req, SolrQueryResponse 
rsp, boolean storeParent, boolean storePath, UpdateRequestProcessor next) {
+super(next);
+this.req = req;
+this.storeParent = storeParent;
+this.storePath = storePath;
+  }
+
+  @Override
+  public void processAdd(AddUpdateCommand cmd) throws IOException {
+SolrInputDocument doc = cmd.getSolrInputDocument();
+String rootId = 
doc.getField(req.getSchema().getUniqueKeyField().getName()).getFirstValue().toString();
+processDocChildren(doc, rootId, null);
+super.processAdd(cmd);
+  }
+
+  private void processDocChildren(SolrInputDocument doc, String rootId, 
String fullPath) {
+int childNum = 0;
+for(SolrInputField field: doc.values()) {
+  for(Object val: field) {
+if(val instanceof SolrInputDocument) {
+  if(field.getName().contains(PATH_SEP_CHAR)) {
+throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, 
"Field name: '" + field.getName()
++ "' contains: '" + PATH_SEP_CHAR + "' , which is reserved 
for the nested URP");
+  }
+  final String jointPath = Objects.isNull(fullPath) ? 
field.getName(): String.join(PATH_SEP_CHAR, fullPath, field.getName());
+  SolrInputDocument cDoc = (SolrInputDocument) val;
+  
if(!cDoc.containsKey(req.getSchema().getUniqueKeyField().getName())) {
+cDoc.setField(req.getSchema().getUniqueKeyField().getName(), 
generateChildUniqueId(rootId, jointPath, childNum));
+  }
+  processChildDoc((SolrInputDocument) val, doc, rootId, jointPath);
+}
+++childNum;
+  }
+}
+  }
+
+  private void processChildDoc(SolrInputDocument sdoc, SolrInputDocument 
parent, String rootId, String fullPath) {
+if(storePath) {
+  setPathField(sdoc, fullPath);
+}
+if (storeParent) {
+  setParentKey(sdoc, parent);
+}
+processDocChildren(sdoc, rootId, fullPath);
+  }
+
+  private String generateChildUniqueId(String rootId, String childPath, 
int childNum) {
+return String.join(PATH_SEP_CHAR, rootId, childPath, 
Integer.toString(childNum));
--- End diff --

Yes; my use of "label" means the key of the doc, aka the pseudo-fieldname 
linking the parent to child.





[GitHub] lucene-solr pull request #:

2018-07-08 Thread dsmiley
Github user dsmiley commented on the pull request:


https://github.com/apache/lucene-solr/commit/c8da19a0591124d575f30212076899c32d8db8b2#commitcomment-29633582
  
In 
solr/core/src/java/org/apache/solr/update/processor/NestedUpdateProcessorFactory.java
 on line 54:
A space could lead to confusion; let's not do that.  I was thinking simply 
blank (empty string).  Would that work?





[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-07-08 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536110#comment-16536110
 ] 

David Smiley commented on SOLR-12441:
-

A few comments:
* The childFilter {{from/name:Peyton*}} is not valid query syntax due to the 
slash.  Right?
* {{_NEST_PATH_:from#*#}} wildcards in the middle of a string can be 
problematic, as they may match across 'from' into some other child label.  
Whether that's an issue here or not I'm not sure yet.
* Note that \_NEST_PATH\_ need not be a string.  It's quite plausible its 
indexed form ought to be stripped of the sibling IDs with 
PatternReplaceCharFilterFactory and then processed with 
PathHierarchyTokenizerFactory.  This would allow for more efficient term 
queries instead of prefix/wildcard queries.
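
The middle-wildcard concern above can be illustrated with a plain regex 
(a Lucene wildcard query conceptually matches the whole term the same way); 
the path values and the '#' sibling-ordinal format here are assumptions for 
illustration, not the actual indexed form:

```java
import java.util.List;
import java.util.regex.Pattern;

public class NestPathWildcard {
    // Rewrite the wildcard query from#*# as a regex over the whole term,
    // mirroring how a wildcard query matches an indexed _NEST_PATH_ value.
    static boolean matchesFromWildcard(String nestPath) {
        return Pattern.matches("from#.*#", nestPath);
    }

    public static void main(String[] args) {
        // Hypothetical _NEST_PATH_ values; the #N ordinals are assumed.
        for (String path : List.of("from#0#", "from#1#", "from#0#replyTo#0#")) {
            System.out.println(path + " -> " + matchesFromWildcard(path));
        }
        // All three match, including the path that descends through
        // 'replyTo' -- the middle wildcard crosses child-label boundaries.
    }
}
```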

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 6h 10m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used as the parentFilter, 
> eliminating the need to provide one explicitly; it will be set by default 
> to "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}
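
The proposed fields can be sketched in a few lines; the '.' separator comes 
from the quoted proposal, but the helper names and the level-from-path rule 
are illustrative assumptions, not the patch's actual implementation:

```java
public class NestFieldsSketch {
    static final String SEP = ".";  // reserved separator from the proposal

    // __nestPath__: parent path joined with the child's field label,
    // e.g. "first.second.third".
    static String nestPath(String parentPath, String label) {
        return parentPath == null ? label : parentPath + SEP + label;
    }

    // _nestLevel_: the depth is just the number of labels on the path.
    static int nestLevel(String path) {
        return path.split("\\" + SEP).length;
    }

    public static void main(String[] args) {
        String path = nestPath(nestPath(nestPath(null, "first"), "second"), "third");
        System.out.println(path + " level=" + nestLevel(path));
        // first.second.third level=3
    }
}
```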






[jira] [Commented] (SOLR-12343) JSON Field Facet refinement can return incorrect counts/stats for sorted buckets

2018-07-08 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536095#comment-16536095
 ] 

Yonik Seeley commented on SOLR-12343:
-

I'm occasionally getting a failure in 
testSortedFacetRefinementPushingNonRefinedBucketBackIntoTopN
I haven't tried digging into it yet though.

> JSON Field Facet refinement can return incorrect counts/stats for sorted 
> buckets
> 
>
> Key: SOLR-12343
> URL: https://issues.apache.org/jira/browse/SOLR-12343
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Major
> Attachments: SOLR-12343.patch, SOLR-12343.patch, SOLR-12343.patch, 
> SOLR-12343.patch, SOLR-12343.patch
>
>
> The way JSON Facet's simple refinement "re-sorts" buckets after refinement 
> can cause _refined_ buckets to be "bumped out" of the topN based on the 
> refined counts/stats depending on the sort - causing _unrefined_ buckets 
> originally discounted in phase#2 to bubble up into the topN and be returned 
> to clients *with inaccurate counts/stats*
> The simplest way to demonstrate this bug (in some data sets) is with a 
> {{sort: 'count asc'}} facet:
>  * assume shard1 returns termX & termY in phase#1 because they have very low 
> shard1 counts
>  ** but they are *not* returned at all by shard2, because these terms both 
> have very high shard2 counts.
>  * Assume termX has a slightly lower shard1 count than termY, such that:
>  ** termX "makes the cutoff" for the limit=N topN buckets
>  ** termY does not make the cut, and is the "N+1" known bucket at the end of 
> phase#1
>  * termX then gets included in the phase#2 refinement request against shard2
>  ** termX now has a much higher _known_ total count than termY
>  ** the coordinator now sorts termX "worse" in the sorted list of buckets 
> than termY
>  ** which causes termY to bubble up into the topN
>  * termY is ultimately included in the final result _with incomplete 
> count/stat/sub-facet data_ instead of termX
>  ** this is all independent of the possibility that termY may actually have a 
> significantly higher total count than termX across the entire collection
>  ** the key problem is that all/most of the other terms returned to the 
> client have counts/stats that are the accumulation of all shards, but termY 
> only has the contributions from shard1
> Important Notes:
>  * This scenario can happen regardless of the amount of overrequest used. 
> Additional overrequest just increases the number of "extra" terms needed in 
> the index with "better" sort values than termX & termY in shard2
>  * {{sort: 'count asc'}} is not just an exceptional/pathological case:
>  ** any function sort where additional data provided by shards during 
> refinement can cause a bucket to "sort worse" can also cause this problem.
>  ** Examples: {{sum(price_i) asc}} , {{min(price_i) desc}} , {{avg(price_i) 
> asc|desc}} , etc...
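
The scenario in the description reduces to a few lines of arithmetic; the 
counts below are made up, but they reproduce the shape of the bug under 
{{sort: 'count asc'}} with limit=1:

```java
public class RefinementBugSketch {
    // Returns the bucket (name=count) the coordinator would return after
    // refining only termX, re-sorting by count asc, and keeping the top 1.
    static String topBucketAfterRefinement() {
        int termX1 = 3, termY1 = 4;      // low shard1 counts seen in phase #1
        int termX2 = 500;                // high shard2 count, seen only via refinement
        // Phase #1: termX < termY, so only termX makes the limit=1 cut
        // and is refined against shard2 in phase #2.
        int refinedX = termX1 + termX2;  // 503, the accurate total
        int unrefinedY = termY1;         // 4, shard1 contribution only
        // Re-sort by count asc: the unrefined termY now "wins" the topN slot.
        return unrefinedY < refinedX ? "termY=" + unrefinedY : "termX=" + refinedX;
    }

    public static void main(String[] args) {
        // Prints termY=4, even though termY's true total across both shards
        // is higher; the client sees termY with an incomplete count.
        System.out.println(topBucketAfterRefinement());
    }
}
```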






[jira] [Commented] (SOLR-12343) JSON Field Facet refinement can return incorrect counts/stats for sorted buckets

2018-07-08 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536065#comment-16536065
 ] 

Lucene/Solr QA commented on SOLR-12343:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
59s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  1m 49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 65m 33s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.IndexSizeTriggerTest |
|   | solr.cloud.api.collections.ShardSplitTest |
|   | solr.cloud.ForceLeaderTest |
|   | solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster |
|   | solr.cloud.autoscaling.sim.TestLargeCluster |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12343 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930572/SOLR-12343.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  validaterefguide  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / b7d14c5 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/140/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/140/testReport/ |
| modules | C: solr/core solr/solr-ref-guide U: solr |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/140/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.




[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 92 - Still Unstable

2018-07-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/92/

5 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:56043/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:60610/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:56043/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:60610/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([39F0A905F9864303:933D7AF74E5596D3]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1106)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:993)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:290)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[JENKINS-EA] Lucene-Solr-BadApples-7.x-Linux (64bit/jdk-11-ea+21) - Build # 61 - Still Unstable!

2018-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/61/
Java: 64bit/jdk-11-ea+21 -XX:+UseCompressedOops -XX:+UseParallelGC

218 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([69F8BE0B18C8]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.DocValuesNotIndexedTest

Error Message:
Collection not found: dv_coll

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: dv_coll
at __randomizedtesting.SeedInfo.seed([69F8BE0B18C8]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:853)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.cloud.DocValuesNotIndexedTest.createCluster(DocValuesNotIndexedTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:874)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4722 - Still Unstable!

2018-07-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4722/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.cdcr.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster

Error Message:
Document mismatch on target after sync expected:<2000> but was:<1901>

Stack Trace:
java.lang.AssertionError: Document mismatch on target after sync 
expected:<2000> but was:<1901>
at 
__randomizedtesting.SeedInfo.seed([5E1B27B7BB987CD3:8A5E6CEE5CCECF28]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.apache.solr.cloud.cdcr.CdcrBootstrapTest.testBootstrapWithContinousIndexingOnSourceCluster(CdcrBootstrapTest.java:296)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated