[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-07-09 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r201222488
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/TestNestedUpdateProcessor.java ---
@@ -120,25 +122,41 @@ public void before() throws Exception {
 
   @Test
   public void testDeeplyNestedURPGrandChild() throws Exception {
+final String[] tests = {
+"/response/docs/[0]/id=='" + grandChildId + "'",
+"/response/docs/[0]/" + IndexSchema.NEST_PATH_FIELD_NAME + 
"=='children#0/grandChild#'"
+};
 indexSampleData(jDoc);
 
-assertJQ(req("q", IndexSchema.NEST_PATH_FIELD_NAME + ":*" + 
PATH_SEP_CHAR + "grandChild" + NUM_SEP_CHAR + "*" + NUM_SEP_CHAR,
+assertJQ(req("q", IndexSchema.NEST_PATH_FIELD_NAME + ":*" + 
PATH_SEP_CHAR + "grandChild" + NUM_SEP_CHAR + "*",
 "fl","*",
 "sort","id desc",
 "wt","json"),
-"/response/docs/[0]/id=='" + grandChildId + "'");
+tests);
   }
 
   @Test
   public void testDeeplyNestedURPChildren() throws Exception {
--- End diff --

I added a new sanity unit test for this URP
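The new assertion checks a `_nest_path_` value of the form `children#0/grandChild#`, built from PATH_SEP_CHAR and NUM_SEP_CHAR. A minimal sketch of the glob-style match the test's query performs, assuming '/' and '#' as the two separators (Python stand-in, illustrative only):

```python
from fnmatch import fnmatch

PATH_SEP_CHAR = "/"   # assumed value of the path separator
NUM_SEP_CHAR = "#"    # assumed value of the ordinal separator

def matches_grandchild(nest_path):
    # Mirrors the test's query pattern: "*" + "/" + "grandChild" + "#" + "*"
    pattern = "*" + PATH_SEP_CHAR + "grandChild" + NUM_SEP_CHAR + "*"
    return fnmatch(nest_path, pattern)

print(matches_grandchild("children#0/grandChild#"))  # True
print(matches_grandchild("children#0"))              # False
```

The trailing "*" after NUM_SEP_CHAR is what lets the pattern match any ordinal position of the grandchild, which is the change the diff makes.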


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-07-09 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16538078#comment-16538078
 ] 

mosh commented on SOLR-12441:
-

{quote}Having a query by ancestor ability would allow me to filter where 
"comment" is an ancestor.{quote}
Would this be a fit for PathHierarchyTokenizerFactory in conjunction 
with ToParentBlockJoinQuery?

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 7.5h
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be a URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the nesting level of the specified field 
> in the document, as an int value. It can be used to derive the parentFilter 
> automatically, eliminating the need to provide one explicitly; the default 
> would be "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}
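The three proposed metadata fields can be derived with a recursive walk over an input document. A hedged sketch: the field names and the '.' separator come from the proposal above, while the traversal itself (dict/list representation, the `annotate` helper) is purely illustrative and not the actual URP:

```python
def annotate(doc, parent_id=None, level=0, path=""):
    """Recursively attach _nestParent_, _nestLevel_ and _nestPath_
    metadata to every document in a nested structure (illustrative)."""
    doc["_nestParent_"] = parent_id
    doc["_nestLevel_"] = level
    doc["_nestPath_"] = path
    # Snapshot items first so the metadata keys set above are not re-walked.
    for key, value in list(doc.items()):
        if isinstance(value, list) and value and isinstance(value[0], dict):
            child_path = (path + "." if path else "") + key
            for child in value:
                annotate(child, doc.get("id"), level + 1, child_path)
    return doc

doc = {"id": "1", "comments": [{"id": "2", "replies": [{"id": "3"}]}]}
annotate(doc)
print(doc["comments"][0]["_nestPath_"])                # comments
print(doc["comments"][0]["replies"][0]["_nestPath_"])  # comments.replies
```

With these fields in place, a transformer only needs _nestParent_ links to stitch the flat child documents back into the original tree, and _nestLevel_ to synthesize the parentFilter.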



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-07-09 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537381#comment-16537381
 ] 

David Smiley commented on SOLR-12519:
-

bq. my query is for all the children of "a:b", which contain the key "e" in them

That's one use-case, "all children that have a certain key" but there are 
perhaps more use-cases to be addressed in this issue?
# all child docs matching some custom query   (e.g. your example above of 
{{e:\*}})
# all child docs of a certain key, e.g. all "c" docs.
# ... and all their descendants Y/N
# all child docs at a certain path
# ... and all their descendants Y/N

In all these cases, I think we must always retrieve all ancestors up to the 
root document.

Perhaps some path syntax/language could articulate this?

Ultimately we'll want to utilize PathHierarchyTokenizer in some way.
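PathHierarchyTokenizer emits one token per path prefix, which is what turns "this path and all its descendants" into a single term match. A rough sketch of that behavior, assuming '/' as the delimiter (pure Python; the real tokenizer has delimiter/replacement/skip options not modeled here):

```python
def path_hierarchy_tokens(path, delimiter="/"):
    """Emit every ancestor prefix of a path, the way Lucene's
    PathHierarchyTokenizer does: a/b/c -> [a, a/b, a/b/c]."""
    parts = path.split(delimiter)
    return [delimiter.join(parts[: i + 1]) for i in range(len(parts))]

print(path_hierarchy_tokens("a/b/c"))
# ['a', 'a/b', 'a/b/c']
```

Because every prefix is indexed as its own term, a query for the single term `a/b` matches the doc at `a/b` and every descendant such as `a/b/c`, with no wildcard needed.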


> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I propose the transformer also have the ability to return 
> only part of the original hierarchy, to prevent unnecessary block join 
> queries. e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" which contain the key 
> "e", the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag is not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}
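The "only children" behavior described in the issue amounts to filtering the child list by the child query and either returning the matches alone or re-attaching them to the parent. A toy sketch (the `only_children` flag name and dict representation are hypothetical; the real transformer operates on indexed block-join documents, not JSON):

```python
def transform(doc, child_key, child_query, only_children=False):
    """Return matching children alone, or the parent with its
    non-matching children pruned (illustrative only)."""
    matches = [c for c in doc.get(child_key, []) if child_query(c)]
    if only_children:
        return matches
    pruned = dict(doc)          # shallow copy; keep parent fields
    pruned[child_key] = matches  # drop non-matching children
    return pruned

doc = {"a": "b", "c": [{"e": "f"}, {"e": "g"}, {"h": "i"}]}
has_e = lambda c: "e" in c   # stands in for the child query e:*
print(transform(doc, "c", has_e, only_children=True))
# [{'e': 'f'}, {'e': 'g'}]
print(transform(doc, "c", has_e))
# {'a': 'b', 'c': [{'e': 'f'}, {'e': 'g'}]}
```

Both outputs match the two cases in the issue description: children only, or the full hierarchy with only matching children retained.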







[jira] [Commented] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-09 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537479#comment-16537479
 ] 

Steve Rowe commented on SOLR-12412:
---

Policeman Jenkins found a reproducing seed 
[https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/734/] for test failures 
that {{git bisect}} blames on commit {{fddf35c}} on this issue:

{noformat}
Checking out Revision 80eb5da7393dd25c8cb566194eb9158de212bfb2 
(refs/remotes/origin/branch_7x)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPullReplica 
-Dtests.method=testKillLeader -Dtests.seed=89003455250E12D2 -Dtests.slow=true 
-Dtests.locale=lg -Dtests.timezone=America/Rainy_River -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 60.4s J1 | TestPullReplica.testKillLeader <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Replica core_node4 not 
up to date after 10 seconds expected:<1> but was:<0>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([89003455250E12D2:C016C0E147B58684]:0)
   [junit4]>at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:542)
   [junit4]>at 
org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:490)
   [junit4]>at 
org.apache.solr.cloud.TestPullReplica.testKillLeader(TestPullReplica.java:309)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)
   [junit4]>at java.base/java.lang.Thread.run(Thread.java:844)
[...]
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPullReplica 
-Dtests.method=testRemoveAllWriterReplicas -Dtests.seed=89003455250E12D2 
-Dtests.slow=true -Dtests.locale=lg -Dtests.timezone=America/Rainy_River 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII
   [junit4] FAILURE 24.6s J1 | TestPullReplica.testRemoveAllWriterReplicas <<<
   [junit4]> Throwable #1: java.lang.AssertionError: Replica core_node4 not 
up to date after 10 seconds expected:<1> but was:<0>
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([89003455250E12D2:1A0EA86E31F0FB7B]:0)
   [junit4]>at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:542)
   [junit4]>at 
org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:490)
   [junit4]>at 
org.apache.solr.cloud.TestPullReplica.testRemoveAllWriterReplicas(TestPullReplica.java:303)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   [junit4]>at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
   [junit4]>at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   [junit4]>at 
java.base/java.lang.reflect.Method.invoke(Method.java:564)
   [junit4]>at java.base/java.lang.Thread.run(Thread.java:844)
[...]
   [junit4]   2> NOTE: test params are: 
codec=HighCompressionCompressingStoredFields(storedFieldsFormat=CompressingStoredFieldsFormat(compressionMode=HIGH_COMPRESSION,
 chunkSize=8218, maxDocsPerChunk=6, blockSize=10), 
termVectorsFormat=CompressingTermVectorsFormat(compressionMode=HIGH_COMPRESSION,
 chunkSize=8218, blockSize=10)), sim=RandomSimilarity(queryNorm=true): {}, 
locale=lg, timezone=America/Rainy_River
   [junit4]   2> NOTE: Mac OS X 10.11.6 x86_64/Oracle Corporation 9 
(64-bit)/cpus=3,threads=1,free=262884464,total=536870912
{noformat}

> Leader should give up leadership when IndexWriter.tragedy occur
> ---
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader hits some kind of unrecoverable exception (e.g. 
> CorruptIndexException), the shard goes into a read-only state and a human 
> has to intervene. In that case, it would be best if the leader gave up its 
> leadership and let another replica become the leader. 





[JENKINS] Lucene-Solr-repro - Build # 945 - Still Unstable

2018-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/945/

[...truncated 47 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/668/consoleText

[repro] Revision: 3fccbf9c39636ba1f53fd422154ca7a51016e93d

[repro] Repro line:  ant test  -Dtestcase=CdcrBidirectionalTest 
-Dtests.method=testBiDir -Dtests.seed=4C647113E271E934 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=es-CU -Dtests.timezone=US/Aleutian 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
ad01baedbfacc4d7ccb375c6af6f79ff2c478509
[repro] git fetch
[repro] git checkout 3fccbf9c39636ba1f53fd422154ca7a51016e93d

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   CdcrBidirectionalTest
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.CdcrBidirectionalTest" -Dtests.showOutput=onerror  
-Dtests.seed=4C647113E271E934 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=es-CU -Dtests.timezone=US/Aleutian -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 1344 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
[repro] git checkout ad01baedbfacc4d7ccb375c6af6f79ff2c478509

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-Tests-7.x - Build # 669 - Still Unstable

2018-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/669/

9 tests failed.
FAILED:  
org.apache.solr.cloud.LeaderTragicEventTest.testOtherReplicasAreNotActive

Error Message:
Jetty Connector is not open: -2

Stack Trace:
java.lang.IllegalStateException: Jetty Connector is not open: -2
at 
__randomizedtesting.SeedInfo.seed([F7B46B6CC8859B4D:7200471BF47A22D5]:0)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.getBaseUrl(JettySolrRunner.java:499)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.getReplicaJetty(MiniSolrCloudCluster.java:539)
at 
org.apache.solr.cloud.LeaderTragicEventTest.corruptLeader(LeaderTragicEventTest.java:100)
at 
org.apache.solr.cloud.LeaderTragicEventTest.testOtherReplicasAreNotActive(LeaderTragicEventTest.java:150)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.TestPullReplica.testRemoveAllWriterReplicas

Error Message:
Replica core_node4 not up to 

[jira] [Created] (SOLR-12541) Metrics handler throws an error if there are transient cores

2018-07-09 Thread Nandakishore Krishna (JIRA)
Nandakishore Krishna created SOLR-12541:
---

 Summary: Metrics handler throws an error if there are transient 
cores
 Key: SOLR-12541
 URL: https://issues.apache.org/jira/browse/SOLR-12541
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Affects Versions: 7.2.1
Reporter: Nandakishore Krishna


My environment is as follows
 * Solr 7.2.1 in standalone mode.
 * 32GB heap
 * 150 cores with data getting continuously ingested to ~10 cores and all of 
the cores queried.
 * transient cache size is set to 30.

The solr.xml is as follows
{code:xml}
<?xml version="1.0" encoding="UTF-8" ?>
<solr>
  <int name="transientCacheSize">32</int>
  <bool name="shareSchema">true</bool>
  <str name="configSetBaseDir">${configSetBaseDir:configsets}</str>
  <shardHandlerFactory name="shardHandlerFactory" class="HttpShardHandlerFactory">
    <int name="socketTimeout">${socketTimeout:60}</int>
    <int name="connTimeout">${connTimeout:6}</int>
  </shardHandlerFactory>
</solr>
{code}
I get the following error when I request for "/solr/admin/metrics".
{code}
{
"responseHeader": {
"status": 500,
"QTime": 31
},
"error": {
"msg": "Already closed",
"trace": "org.apache.lucene.store.AlreadyClosedException: Already 
closed\n\tat 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:337)\n\tat
 org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:351)\n\tat 
org.apache.solr.core.SolrCore.getIndexDir(SolrCore.java:330)\n\tat 
org.apache.solr.handler.ReplicationHandler.lambda$initializeMetrics$5(ReplicationHandler.java:849)\n\tat
 
org.apache.solr.util.stats.MetricUtils.convertGauge(MetricUtils.java:488)\n\tat 
org.apache.solr.util.stats.MetricUtils.convertMetric(MetricUtils.java:274)\n\tat
 
org.apache.solr.util.stats.MetricUtils.lambda$toMaps$4(MetricUtils.java:213)\n\tat
 java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)\n\tat 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)\n\tat 
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)\n\tat 
java.util.TreeMap$KeySpliterator.forEachRemaining(TreeMap.java:2746)\n\tat 
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)\n\tat 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)\n\tat
 
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)\n\tat
 
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)\n\tat
 java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)\n\tat 
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)\n\tat 
org.apache.solr.util.stats.MetricUtils.toMaps(MetricUtils.java:211)\n\tat 
org.apache.solr.handler.admin.MetricsHandler.handleRequestBody(MetricsHandler.java:108)\n\tat
 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)\n\tat
 org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:735)\n\tat 
org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:716)\n\tat
 org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:497)\n\tat 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)\n\tat
 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
 org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat 
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)\n\tat
 org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)\n\tat 
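The trace shows a ReplicationHandler metrics gauge calling SolrCore.getIndexDir() on a transient core whose DirectoryFactory has already been closed, and the resulting AlreadyClosedException failing the whole /admin/metrics response. One defensive pattern a fix could take is catching the closed-resource exception inside the gauge wrapper so a single unloaded core yields a placeholder value instead of a 500. The sketch below is a Python stand-in under that assumption; RuntimeError plays the role of Lucene's AlreadyClosedException, and the function names are hypothetical:

```python
def safe_gauge(read_value, default="(unavailable)"):
    """Wrap a gauge callback so a closed core yields a placeholder
    instead of failing the whole metrics response (illustrative)."""
    def wrapped():
        try:
            return read_value()
        except RuntimeError:  # stands in for AlreadyClosedException
            return default
    return wrapped

def read_index_dir():
    # Simulates SolrCore.getIndexDir() on an unloaded transient core.
    raise RuntimeError("Already closed")

gauge = safe_gauge(read_index_dir)
print(gauge())  # (unavailable)
```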

[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 258 - Still unstable

2018-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/258/

8 tests failed.
FAILED:  
org.apache.solr.cloud.AliasIntegrationTest.testDeleteAliasWithExistingCollectionName

Error Message:
collection_old should point to collection_new

Stack Trace:
java.lang.AssertionError: collection_old should point to collection_new
at 
__randomizedtesting.SeedInfo.seed([75A725E7C3349126:3D5F316875F471B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.AliasIntegrationTest.testDeleteAliasWithExistingCollectionName(AliasIntegrationTest.java:376)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.LeaderTragicEventTest.testOtherReplicasAreNotActive

Error Message:
Jetty Connector is not open: -2

Stack Trace:
java.lang.IllegalStateException: Jetty Connector is not open: -2
at 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_172) - Build # 22427 - Unstable!

2018-07-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22427/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseSerialGC

18 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.TestSolrProperties

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([F16ADC998060EDDE]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.TestSolrProperties

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([F16ADC998060EDDE]:0)


FAILED:  org.apache.solr.client.solrj.embedded.TestSolrProperties.testProperties

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([F16ADC998060EDDE]:0)


FAILED:  org.apache.solr.client.solrj.embedded.TestSolrProperties.testProperties

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([F16ADC998060EDDE]:0)


FAILED:  
org.apache.solr.handler.component.InfixSuggestersTest.testAnalyzingInfixSuggesterBuildThenReload

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([F5F55A0D6C698C6B]:0)


FAILED:  
org.apache.solr.handler.component.InfixSuggestersTest.testAnalyzingInfixSuggesterBuildThenReload

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([F5F55A0D6C698C6B]:0)


FAILED:  
org.apache.solr.handler.component.InfixSuggestersTest.testAnalyzingInfixSuggesterBuildThenReload

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([F5F55A0D6C698C6B]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.InfixSuggestersTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([F5F55A0D6C698C6B]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.InfixSuggestersTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([F5F55A0D6C698C6B]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.InfixSuggestersTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([F5F55A0D6C698C6B]:0)


FAILED:  org.apache.solr.cloud.TestPullReplica.testKillLeader

Error Message:
Replica core_node4 not up to date after 10 seconds expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Replica core_node4 not up to date after 10 seconds 
expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([F5F55A0D6C698C6B:BCE3AEB90ED2183D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:542)
at 
org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:490)
at 
org.apache.solr.cloud.TestPullReplica.testKillLeader(TestPullReplica.java:309)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 

BadApple report

2018-07-09 Thread Erick Erickson
Well, the list is getting smaller, which is A Good Thing. Here's the
list currently, full report attached.

Let me know if there are objections.

I may be overly optimistic, but we might finally be approaching the
point where we can start thinking about the backlog.

  **Annotations will be removed from the following tests because they
haven't failed in the last month.

  **Methods: 3
   MultiThreadedOCPTest.test
   TestLeaderInitiatedRecoveryThread.testPublishDownState
   TestSimDistributedQueue.testDistributedQueue


Failures in Hoss' reports for the last 4 collected reports.

Failures in the last 4 reports, will BadApple
   Report   Pct runsfails   test
 0123  11.9 2125146  CdcrBidirectionalTest.testBiDir
WILL NOT ANNOTATE
 0123 400.0   12 22  HdfsRestartWhileUpdatingTest(suite)
 0123  88.9   21 12  HdfsRestartWhileUpdatingTest.test
 0123   1.1 1466 12  MathExpressionTest.testDistributions
 0123   1.0 1874 14  MetricsHistoryHandlerTest(suite)
 0123   0.2 1409 10  ReplicationFactorTest.test
 0123   1.0 1354 12  SaslZkACLProviderTest(suite)
 0123  74.1   83 62  SharedFSAutoReplicaFailoverTest(suite)
 0123  16.1  133 11
TestGenericDistributedQueue.testDistributedQueue
 0123   0.2 1936  5  TestIndexWriterDelete(suite)
 0123   4.1 1902 20  TestNamedUpdateProcessors.test
 0123   0.2 1924  7  TestRecovery(suite)
 0123   7.8 1904200  TestStressCloudBlindAtomicUpdates(suite)
DO NOT ENABLE LIST:
'IndexSizeTriggerTest.testMergeIntegration'
'IndexSizeTriggerTest.testMixedBounds'
'IndexSizeTriggerTest.testSplitIntegration'
'IndexSizeTriggerTest.testTrigger'
'TestControlledRealTimeReopenThread.testCRTReopen'
'TestICUNormalizer2CharFilter.testRandomStrings'
'TestICUTokenizerCJK'
'TestImpersonationWithHadoopAuth.testForwarding'
'TestLTRReRankingPipeline.testDifferentTopN'
'TestRandomChains'


DO NOT ANNOTATE LIST
CdcrBidirectionalTest.testBiDir
TestRandomChains.testRandomChainsWithLargeStrings

Processing file (History bit 3): HOSS-2018-07-09.csv
Processing file (History bit 2): HOSS-2018-07-02.csv
Processing file (History bit 1): HOSS-2018-06-25.csv
Processing file (History bit 0): HOSS-2018-06-18.csv


**Annotated tests/suites that didn't fail in the last 4 weeks.

  **Tests and suites removed from the next two lists because they were 
specified in 'doNotEnable' in the properties file
 no tests removed

  **Annotations will be removed from the following tests because they haven't 
failed in the last month.

  **Methods: 3
   MultiThreadedOCPTest.test
   TestLeaderInitiatedRecoveryThread.testPublishDownState
   TestSimDistributedQueue.testDistributedQueue

  **Suites: 0


Failures in Hoss' reports for the last 4 collected reports.

There were 793 unannotated tests that failed in Hoss' rollups. Ordered by the 
date I downloaded the rollup file, newest->oldest. See above for the dates the 
files were collected 
These tests were NOT BadApple'd or AwaitsFix'd
All tests that failed 4 weeks running will be BadApple'd unless there are 
objections

Failures in the last 4 reports..
   Report   Pct runsfails   test
 0123  11.9 2125146  CdcrBidirectionalTest.testBiDir
 0123 400.0   12 22  HdfsRestartWhileUpdatingTest(suite)
 0123  88.9   21 12  HdfsRestartWhileUpdatingTest.test
 0123   1.1 1466 12  MathExpressionTest.testDistributions
 0123   1.0 1874 14  MetricsHistoryHandlerTest(suite)
 0123   0.2 1409 10  ReplicationFactorTest.test
 0123   1.0 1354 12  SaslZkACLProviderTest(suite)
 0123  74.1   83 62  SharedFSAutoReplicaFailoverTest(suite)
 0123  16.1  133 11  
TestGenericDistributedQueue.testDistributedQueue
 0123   0.2 1936  5  TestIndexWriterDelete(suite)
 0123   4.1 1902 20  TestNamedUpdateProcessors.test
 0123   0.2 1924  7  TestRecovery(suite)
 0123   7.8 1904200  TestStressCloudBlindAtomicUpdates(suite)
 Will BadApple all tests above this line except ones listed at the 
top**



 0120.3 1191  3  
HadoopSSLCredentialProviderTest.testConstructorRequiresCredPath
 0120.3 1191  3  
HadoopSSLCredentialProviderTest.testGetCredentials
 0120.3 1193  5  
HdfsAutoAddReplicasIntegrationTest.testSimple
 012   73.7   65 48  
HdfsTlogReplayBufferedWhileIndexingTest(suite)
 0120.2 1402  3  HttpSolrClientBuilderTest(suite)
 0120.2 1403  3  LBHttpSolrClientBuilderTest(suite)
 0120.2 1407  3  
LBHttpSolrClientTest.testLBHttpSolrClientHttpClientResponseParserStringArray
 012  

[jira] [Commented] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-09 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537367#comment-16537367
 ] 

Erick Erickson commented on SOLR-12008:
---

All tests pass on Windows, so I'll commit this probably tomorrow unless someone 
finds something else.

> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch, 
> SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example, users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12343) JSON Field Facet refinement can return incorrect counts/stats for sorted buckets

2018-07-09 Thread Yonik Seeley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537904#comment-16537904
 ] 

Yonik Seeley commented on SOLR-12343:
-

Looks good, thanks for tracking that down!

> JSON Field Facet refinement can return incorrect counts/stats for sorted 
> buckets
> 
>
> Key: SOLR-12343
> URL: https://issues.apache.org/jira/browse/SOLR-12343
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Major
> Attachments: SOLR-12343.patch, SOLR-12343.patch, SOLR-12343.patch, 
> SOLR-12343.patch, SOLR-12343.patch, SOLR-12343.patch
>
>
> The way JSON Facet's simple refinement "re-sorts" buckets after refinement 
> can cause _refined_ buckets to be "bumped out" of the topN based on the 
> refined counts/stats depending on the sort - causing _unrefined_ buckets 
> originally discounted in phase#2 to bubble up into the topN and be returned 
> to clients *with inaccurate counts/stats*
> The simplest way to demonstrate this bug (in some data sets) is with a 
> {{sort: 'count asc'}} facet:
>  * assume shard1 returns termX & termY in phase#1 because they have very low 
> shard1 counts
>  ** but *not* returned at all by shard2, because these terms both have very 
> high shard2 counts.
>  * Assume termX has a slightly lower shard1 count than termY, such that:
>  ** termX "makes the cut" for the limit=N topN buckets
>  ** termY does not make the cut, and is the "N+1" known bucket at the end of 
> phase#1
>  * termX then gets included in the phase#2 refinement request against shard2
>  ** termX now has a much higher _known_ total count than termY
>  ** the coordinator now sorts termX "worse" in the sorted list of buckets 
> than termY
>  ** which causes termY to bubble up into the topN
>  * termY is ultimately included in the final result _with incomplete 
> count/stat/sub-facet data_ instead of termX
>  ** this is all independent of the possibility that termY may actually have a 
> significantly higher total count than termX across the entire collection
>  ** the key problem is that all/most of the other terms returned to the 
> client have counts/stats that are the accumulation of all shards, but termY 
> only has the contributions from shard1
> Important Notes:
>  * This scenario can happen regardless of the amount of overrequest used. 
> Additional overrequest just increases the number of "extra" terms needed in 
> the index with "better" sort values than termX & termY in shard2
>  * {{sort: 'count asc'}} is not just an exceptional/pathological case:
>  ** any function sort where additional data provided by shards during 
> refinement can cause a bucket to "sort worse" can also cause this problem.
>  ** Examples: {{sum(price_i) asc}} , {{min(price_i) desc}} , {{avg(price_i) 
> asc|desc}} , etc...
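The bubbling-up effect described above can be reproduced with a few lines of arithmetic. The following is a toy simulation with hypothetical shard counts, not Solr code: termX and termY are rare on shard1 but frequent on shard2, so shard2's phase-1 response never mentions them.

```java
// Toy simulation of the two-phase "count asc" refinement problem.  All
// term names and counts are hypothetical; this only models the
// coordinator-side merge/refine/re-sort arithmetic, not Solr internals.
import java.util.*;

public class RefinementBug {
    public static void main(String[] args) {
        Map<String, Integer> shard1 = Map.of("termX", 1, "termY", 2, "termZ", 3);
        Map<String, Integer> shard2 = Map.of("termX", 100, "termY", 100, "termA", 5, "termB", 6);
        int limit = 1;  // topN buckets returned to the client

        // Phase 1: each shard returns its (limit + 1 overrequest) lowest-count
        // buckets; shard2's response omits termX and termY entirely.
        Map<String, Integer> known = new HashMap<>();
        for (Map<String, Integer> shard : List.of(shard1, shard2)) {
            shard.entrySet().stream()
                 .sorted(Map.Entry.comparingByValue())
                 .limit(limit + 1)
                 .forEach(e -> known.merge(e.getKey(), e.getValue(), Integer::sum));
        }
        // termX (known count 1) wins phase 1; termY (count 2) is the N+1 candidate.
        System.out.println("phase1 topN: " + topN(known, limit));

        // Phase 2: refine only the phase-1 topN (termX) against shard2.
        known.merge("termX", shard2.get("termX"), Integer::sum);  // now 101

        // Re-sorting after refinement bumps termX out, so the *unrefined*
        // termY enters the topN with only its shard1 count (2 instead of 102).
        List<String> finalTop = topN(known, limit);
        System.out.println("final topN: " + finalTop + " count=" + known.get(finalTop.get(0)));
    }

    static List<String> topN(Map<String, Integer> counts, int n) {
        return counts.entrySet().stream()
                     .sorted(Map.Entry.comparingByValue())
                     .limit(n)
                     .map(Map.Entry::getKey)
                     .toList();
    }
}
```

Running it prints termX as the phase-1 winner and termY as the final bucket with only its shard1 contribution, matching the scenario in the description.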






[jira] [Comment Edited] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-09 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537875#comment-16537875
 ] 

Cao Manh Dat edited comment on SOLR-12412 at 7/10/18 2:02 AM:
--

Thanks [~steve_rowe], I will take a look at the failure.

[~tomasflobbe] I tried to do that, but it would be quite complex; the process 
would be (not to mention the race conditions we could run into):
* The core publishes itself as DOWN
* The core cancels its election context
* The core deletes its index dir
* ... 

Given that a tragic exception is not a frequent event, using the Overseer 
brings us some benefits:
* The update request that hit the exception does not get blocked (async)
* A much cleaner and better-tested approach
* We can easily make the solution more robust. Ex: when deleting the replica 
fails because its node went down, the Overseer can remove the replica from the 
clusterstate (so even when the node comes back, it will be automatically 
removed) and then add a new replica on another node.


was (Author: caomanhdat):
Thanks [~steve_rowe], I will take a look at the failure.

[~tomasflobbe] I tried to do that, but it would be quite complex; the process 
would be (not to mention the race conditions we could run into):
* The core publishes itself as DOWN
* The core cancels its election context
* The core deletes its index dir
* ... 
Given that a tragic exception is not a frequent event, using the Overseer 
brings us some benefits:
* The update request that hit the exception does not get blocked (async)
* A much cleaner and better-tested approach
* We can easily make the solution more robust. Ex: when deleting the replica 
fails because its node went down, the Overseer can remove the replica from the 
clusterstate (so even when the node comes back, it will be automatically 
removed) and then add a new replica on another node.

> Leader should give up leadership when IndexWriter.tragedy occur
> ---
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader meets some kind of unrecoverable exception (i.e. 
> CorruptedIndexException), the shard will go into a read-only state and a 
> human has to intervene. In that case, it is best if the leader gives up 
> its leadership and lets another replica become the leader. 






[jira] [Commented] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-09 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537875#comment-16537875
 ] 

Cao Manh Dat commented on SOLR-12412:
-

Thanks [~steve_rowe], I will take a look at the failure.

[~tomasflobbe] I tried to do that, but it would be quite complex; the process 
would be (not to mention the race conditions we could run into):
* The core publishes itself as DOWN
* The core cancels its election context
* The core deletes its index dir
* ... 
Given that a tragic exception is not a frequent event, using the Overseer 
brings us some benefits:
* The update request that hit the exception does not get blocked (async)
* A much cleaner and better-tested approach
* We can easily make the solution more robust. Ex: when deleting the replica 
fails because its node went down, the Overseer can remove the replica from the 
clusterstate (so even when the node comes back, it will be automatically 
removed) and then add a new replica on another node.
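The async design choice argued for above can be sketched in a few lines. All type and method names here (the queue, the operation strings) are hypothetical stand-ins, not actual Solr APIs: the point is only that the update thread enqueues "delete + re-add replica" work for the Overseer and returns, instead of orchestrating DOWN-publish, election-cancel, and index-dir-delete inline.

```java
// Hedged sketch of the Overseer-based recovery flow (hypothetical names,
// not Solr code): the leader reacts to a tragic IndexWriter exception by
// enqueueing asynchronous replica operations for an Overseer-like loop.
import java.util.ArrayDeque;
import java.util.Queue;

public class TragicLeaderSketch {
    // Hypothetical stand-in for the Overseer's async work queue.
    static final Queue<String> overseerQueue = new ArrayDeque<>();

    static void onTragicException(String coreName, Throwable tragedy) {
        // The update thread is not blocked: it only enqueues work and returns.
        overseerQueue.add("DELETE_REPLICA " + coreName);
        overseerQueue.add("ADD_REPLICA " + coreName);
    }

    // Hypothetical Overseer loop draining the queue in order.
    static void overseerDrain() {
        String op;
        while ((op = overseerQueue.poll()) != null) {
            System.out.println("overseer processed: " + op);
        }
    }

    public static void main(String[] args) {
        onTragicException("core_node4", new RuntimeException("simulated tragic exception"));
        overseerDrain();
    }
}
```

Because the delete/add pair lives in a queue rather than in the update path, a crashed Overseer or node can pick the work up later, which is the robustness benefit mentioned above.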

> Leader should give up leadership when IndexWriter.tragedy occur
> ---
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader meets some kind of unrecoverable exception (i.e. 
> CorruptedIndexException), the shard will go into a read-only state and a 
> human has to intervene. In that case, it is best if the leader gives up 
> its leadership and lets another replica become the leader. 






[jira] [Commented] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537901#comment-16537901
 ] 

ASF subversion and git services commented on SOLR-12412:


Commit cd08c7ef13613ceb88c1caf7b25e793ed51d47af in lucene-solr's branch 
refs/heads/master from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=cd08c7e ]

SOLR-12412: release IndexWriter after getting tragic exception


> Leader should give up leadership when IndexWriter.tragedy occur
> ---
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader meets some kind of unrecoverable exception (i.e. 
> CorruptedIndexException), the shard will go into a read-only state and a 
> human has to intervene. In that case, it is best if the leader gives up 
> its leadership and lets another replica become the leader. 






[jira] [Commented] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537903#comment-16537903
 ] 

ASF subversion and git services commented on SOLR-12412:


Commit 0dc6ef996eab378bdd8329153bdecddbf89af9ee in lucene-solr's branch 
refs/heads/branch_7x from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0dc6ef9 ]

SOLR-12412: release IndexWriter after getting tragic exception


> Leader should give up leadership when IndexWriter.tragedy occur
> ---
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader meets some kind of unrecoverable exception (i.e. 
> CorruptedIndexException), the shard will go into a read-only state and a 
> human has to intervene. In that case, it is best if the leader gives up 
> its leadership and lets another replica become the leader. 






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1960 - Still Unstable!

2018-07-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1960/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.SuggestComponentTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([F7A686C744D181DD]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.SuggestComponentTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([F7A686C744D181DD]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.SuggestComponentTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([F7A686C744D181DD]:0)


FAILED:  org.apache.solr.client.solrj.embedded.TestSolrProperties.testProperties

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([CD721C422716FDFD]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.TestSolrProperties

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([CD721C422716FDFD]:0)


FAILED:  org.apache.solr.client.solrj.embedded.TestSolrProperties.testProperties

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([CD721C422716FDFD]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.TestSolrProperties

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([CD721C422716FDFD]:0)


FAILED:  
org.apache.solr.handler.component.SuggestComponentTest.testBuildOnStartupWithCoreReload

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([F7A686C744D181DD]:0)


FAILED:  
org.apache.solr.handler.component.SuggestComponentTest.testBuildOnStartupWithCoreReload

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([F7A686C744D181DD]:0)


FAILED:  
org.apache.solr.handler.component.SuggestComponentTest.testBuildOnStartupWithCoreReload

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([F7A686C744D181DD]:0)




Build Log:
[...truncated 14646 lines...]
   [junit4] Suite: org.apache.solr.handler.component.SuggestComponentTest
   [junit4]   2> Creating dataDir: 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/build/solr-core/test/J0/temp/solr.handler.component.SuggestComponentTest_F7A686C744D181DD-001/init-core-data-001
   [junit4]   2> 459342 WARN  
(SUITE-SuggestComponentTest-seed#[F7A686C744D181DD]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=2 numCloses=2
   [junit4]   2> 459342 INFO  
(SUITE-SuggestComponentTest-seed#[F7A686C744D181DD]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 459343 INFO  
(SUITE-SuggestComponentTest-seed#[F7A686C744D181DD]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 459344 INFO  
(SUITE-SuggestComponentTest-seed#[F7A686C744D181DD]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 459344 INFO  
(SUITE-SuggestComponentTest-seed#[F7A686C744D181DD]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 459344 INFO  
(SUITE-SuggestComponentTest-seed#[F7A686C744D181DD]-worker) [] 
o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: 
[/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/test-files/solr/collection1/lib,
 
/export/home/jenkins/workspace/Lucene-Solr-master-Solaris/solr/core/src/test-files/solr/collection1/lib/classes]
   [junit4]   2> 459362 INFO  
(SUITE-SuggestComponentTest-seed#[F7A686C744D181DD]-worker) [] 

[jira] [Updated] (LUCENE-8390) Replace MatchesIteratorSupplier with IOSupplier

2018-07-09 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-8390:
--
Attachment: LUCENE-8390.patch

> Replace MatchesIteratorSupplier with IOSupplier
> ---
>
> Key: LUCENE-8390
> URL: https://issues.apache.org/jira/browse/LUCENE-8390
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8390.patch
>
>
> Matches objects are constructed using a deferred supplier pattern. This is 
> currently done using a specialised MatchesIteratorSupplier interface, but 
> this can be deprecated/removed and replaced with the generic IOSupplier in 
> the utils package.






[jira] [Created] (LUCENE-8390) Replace MatchesIteratorSupplier with IOSupplier

2018-07-09 Thread Alan Woodward (JIRA)
Alan Woodward created LUCENE-8390:
-

 Summary: Replace MatchesIteratorSupplier with IOSupplier
 Key: LUCENE-8390
 URL: https://issues.apache.org/jira/browse/LUCENE-8390
 Project: Lucene - Core
  Issue Type: New Feature
Reporter: Alan Woodward
Assignee: Alan Woodward
 Attachments: LUCENE-8390.patch

Matches objects are constructed using a deferred supplier pattern. This is 
currently done using a specialised MatchesIteratorSupplier interface, but this 
can be deprecated/removed and replaced with the generic IOSupplier in the 
utils package.
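The deferred supplier pattern mentioned above can be illustrated with a small self-contained example. The local IOSupplier below only mirrors the shape of Lucene's interface (a Supplier whose get() may throw IOException); the Matches record and field values are placeholders, not Lucene's Matches API.

```java
// Minimal illustration of the deferred-supplier pattern: constructing the
// (stand-in) Matches object is postponed until get() is actually called.
import java.io.IOException;

public class DeferredSupplierDemo {
    @FunctionalInterface
    interface IOSupplier<T> {
        T get() throws IOException;   // like java.util.function.Supplier, plus IOException
    }

    // Placeholder for a potentially expensive-to-build result object.
    record Matches(String term, int position) {}

    public static void main(String[] args) throws IOException {
        // Nothing expensive happens here; the lambda only captures the recipe.
        IOSupplier<Matches> supplier = () -> {
            System.out.println("building matches lazily");
            return new Matches("lucene", 7);
        };

        System.out.println("supplier created");
        Matches m = supplier.get();   // the work happens here, on demand
        System.out.println("matched " + m.term() + " at " + m.position());
    }
}
```

The benefit of the generic interface is exactly what the issue proposes: any code that needs "a T that may fail with IOException when produced" can share one type instead of a specialised supplier per result class.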






[jira] [Commented] (SOLR-9882) ClassCastException: BasicResultContext cannot be cast to SolrDocumentList

2018-07-09 Thread dennis lucero (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536722#comment-16536722
 ] 

dennis lucero commented on SOLR-9882:
-

The issue is still happening on 7.3.1.
Can the patches be applied already?

> ClassCastException: BasicResultContext cannot be cast to SolrDocumentList
> -
>
> Key: SOLR-9882
> URL: https://issues.apache.org/jira/browse/SOLR-9882
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.3
>Reporter: Yago Riveiro
>Priority: Major
> Attachments: SOLR-9882-7987.patch, SOLR-9882.patch
>
>
> After talk with [~yo...@apache.org] in the mailing list I open this Jira 
> ticket
> I'm hitting this bug in Solr 6.3.0.
> null:java.lang.ClassCastException:
> org.apache.solr.response.BasicResultContext cannot be cast to
> org.apache.solr.common.SolrDocumentList
> at
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:315)
> at
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:153)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2213)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:303)
> at
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:254)
> at
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at
> org.eclipse.jetty.server.handler.StatisticsHandler.handle(StatisticsHandler.java:169)
> at
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> at java.lang.Thread.run(Thread.java:745)






[jira] [Commented] (SOLR-7864) timeAllowed causing ClassCastException

2018-07-09 Thread dennis lucero (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536721#comment-16536721
 ] 

dennis lucero commented on SOLR-7864:
-

The issue is still happening on 7.3.1.
Can the patches be applied already?

> timeAllowed causing ClassCastException
> --
>
> Key: SOLR-7864
> URL: https://issues.apache.org/jira/browse/SOLR-7864
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 5.2
>Reporter: Markus Jelsma
>Priority: Major
> Attachments: SOLR-7864.patch, SOLR-7864.patch, SOLR-7864_extra.patch
>
>
> If timeAllowed kicks in, following exception is thrown and user gets HTTP 500.
> {code}
> 65219 [qtp2096057945-19] ERROR org.apache.solr.servlet.SolrDispatchFilter  [  
>  search] – null:java.lang.ClassCastException: 
> org.apache.solr.response.ResultContext cannot be cast to 
> org.apache.solr.common.SolrDocumentList
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:275)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2064)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:654)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:450)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:227)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:196)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:215)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
> at org.eclipse.jetty.server.Server.handle(Server.java:497)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
> at 
> org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
> at java.lang.Thread.run(Thread.java:745)
> {code}






[jira] [Created] (LUCENE-8392) TieredMergePolicy has broken assumptions when maxMergeAtOnce is greater than segmentsPerTier

2018-07-09 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8392:


 Summary: TieredMergePolicy has broken assumptions when 
maxMergeAtOnce is greater than segmentsPerTier
 Key: LUCENE-8392
 URL: https://issues.apache.org/jira/browse/LUCENE-8392
 Project: Lucene - Core
  Issue Type: Test
Reporter: Adrien Grand
 Attachments: LUCENE-8392.patch

While working on LUCENE-8391 I had test failures when {{maxMergeAtOnce}} is 
larger than {{segmentsPerTier}}. For instance, when all segments are on the same 
tier, the maximum number of segments allowed in the index is 
{{segmentsPerTier}}, but because the tiered policy wants to find 
{{maxMergeAtOnce}} segments to merge, no segments will get merged if there are 
fewer than {{maxMergeAtOnce}} segments.






[jira] [Updated] (LUCENE-8392) TieredMergePolicy has broken assumptions when maxMergeAtOnce is greater than segmentsPerTier

2018-07-09 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8392:
-
Attachment: LUCENE-8392.patch

> TieredMergePolicy has broken assumptions when maxMergeAtOnce is greater than 
> segmentsPerTier
> 
>
> Key: LUCENE-8392
> URL: https://issues.apache.org/jira/browse/LUCENE-8392
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8392.patch
>
>
> While working on LUCENE-8391 I had test failures when {{maxMergeAtOnce}} is 
> larger than {{segmentsPerTier}}. For instance when all segments are on the 
> same tier, the maximum number of segments that is allowed in the index is 
> {{segmentsPerTier}} but because the tiered policy wants to find 
> {{maxMergeAtOnce}} segments to merge, no segments will get merged if there 
> are less than {{maxMergeAtOnce}}  segments.






[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-07-09 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536709#comment-16536709
 ] 

Cao Manh Dat commented on SOLR-12297:
-

Hi, I skimmed through 
https://github.com/markrmiller/starburst/commit/f1134ee6581ffd11aea6c1413d0f4375aa8406d9.patch
 (the patch is huge). A large part of the patch replaces HttpSolrClient with 
Http2SolrClient, which I think can be postponed, because:
* HttpSolrClient and Http2SolrClient will coexist; by replacing one with the 
other we can't be sure that HttpSolrClient will still work after future changes.
* It makes the patch really large and hard to review.

Therefore, in my opinion, we should focus on:
* Http2SolrClient.java and some minimal tests.
* JettySolrRunner support for booting up a server that accepts http2 connections.
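As context for the design goal quoted below ("async requests ... share majority of code"), here is a minimal, hypothetical sketch using only JDK types: the blocking call is just the async call, joined. This is invented illustration, not the actual Http2SolrClient API.

```java
import java.util.concurrent.CompletableFuture;

public class AsyncClientSketch {
    // Hypothetical illustration: one code path serves both async and blocking
    // callers. Names here are invented for the sketch, not Solr's API.
    static CompletableFuture<String> requestAsync(String query) {
        return CompletableFuture.supplyAsync(() -> "response for " + query);
    }

    // The blocking variant reuses the async path and simply waits on it.
    static String request(String query) {
        return requestAsync(query).join();
    }

    public static void main(String[] args) {
        System.out.println(request("q=*:*"));
    }
}
```

With this shape, supporting HTTP/1.1 and HTTP/2 transports only changes what backs {{requestAsync}}; callers see the same two entry points either way.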


> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[jira] [Comment Edited] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-07-09 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536709#comment-16536709
 ] 

Cao Manh Dat edited comment on SOLR-12297 at 7/9/18 9:00 AM:
-

Hi, I skimmed through 
https://github.com/markrmiller/starburst/commit/f1134ee6581ffd11aea6c1413d0f4375aa8406d9.patch
 (the patch is huge). A large part of the patch replaces HttpSolrClient with 
Http2SolrClient, which I think can be postponed, because:
* HttpSolrClient and Http2SolrClient will coexist; by replacing one with the 
other we can't be sure that HttpSolrClient will still work after future changes.
* It makes the patch really large and hard to review.

Therefore, in my opinion, for this issue, we should focus on:
* Http2SolrClient.java and some minimal tests.
* JettySolrRunner support for booting up a server that accepts http2 connections.


was (Author: caomanhdat):
Hi, I skimmed through 
https://github.com/markrmiller/starburst/commit/f1134ee6581ffd11aea6c1413d0f4375aa8406d9.patch
 (the patch is huge). A large part of the patch is replacing HttpSolrClient by 
Http2SolrClient which I think can be postponed. Because
* HttpSolrClient and Http2SolrClient will coexist, by replacing them we can't 
sure that HttpSolrClient will work after future changes
* It makes the patch really large and hard to review.

Therefore, in my opinion, we should focus on 
* Http2SolrClient.java and some minimal tests.
* JettySolrRunner support booting up a server that accept http2 connection


> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[jira] [Commented] (LUCENE-8390) Replace MatchesIteratorSupplier with IOSupplier

2018-07-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536715#comment-16536715
 ] 

Adrien Grand commented on LUCENE-8390:
--

+1

I'd remove it directly without a deprecation phase since this API is very 
expert and only needed if you write custom queries.
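For context, the deferred-supplier pattern under discussion can be sketched with plain JDK types. The nested {{IOSupplier}} here is an invented stand-in for Lucene's generic interface, not the actual Matches API:

```java
import java.io.IOException;
import java.io.UncheckedIOException;

public class SupplierSketch {
    // Stand-in for a generic supplier whose get() may throw IOException,
    // illustrating deferred construction: the expensive object (e.g. a
    // matches iterator) is only built when a caller actually asks for it.
    @FunctionalInterface
    interface IOSupplier<T> {
        T get() throws IOException;
    }

    // Materialize the deferred value on demand.
    static <T> T materialize(IOSupplier<T> supplier) {
        try {
            return supplier.get();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Nothing is computed until materialize() runs the lambda.
        IOSupplier<String> deferred = () -> "matches for doc 42";
        System.out.println(materialize(deferred));
    }
}
```

Because the interface is purely structural, a specialised MatchesIteratorSupplier adds nothing over the generic form, which is the point of the issue.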

> Replace MatchesIteratorSupplier with IOSupplier
> ---
>
> Key: LUCENE-8390
> URL: https://issues.apache.org/jira/browse/LUCENE-8390
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8390.patch
>
>
> Matches objects are constructed using a deferred supplier pattern. This is 
> currently done using a specialised MatchesIteratorSupplier interface, but 
> this can be deprecated/removed and replaced with the generic IOSupplier in 
> the utils package.






[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1580 - Failure

2018-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1580/

4 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([ED62F88457D8A0A3]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.FullSolrCloudDistribCmdsTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([ED62F88457D8A0A3]:0)


FAILED:  
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.deleteCollectionWithDownNodes

Error Message:


Stack Trace:
java.lang.NullPointerException
at 
__randomizedtesting.SeedInfo.seed([ED62F88457D8A0A3:73579C7C71FBEC2B]:0)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:278)
at 
org.apache.solr.cloud.TestDeleteCollectionOnDownNodes.deleteCollectionWithDownNodes(TestDeleteCollectionOnDownNodes.java:47)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (LUCENE-8392) TieredMergePolicy has broken assumptions when maxMergeAtOnce is greater than segmentsPerTier

2018-07-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536789#comment-16536789
 ] 

Adrien Grand commented on LUCENE-8392:
--

Here is a patch. It computes {{mergeFactor=min(segsPerTier, maxMergeAtOnce)}} 
and uses it instead of maxMergeAtOnce.
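The clamped merge factor can be sketched as follows. This is a hypothetical illustration of the computation described above, not the actual TieredMergePolicy code:

```java
public class MergeFactorSketch {
    // Names mirror the TieredMergePolicy settings, but this is only a sketch.
    static int mergeFactor(int segsPerTier, int maxMergeAtOnce) {
        // Never require more segments per merge than a tier may hold;
        // otherwise a full tier of segsPerTier segments would never merge
        // when maxMergeAtOnce > segsPerTier.
        return Math.min(segsPerTier, maxMergeAtOnce);
    }

    public static void main(String[] args) {
        // With segsPerTier=10 and maxMergeAtOnce=30, merging can trigger
        // once a tier reaches 10 segments instead of waiting for 30.
        System.out.println(mergeFactor(10, 30));
    }
}
```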

> TieredMergePolicy has broken assumptions when maxMergeAtOnce is greater than 
> segmentsPerTier
> 
>
> Key: LUCENE-8392
> URL: https://issues.apache.org/jira/browse/LUCENE-8392
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8392.patch
>
>
> While working on LUCENE-8391 I had test failures when {{maxMergeAtOnce}} is 
> larger than {{segmentsPerTier}}. For instance when all segments are on the 
> same tier, the maximum number of segments that is allowed in the index is 
> {{segmentsPerTier}} but because the tiered policy wants to find 
> {{maxMergeAtOnce}} segments to merge, no segments will get merged if there 
> are less than {{maxMergeAtOnce}}  segments.






[jira] [Updated] (LUCENE-8391) Better tests for merge policies

2018-07-09 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8391:
-
Attachment: LUCENE-8391.patch

> Better tests for merge policies
> ---
>
> Key: LUCENE-8391
> URL: https://issues.apache.org/jira/browse/LUCENE-8391
> Project: Lucene - Core
>  Issue Type: Test
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8391.patch
>
>
> Testing merge policies was hard in the past because it could only be done by 
> setting up an IndexWriter, adding documents and making sure that merges 
> behave as expected. The fact that MergePolicy doesn't need an IndexWriter 
> anymore (LUCENE-8330) should make things easier since we should now be able 
> to simulate merges without having to create an index.






[jira] [Created] (LUCENE-8391) Better tests for merge policies

2018-07-09 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8391:


 Summary: Better tests for merge policies
 Key: LUCENE-8391
 URL: https://issues.apache.org/jira/browse/LUCENE-8391
 Project: Lucene - Core
  Issue Type: Test
Reporter: Adrien Grand


Testing merge policies was hard in the past because it could only be done by 
setting up an IndexWriter, adding documents and making sure that merges behave 
as expected. The fact that MergePolicy doesn't need an IndexWriter anymore 
(LUCENE-8330) should make things easier since we should now be able to simulate 
merges without having to create an index.






[jira] [Commented] (SOLR-12458) ADLS support for SOLR

2018-07-09 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536577#comment-16536577
 ] 

Lucene/Solr QA commented on SOLR-12458:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 6 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
56s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check licenses {color} | {color:green} 
 2m  7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | 
{color:green}  1m 47s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}147m 26s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.sim.TestTriggerIntegration |
|   | solr.update.SoftAutoCommitTest |
|   | solr.cloud.api.collections.ShardSplitTest |
|   | solr.cloud.ForceLeaderTest |
|   | solr.cloud.BasicDistributedZkTest |
|   | solr.cloud.TestPullReplica |
|   | solr.handler.component.InfixSuggestersTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12930596/SOLR-12458.patch |
| Optional Tests |  checklicenses  validatesourcepatterns  ratsources  compile  
javac  unit  checkforbiddenapis  validaterefguide  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 1197176 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/141/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/141/testReport/ |
| modules | C: lucene solr solr/core solr/solr-ref-guide U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/141/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> ADLS support for SOLR
> -
>
> Key: SOLR-12458
> URL: https://issues.apache.org/jira/browse/SOLR-12458
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Hadoop Integration
>Affects Versions: master (8.0)
>Reporter: Mike Wingert
>Priority: Minor
>  Labels: features
> Fix For: master (8.0)
>
> Attachments: SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch, 
> SOLR-12458.patch, SOLR-12458.patch, SOLR-12458.patch
>
>
> This is to track ADLS support for SOLR.
> ADLS is a HDFS like API available in Microsoft Azure.   
>  






[jira] [Commented] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-09 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536567#comment-16536567
 ] 

Erick Erickson commented on SOLR-12008:
---

Well, I'm getting there. I've set up a Windows VM and started debugging, and 
the first thing I noticed was that the log files aren't going where I expect 
when running examples: e.g. {{ bin\solr start -e techproducts }} puts the logs 
under {{ server }} rather than {{ example }} as they do on *nix.

I see in solr.cmd that SOLR_LOGS_DIR_QUOTED is set to "%SOLR_LOGS_DIR%", but 
SOLR_LOGS_DIR is changed later, so this appears to be a bona fide bug.

WDYT about fixing this while I'm in here? It'll be a change for Windows users, 
but I consider it a bug currently. I'll make the change unless there are 
objections.

Meanwhile, on Windows nothing except start/stop seems to work without throwing 
the error about not being able to find the console.

> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example, users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.






[jira] [Commented] (LUCENE-8229) Add a method to Weight to retrieve matches for a single document

2018-07-09 Thread Alan Woodward (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536633#comment-16536633
 ] 

Alan Woodward commented on LUCENE-8229:
---

Simon added IOSupplier rather than IOConsumer, which I think would already work 
for this - I'll open an issue.

> Add a method to Weight to retrieve matches for a single document
> 
>
> Key: LUCENE-8229
> URL: https://issues.apache.org/jira/browse/LUCENE-8229
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.4
>
> Attachments: LUCENE-8229.patch, LUCENE-8229_small_improvements.patch
>
>  Time Spent: 5h 20m
>  Remaining Estimate: 0h
>
> The ability to find out exactly what a query has matched on is a fairly 
> frequent feature request, and would also make highlighters much easier to 
> implement.  There have been a few attempts at doing this, including adding 
> positions to Scorers, or re-writing queries as Spans, but these all either 
> compromise general performance or involve up-front knowledge of all queries.
> Instead, I propose adding a method to Weight that exposes an iterator over 
> matches in a particular document and field.  It should be used in a similar 
> manner to explain() - ie, just for TopDocs, not as part of the scoring loop, 
> which relieves some of the pressure on performance.






[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-07-09 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r201059845
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/TestNestedUpdateProcessor.java ---
@@ -0,0 +1,171 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.update;
+
+import java.util.List;
+
+import org.apache.solr.SolrTestCaseJ4;
+import org.apache.solr.common.SolrException;
+import org.apache.solr.common.SolrInputDocument;
+import org.apache.solr.schema.IndexSchema;
+import org.apache.solr.update.processor.NestedUpdateProcessorFactory;
+import org.apache.solr.update.processor.UpdateRequestProcessor;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Rule;
+import org.junit.Test;
+import org.junit.rules.ExpectedException;
+
+public class TestNestedUpdateProcessor extends SolrTestCaseJ4 {
+
+  private static final char PATH_SEP_CHAR = '/';
+  private static final String[] childrenIds = { "2", "3" };
+  private static final String grandChildId = "4";
+  private static final String jDoc = "{\n" +
+  "\"add\": {\n" +
+  "\"doc\": {\n" +
+  "\"id\": \"1\",\n" +
+  "\"children\": [\n" +
+  "{\n" +
+  "\"id\": \"2\",\n" +
+  "\"foo_s\": \"Yaz\"\n" +
+  "\"grandChild\": \n" +
+  "  {\n" +
+  " \"id\": \""+ grandChildId + "\",\n" +
+  " \"foo_s\": \"Jazz\"\n" +
+  "  },\n" +
+  "},\n" +
+  "{\n" +
+  "\"id\": \"3\",\n" +
+  "\"foo_s\": \"Bar\"\n" +
+  "}\n" +
+  "]\n" +
+  "}\n" +
+  "}\n" +
+  "}";
+
+  private static final String noIdChildren = "{\n" +
+  "\"add\": {\n" +
+  "\"doc\": {\n" +
+  "\"id\": \"1\",\n" +
+  "\"children\": [\n" +
+  "{\n" +
+  "\"foo_s\": \"Yaz\"\n" +
+  "\"grandChild\": \n" +
+  "  {\n" +
+  " \"foo_s\": \"Jazz\"\n" +
+  "  },\n" +
+  "},\n" +
+  "{\n" +
+  "\"foo_s\": \"Bar\"\n" +
+  "}\n" +
+  "]\n" +
+  "}\n" +
+  "}\n" +
+  "}";
+
+  private static final String errDoc = "{\n" +
+  "\"add\": {\n" +
+  "\"doc\": {\n" +
+  "\"id\": \"1\",\n" +
+  "\"children" + PATH_SEP_CHAR + "a\": [\n" +
+  "{\n" +
+  "\"id\": \"2\",\n" +
+  "\"foo_s\": \"Yaz\"\n" +
+  "\"grandChild\": \n" +
+  "  {\n" +
+  " \"id\": \""+ grandChildId + "\",\n" +
+  " \"foo_s\": \"Jazz\"\n" +
+  "  },\n" +
+  "},\n" +
+  "{\n" +
+  "\"id\": \"3\",\n" +
+  "\"foo_s\": \"Bar\"\n" +
+  "}\n" +
+  "]\n" +
+  "}\n" +
+  "}\n" +
+  "}";
+
+  @Rule
+  public ExpectedException thrown = ExpectedException.none();
+
+  @BeforeClass
+  public static void beforeClass() throws Exception {
+initCore("solrconfig-update-processor-chains.xml", "schema15.xml");
+  }
+
+  @Before
+  public void 

[jira] [Commented] (SOLR-10648) Do not expose STOP.PORT and STOP.KEY in sysProps

2018-07-09 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537172#comment-16537172
 ] 

Andrzej Bialecki  commented on SOLR-10648:
--

These properties are also exposed via `/admin/metrics?group=jvm`. A different 
mechanism is used there for hiding sensitive properties, namely a section in 
{{solr.xml:/solr/metrics/hiddenSysProps/str}}.

These two mechanisms should at least be made aware of each other, e.g. the 
metrics could both filter out "hidden" sysprops and redact those listed in 
{{RedactionUtils}}.
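A hypothetical sketch of unifying the two mechanisms: one pass that drops the {{hiddenSysProps}} entries entirely and redacts anything the redaction list flags. The names and the "--REDACTED--" placeholder are invented for illustration; this is not Solr's actual implementation.

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

public class HiddenPropsSketch {
    // Combine both filtering mechanisms in one place: "hidden" keys are
    // removed from the output, "redacted" keys survive with a masked value.
    static Map<String, String> filter(Map<String, String> sysProps,
                                      Set<String> hidden, Set<String> redacted) {
        Map<String, String> out = new TreeMap<>();
        for (Map.Entry<String, String> e : sysProps.entrySet()) {
            if (hidden.contains(e.getKey())) {
                continue; // drop entirely, as hiddenSysProps does
            }
            out.put(e.getKey(),
                    redacted.contains(e.getKey()) ? "--REDACTED--" : e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> props = new TreeMap<>();
        props.put("STOP.KEY", "solrrocks");
        props.put("java.version", "1.8.0_172");
        System.out.println(filter(props, Set.of(), Set.of("STOP.KEY")));
    }
}
```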

> Do not expose STOP.PORT and STOP.KEY in sysProps
> 
>
> Key: SOLR-10648
> URL: https://issues.apache.org/jira/browse/SOLR-10648
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jan Høydahl
>Priority: Major
>  Labels: security
>
> Currently anyone with HTTP access to Solr can see the Admin UI and all the 
> system properties. In there you find
> {noformat}
> -DSTOP.KEY=solrrocks
> -DSTOP.PORT=7983
> {noformat}
> This means that anyone with this info can shut down Solr by hitting that port 
> with the key (if it is not firewalled).
> I think the simple solution is to add STOP.PORT and STOP.KEY from 
> {{$SOLR_START_OPTS}} to the {{$SOLR_JETTY_CONFIG[@]}} variable. It will still 
> be visible on the cmdline but not over HTTP.






[jira] [Created] (LUCENE-8393) TieredMergePolicy needs to take into account the maximum segment size when computing the allowed number of segments

2018-07-09 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8393:


 Summary: TieredMergePolicy needs to take into account the maximum 
segment size when computing the allowed number of segments
 Key: LUCENE-8393
 URL: https://issues.apache.org/jira/browse/LUCENE-8393
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Adrien Grand


This is a pre-existing issue that was made more likely by LUCENE-7976, given 
that segments that are larger than the max segment size divided by 2 now are 
potential candidates for merging: when computing the allowed number of 
segments, TieredMergePolicy multiplies the level size by {{maxMergeAtOnce}} 
until it finds a level that isn't full. It currently assumes that the level 
size is always less than the maximum segment size, which might not always be 
true. This might lead to underestimating the allowed number of segments and, 
in turn, to excessive merging.
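A self-contained sketch (not Lucene's actual code) of the budget computation with the proposed cap applied: once the level size reaches the maximum merged segment size, stop multiplying by {{maxMergeAtOnce}} and charge all remaining bytes to that level. Parameter names mirror the policy's settings, but the method and numbers are illustrative.

```java
// Toy model of TieredMergePolicy's "allowed segment count" budget, with
// the cap described in this issue. Not Lucene's actual implementation.
class AllowedSegments {

  static long allowedSegCount(long totalBytes, long floorBytes,
                              int segsPerTier, int maxMergeAtOnce,
                              long maxMergedBytes) {
    long levelSize = Math.min(floorBytes, maxMergedBytes);
    long bytesLeft = totalBytes;
    long allowed = 0;
    while (true) {
      double segCountLevel = bytesLeft / (double) levelSize;
      // The fix: once a level reaches the max merged segment size, stop
      // growing the level and charge all remaining bytes to it. Without
      // the second condition the loop keeps multiplying past the cap and
      // under-counts the segments the index is allowed to have.
      if (segCountLevel < segsPerTier || levelSize >= maxMergedBytes) {
        allowed += (long) Math.ceil(segCountLevel);
        break;
      }
      allowed += segsPerTier;
      bytesLeft -= (long) segsPerTier * levelSize;
      levelSize = Math.min(maxMergedBytes, levelSize * maxMergeAtOnce);
    }
    return allowed;
  }

  public static void main(String[] args) {
    // 100 GB index, 2 MB floor, 10 segs per tier, merge factor 10,
    // 5 GB max merged segment.
    System.out.println(allowedSegCount(100L << 30, 2L << 20, 10, 10, 5L << 30));
  }
}
```

With the cap, segments already at the maximum size are budgeted individually instead of being folded into ever-larger hypothetical levels, so the policy stops demanding merges it can never perform.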






[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 93 - Still Unstable

2018-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/93/

11 tests failed.
FAILED:  org.apache.solr.cloud.BasicDistributedZkTest.test

Error Message:
distrib-dup-test-chain-explicit: doc#3 has wrong value for regex_dup_A_s 
expected: but was:

Stack Trace:
java.lang.AssertionError: distrib-dup-test-chain-explicit: doc#3 has wrong 
value for regex_dup_A_s expected: but was:
at 
__randomizedtesting.SeedInfo.seed([460134BEE39B8E15:CE550B644D67E3ED]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.BasicDistributedZkTest.testUpdateProcessorsRunOnlyOnce(BasicDistributedZkTest.java:704)
at 
org.apache.solr.cloud.BasicDistributedZkTest.test(BasicDistributedZkTest.java:381)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-07-09 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r200996646
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/NestedUpdateProcessorFactory.java
 ---
@@ -72,8 +72,8 @@ public void processAdd(AddUpdateCommand cmd) throws 
IOException {
   }
 
   private void processDocChildren(SolrInputDocument doc, String fullPath) {
-int childNum = 0;
 for(SolrInputField field: doc.values()) {
+  int childNum = 0;
--- End diff --

I'll add a test with another key holding childDocs to ensure this bug does 
not resurface
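The effect of the counter placement in the diff above can be shown with a toy model; with the counter declared outside the field loop (the pre-patch placement), children under a second key continue the first key's numbering instead of restarting at 0. Names and types here are illustrative, not Solr's.

```java
import java.util.*;

// Toy model of the childNum counter-placement bug from the diff above.
class ChildNumDemo {

  static List<String> label(Map<String, List<String>> doc, boolean resetPerField) {
    List<String> paths = new ArrayList<>();
    int childNum = 0; // pre-patch placement: shared across all fields
    for (Map.Entry<String, List<String>> field : doc.entrySet()) {
      if (resetPerField) {
        childNum = 0; // patched placement: one counter per field
      }
      for (String child : field.getValue()) {
        paths.add(field.getKey() + "#" + childNum++);
      }
    }
    return paths;
  }

  public static void main(String[] args) {
    Map<String, List<String>> doc = new LinkedHashMap<>();
    doc.put("children", Arrays.asList("a", "b"));
    doc.put("comments", Arrays.asList("c"));
    System.out.println(label(doc, false)); // buggy: [children#0, children#1, comments#2]
    System.out.println(label(doc, true));  // fixed: [children#0, children#1, comments#0]
  }
}
```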


---




[jira] [Resolved] (LUCENE-8390) Replace MatchesIteratorSupplier with IOSupplier

2018-07-09 Thread Alan Woodward (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-8390.
---
   Resolution: Fixed
Fix Version/s: 7.5

Thanks for the review Adrien.  I did as you suggested and skipped the 
deprecation entirely.

> Replace MatchesIteratorSupplier with IOSupplier
> ---
>
> Key: LUCENE-8390
> URL: https://issues.apache.org/jira/browse/LUCENE-8390
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.5
>
> Attachments: LUCENE-8390.patch
>
>
> Matches objects are constructed using a deferred supplier pattern.  This is 
> currently done using a specialised MatchesIteratorSupplier interface, but 
> this can be deprecated/removed and replaced  with the generic IOSupplier in 
> the utils package



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 4724 - Unstable!

2018-07-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4724/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

15 tests failed.
FAILED:  org.apache.solr.cloud.TestPullReplica.testRemoveAllWriterReplicas

Error Message:
Replica core_node4 not up to date after 10 seconds expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Replica core_node4 not up to date after 10 seconds 
expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([9CD338AB5C833E40:FDDA490487DD7E9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:542)
at 
org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:490)
at 
org.apache.solr.cloud.TestPullReplica.testRemoveAllWriterReplicas(TestPullReplica.java:303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-8393) TieredMergePolicy needs to take into account the maximum segment size when computing the allowed number of segments

2018-07-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536899#comment-16536899
 ] 

Adrien Grand commented on LUCENE-8393:
--

Here is a patch.

> TieredMergePolicy needs to take into account the maximum segment size when 
> computing the allowed number of segments
> ---
>
> Key: LUCENE-8393
> URL: https://issues.apache.org/jira/browse/LUCENE-8393
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8393.patch
>
>
> This is a pre-existing issue that was made more likely by LUCENE-7976, given 
> that segments that are larger than the max segment size divided by 2 now are 
> potential candidates for merging: when computing the allowed number of 
> segments, TieredMergePolicy multiplies the level size by {{maxMergeAtOnce}} 
> until it finds a level that isn't full. It currently assumes that the level 
> size is always less than the maximum segment size, which might not always be 
> true. This might lead to underestimating the allowed number of segments and, 
> in turn, to excessive merging.






[jira] [Updated] (LUCENE-8393) TieredMergePolicy needs to take into account the maximum segment size when computing the allowed number of segments

2018-07-09 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8393:
-
Attachment: LUCENE-8393.patch

> TieredMergePolicy needs to take into account the maximum segment size when 
> computing the allowed number of segments
> ---
>
> Key: LUCENE-8393
> URL: https://issues.apache.org/jira/browse/LUCENE-8393
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8393.patch
>
>
> This is a pre-existing issue that was made more likely by LUCENE-7976, given 
> that segments that are larger than the max segment size divided by 2 now are 
> potential candidates for merging: when computing the allowed number of 
> segments, TieredMergePolicy multiplies the level size by {{maxMergeAtOnce}} 
> until it finds a level that isn't full. It currently assumes that the level 
> size is always less than the maximum segment size, which might not always be 
> true. This might lead to underestimating the allowed number of segments and, 
> in turn, to excessive merging.






Re: lucene-solr:master: SOLR-12427: Correct status for invalid 'start', 'rows'

2018-07-09 Thread Jason Gerlowski
I authored the recent change you're commenting on.  I agree with your
points; my only defense is consistency.  Several other nearby
assertions used the older try-catch based setup.

I'll fix the spot you objected to, and file a JIRA around cleaning
this up more broadly.  Having this elsewhere in the code encourages it
to creep in more.

Best,

Jason
On Fri, Jul 6, 2018 at 12:58 PM Chris Hostetter
 wrote:
>
>
> these tests should really be using...
>
>   SolrException e = expectThrows(() -> {...});
>
> ...and ideally we should be making assertions about the exception message
> as well (i.e.: does it say what we expect it to say? does it give the user
> the context of the failure -- i.e.: contain the "non_numeric_value" so
> they know what they did wrong?)
>
>
> :private void validateCommonQueryParameters() throws Exception {
> :  ignoreException("parameter cannot be negative");
> : +
> : +try {
> : +  SolrQuery query = new SolrQuery();
> : +  query.setParam("start", "non_numeric_value").setQuery("*");
> : +  QueryResponse resp = query(query);
> : +  fail("Expected the last query to fail, but got response: " + resp);
> : +} catch (SolrException e) {
> : +  assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
> : +}
> : +
> :  try {
> :SolrQuery query = new SolrQuery();
> :query.setStart(-1).setQuery("*");
> : @@ -1228,6 +1238,15 @@ public class TestDistributedSearch extends 
> BaseDistributedSearchTestCase {
> :  } catch (SolrException e) {
> :assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
> :  }
> : +
> : +try {
> : +  SolrQuery query = new SolrQuery();
> : +  query.setParam("rows", "non_numeric_value").setQuery("*");
> : +  QueryResponse resp = query(query);
> : +  fail("Expected the last query to fail, but got response: " + resp);
> : +} catch (SolrException e) {
> : +  assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
> : +}
> :  resetExceptionIgnores();
> :}
> :  }
> :
> :
>
> -Hoss
> http://www.lucidworks.com/
>
>
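A self-contained sketch of the expectThrows pattern recommended above; this is a stand-in for LuceneTestCase.expectThrows, not the real implementation.

```java
// Minimal stand-in for the expectThrows helper, showing how the quoted
// try/catch/fail blocks collapse to one call whose return value can then
// be inspected directly.
class ExpectThrowsDemo {

  interface ThrowingRunnable {
    void run() throws Throwable;
  }

  // Runs the body, asserts that it throws the expected type, and returns
  // the caught exception so the caller can assert on its message.
  static <T extends Throwable> T expectThrows(Class<T> expected, ThrowingRunnable body) {
    try {
      body.run();
    } catch (Throwable t) {
      if (expected.isInstance(t)) {
        return expected.cast(t);
      }
      throw new AssertionError("Unexpected exception type: " + t, t);
    }
    throw new AssertionError("Expected " + expected.getName() + " was not thrown");
  }

  public static void main(String[] args) {
    NumberFormatException e = expectThrows(NumberFormatException.class,
        () -> Integer.parseInt("non_numeric_value"));
    // The message assertion Hoss asks for: does it name the bad value?
    if (!e.getMessage().contains("non_numeric_value")) {
      throw new AssertionError("message should name the bad value: " + e.getMessage());
    }
  }
}
```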




[jira] [Commented] (LUCENE-8390) Replace MatchesIteratorSupplier with IOSupplier

2018-07-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536838#comment-16536838
 ] 

ASF subversion and git services commented on LUCENE-8390:
-

Commit 963cceebffaeda880e611377e5818982b9d0e7ab in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=963ccee ]

LUCENE-8390: Replace MatchesIteratorSupplier with IOSupplier


> Replace MatchesIteratorSupplier with IOSupplier
> ---
>
> Key: LUCENE-8390
> URL: https://issues.apache.org/jira/browse/LUCENE-8390
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8390.patch
>
>
> Matches objects are constructed using a deferred supplier pattern.  This is 
> currently done using a specialised MatchesIteratorSupplier interface, but 
> this can be deprecated/removed and replaced  with the generic IOSupplier in 
> the utils package






[jira] [Commented] (LUCENE-8390) Replace MatchesIteratorSupplier with IOSupplier

2018-07-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536837#comment-16536837
 ] 

ASF subversion and git services commented on LUCENE-8390:
-

Commit 80eb5da7393dd25c8cb566194eb9158de212bfb2 in lucene-solr's branch 
refs/heads/branch_7x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=80eb5da ]

LUCENE-8390: Replace MatchesIteratorSupplier with IOSupplier


> Replace MatchesIteratorSupplier with IOSupplier
> ---
>
> Key: LUCENE-8390
> URL: https://issues.apache.org/jira/browse/LUCENE-8390
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Major
> Attachments: LUCENE-8390.patch
>
>
> Matches objects are constructed using a deferred supplier pattern.  This is 
> currently done using a specialised MatchesIteratorSupplier interface, but 
> this can be deprecated/removed and replaced  with the generic IOSupplier in 
> the utils package






Re: [jira] [Reopened] (LUCENE-8389) Could not limit Lucene's memory consumption

2018-07-09 Thread Michael Sokolov
Can you run a mirror instance and swap traffic, performing reindexing on an
online system, and then bring it online when complete?

On Sun, Jul 8, 2018, 7:46 PM changchun huang (JIRA)  wrote:

>
>  [
> https://issues.apache.org/jira/browse/LUCENE-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
> ]
>
> changchun huang reopened LUCENE-8389:
> -
>
> Thanks for the quick reply.
>
> To be clear, I am not talking about the Java heap.
>
> When we trigger a background re-index from Jira, we can see that physical
> memory is reserved by Lucene during the re-indexing. With a 16 GB heap
> allocated on a server with 64 GB of physical memory, all of the physical
> memory was reserved during the re-index (Jira background re-index, single
> thread).
>
> The problem is that we cannot set a memory limit for Lucene alone. Lucene
> is not a standalone application; it is embedded in a Java application, so
> on a heavily loaded application server that cares about performance and
> downtime, a re-index with only a single thread still reserves all of the
> remaining free physical memory, and this conflicts with the Java
> application even when we configure the same Xms and Xmx.
>
> So I am asking for help, such as a workaround or suggestion. We run Java
> 1.8 with G1GC; there is no OOME, but during a re-index the rate of "GC
> pause (G1 Evacuation Pause) (young) (to-space exhausted)" events
> increases a lot, and during that time we have performance issues.
>
> > Could not limit Lucene's memory consumption
> > ---
> >
> > Key: LUCENE-8389
> > URL: https://issues.apache.org/jira/browse/LUCENE-8389
> > Project: Lucene - Core
> >  Issue Type: Bug
> >  Components: core/index
> >Affects Versions: 3.3
> > Environment: |Java Version|1.8.0_102|
> > |Operating System|Linux 3.12.48-52.27-default|
> > |Application Server Container|Apache Tomcat/8.5.6|
> > |Database JNDI address|mysql
> jdbc:mysql://mo-15e744225:3306/jira?useUnicode=true=UTF8=default_storage_engine=InnoDB|
> > |Database version|5.6.27|
> > |Database driver|MySQL Connector Java mysql-connector-java-5.1.34 (
> Revision: jess.bal...@oracle.com-20141014163213-wqbwpf1ok2kvo1om )|
> > |Version|7.6.1|
> >Reporter: changchun huang
> >Assignee: Uwe Schindler
> >Priority: Major
> >
> > We are running Jira 7.6.1 with Lucene 3.3 on SLES 12 SP1.
> > We configured a 16 GB Jira heap on a 64 GB server.
> > However, each time we run a background re-index, memory is used up by
> Lucene and we cannot limit Lucene's memory consumption.
> > This will cause an overall performance issue on a system under heavy
> load.
> > We have around 500 concurrent users and 400K issues.
> > Could you please advise whether there is a workaround or fix for
> this?
> > Thanks.
> >
> > BTW: I did check a lot and found a blog introducing the new behavior of
> Lucene 3.3
> > [http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
> ]
> >
>
>
>
>
>


[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 2290 - Unstable!

2018-07-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2290/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseSerialGC

11 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.InfixSuggestersTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([7690C0C3330FAE4]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.component.InfixSuggestersTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([7690C0C3330FAE4]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.TestSolrProperties

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([B0312F270CC07F8C]:0)


FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.TestSolrProperties

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([B0312F270CC07F8C]:0)


FAILED:  org.apache.solr.client.solrj.embedded.TestSolrProperties.testProperties

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([B0312F270CC07F8C]:0)


FAILED:  org.apache.solr.client.solrj.embedded.TestSolrProperties.testProperties

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([B0312F270CC07F8C]:0)


FAILED:  
org.apache.solr.cloud.LeaderTragicEventTest.testOtherReplicasAreNotActive

Error Message:
Jetty Connector is not open: -2

Stack Trace:
java.lang.IllegalStateException: Jetty Connector is not open: -2
at 
__randomizedtesting.SeedInfo.seed([7690C0C3330FAE4:82DD207B0FCF437C]:0)
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.getBaseUrl(JettySolrRunner.java:499)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.getReplicaJetty(MiniSolrCloudCluster.java:539)
at 
org.apache.solr.cloud.LeaderTragicEventTest.corruptLeader(LeaderTragicEventTest.java:100)
at 
org.apache.solr.cloud.LeaderTragicEventTest.testOtherReplicasAreNotActive(LeaderTragicEventTest.java:150)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 

[jira] [Commented] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-09 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536989#comment-16536989
 ] 

Erick Erickson commented on SOLR-12008:
---

That was bad. I got enthusiastic about changing 

{{ file:%DEFAULT_SERVER_DIR%\scripts\cloud-scripts\log4j2.xml }}
to
{{ file:///%SOLR_SERVER_DIR%\resources\log4j2-console.xml" }}

when it should have been:
 {{ file:///%DEFAULT_SERVER_DIR%\resources\log4j2-console.xml }}

Doh...

I'm about to run the full tests on Windows, but that'll take a loong time 
so I intend to push this later today or tomorrow.

> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch, 
> SOLR-12008.patch
>
>
> As part of SOLR-11934 I started looking at log4j.properties files. Waaay back 
> in 2015, the %C in "/solr/server/resources/log4j.properties" was changed to 
> use %c, but the file in "solr/example/resources/log4j.properties" was not 
> changed. That got me to looking around and there are a bunch of 
> log4j.properties files:
> ./solr/core/src/test-files/log4j.properties
> ./solr/example/resources/log4j.properties
> ./solr/solrj/src/test-files/log4j.properties
> ./solr/server/resources/log4j.properties
> ./solr/server/scripts/cloud-scripts/log4j.properties
> ./solr/contrib/dataimporthandler/src/test-files/log4j.properties
> ./solr/contrib/clustering/src/test-files/log4j.properties
> ./solr/contrib/ltr/src/test-files/log4j.properties
> ./solr/test-framework/src/test-files/log4j.properties
> Why do we have so many? After the log4j2 ticket gets checked in (SOLR-7887) I 
> propose the logging configuration files get consolidated. The question is 
> "how far"? 
> I at least want to get rid of the one in solr/example, users should use the 
> one in server/resources. Having to maintain these two separately is asking 
> for trouble.
> [~markrmil...@gmail.com] Do you have any wisdom on the properties file in 
> server/scripts/cloud-scripts?
> Anyone else who has a clue about why the other properties files were created, 
> especially the ones in contrib?
> And what about all the ones in various test-files directories? People didn't 
> create them for no reason, and I don't want to rediscover that it's a real 
> pain to try to re-use the one in server/resources for instance.






[jira] [Comment Edited] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-09 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536989#comment-16536989
 ] 

Erick Erickson edited comment on SOLR-12008 at 7/9/18 2:43 PM:
---

That was bad. I got enthusiastic about changing 

{code} file:%DEFAULT_SERVER_DIR%\scripts\cloud-scripts\log4j2.xml {code}
to
{code} file:///%SOLR_SERVER_DIR%\resources\log4j2-console.xml {code}

when it should have been:
{code} file:///%DEFAULT_SERVER_DIR%\resources\log4j2-console.xml {code}

Doh...

I'm about to run the full tests on Windows, but that'll take a loong time 
so I intend to push this later today or tomorrow.


was (Author: erickerickson):
That was bad. I got enthusiastic about changing 

{{ file:%DEFAULT_SERVER_DIR%\scripts\cloud-scripts\log4j2.xml }}
to
{{ file:///%SOLR_SERVER_DIR%\resources\log4j2-console.xml" }}

when it should have been:
 {{ file:///%DEFAULT_SERVER_DIR%\resources\log4j2-console.xml }}

Doh...

I'm about to run the full tests on Windows, but that'll take a loong time 
so I intend to push this later today or tomorrow.

> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch, 
> SOLR-12008.patch
>
>






Re: lucene-solr:master: SOLR-12427: Correct status for invalid 'start', 'rows'

2018-07-09 Thread Erick Erickson
bq. " Having this elsewhere in the code encourages it to creep in more."

+1. While I hesitate to make lots of unnecessary changes, the
other side of that argument is that when we see code it's easy to think
it's the norm rather than an outdated pattern.

On Mon, Jul 9, 2018 at 4:52 AM, Jason Gerlowski  wrote:
> I authored the recent change you're commenting on.  I agree with your
> points; my only defense is consistency.  Several other nearby
> assertions used the older try-catch based setup.
>
> I'll fix the spot you objected to, and file a JIRA around cleaning
> this up more broadly.  Having this elsewhere in the code encourages it
> to creep in more.
>
> Best,
>
> Jason
> On Fri, Jul 6, 2018 at 12:58 PM Chris Hostetter
>  wrote:
>>
>>
>> these tests should really be using...
>>
>>   SolrException e = expectThrows(() -> {...});
>>
>> ...and ideally we should be making assertions about the exception message
>> as well (ie: does it say what we expect it to say? does it give the user
>> the context of the failure -- ie: containing the "non_numeric_value" so
>> they know what they did wrong?
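The suggested pattern can be sketched in a self-contained form. This is a hedged approximation: the real expectThrows helper lives in Lucene's test framework, and validateStart below is a hypothetical stand-in for the request that should fail.

```java
// Minimal sketch of the expectThrows pattern; not the Lucene test-framework
// implementation. validateStart is a hypothetical stand-in for a query
// handler rejecting a non-numeric 'start' parameter.
public class ExpectThrowsDemo {
    static <T extends Throwable> T expectThrows(Class<T> type, Runnable r) {
        try {
            r.run();
        } catch (Throwable t) {
            if (type.isInstance(t)) return type.cast(t);
            throw new AssertionError("unexpected exception type: " + t, t);
        }
        throw new AssertionError("expected " + type.getSimpleName() + " but none was thrown");
    }

    static void validateStart(String start) {
        if (!start.matches("\\d+")) {
            throw new IllegalArgumentException("'start' must be numeric, got: " + start);
        }
    }

    public static void main(String[] args) {
        IllegalArgumentException e = expectThrows(IllegalArgumentException.class,
                () -> validateStart("non_numeric_value"));
        // asserting on the message verifies the user is told what they did wrong
        if (!e.getMessage().contains("non_numeric_value")) throw new AssertionError();
        System.out.println("caught: " + e.getMessage());
    }
}
```

Compared with the try/catch/fail blocks in the diff below, the exception object is returned directly, so follow-up assertions on the code and message are natural.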
>>
>>
>> :private void validateCommonQueryParameters() throws Exception {
>> :  ignoreException("parameter cannot be negative");
>> : +
>> : +try {
>> : +  SolrQuery query = new SolrQuery();
>> : +  query.setParam("start", "non_numeric_value").setQuery("*");
>> : +  QueryResponse resp = query(query);
>> : +  fail("Expected the last query to fail, but got response: " + resp);
>> : +} catch (SolrException e) {
>> : +  assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
>> : +}
>> : +
>> :  try {
>> :SolrQuery query = new SolrQuery();
>> :query.setStart(-1).setQuery("*");
>> : @@ -1228,6 +1238,15 @@ public class TestDistributedSearch extends 
>> BaseDistributedSearchTestCase {
>> :  } catch (SolrException e) {
>> :assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
>> :  }
>> : +
>> : +try {
>> : +  SolrQuery query = new SolrQuery();
>> : +  query.setParam("rows", "non_numeric_value").setQuery("*");
>> : +  QueryResponse resp = query(query);
>> : +  fail("Expected the last query to fail, but got response: " + resp);
>> : +} catch (SolrException e) {
>> : +  assertEquals(ErrorCode.BAD_REQUEST.code, e.code());
>> : +}
>> :  resetExceptionIgnores();
>> :}
>> :  }
>> :
>> :
>>
>> -Hoss
>> http://www.lucidworks.com/
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-07-09 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536963#comment-16536963
 ] 

Mark Miller commented on SOLR-12297:


I don’t plan on doing any of this piecemeal. If it goes in, it will be like 
SolrCloud and be a full switch on a major version. Basically my branch is way 
better than the main branch. Either people will want to switch to it or they 
won’t. 

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






Register now for ApacheCon and save $250

2018-07-09 Thread Rich Bowen

Greetings, Apache software enthusiasts!

(You’re getting this because you’re on one or more dev@ or users@ lists 
for some Apache Software Foundation project.)


ApacheCon North America, in Montreal, is now just 80 days away, and 
early bird prices end in just two weeks - on July 21. Prices will be 
going up from $550 to $800 so register NOW to save $250, at 
http://apachecon.com/acna18


And don’t forget to reserve your hotel room. We have negotiated a 
special rate and the room block closes August 24. 
http://www.apachecon.com/acna18/venue.html


Our schedule includes over 100 talks and we’ll be featuring talks from 
dozens of ASF projects. We have inspiring keynotes from some of the 
brilliant members of our community and the wider tech space, including:


 * Myrle Krantz, PMC chair for Apache Fineract, and leader in the open 
source financing space
 * Cliff Schmidt, founder of Literacy Bridge (now Amplio) and creator 
of the Talking Book project

 * Bridget Kromhout, principal cloud developer advocate at Microsoft
 * Euan McLeod, Comcast engineer, and pioneer in streaming video

We’ll also be featuring tracks for Geospatial science, Tomcat, 
Cloudstack, and Big Data, as well as numerous other fields where Apache 
software is leading the way. See the full schedule at 
http://apachecon.com/acna18/schedule.html


As usual we’ll be running our Apache BarCamp, the traditional ApacheCon 
Hackathon, and the Wednesday evening Lightning Talks, too, so you’ll want 
to be there.


Register today at http://apachecon.com/acna18 and we’ll see you in Montreal!

--
Rich Bowen
VP, Conferences, The Apache Software Foundation
h...@apachecon.com
@ApacheCon

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12008) Settle a location for the "correct" log4j2.xml file.

2018-07-09 Thread Erick Erickson (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12008?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-12008:
--
Attachment: SOLR-12008.patch

> Settle a location for the "correct" log4j2.xml file.
> 
>
> Key: SOLR-12008
> URL: https://issues.apache.org/jira/browse/SOLR-12008
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: SOLR-12008.patch, SOLR-12008.patch, SOLR-12008.patch, 
> SOLR-12008.patch
>
>






[GitHub] lucene-solr pull request #415: Solr 12458

2018-07-09 Thread hbasejanitor
GitHub user hbasejanitor opened a pull request:

https://github.com/apache/lucene-solr/pull/415

Solr 12458

Support for ADLS

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hbasejanitor/lucene-solr SOLR-12458

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/415.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #415


commit 3dcd50e0e9e17a57601a90b0e0063abbae9fa442
Author: Mike Wingert 
Date:   2018-07-06T15:12:02Z

SOLR-12458 support for ADLS

commit 217351962e7616646f10e91a7c728f1222aeda7d
Author: Mike Wingert 
Date:   2018-07-06T15:13:33Z

SOLR-12458 support for ADLS

commit b7d2b1ce94ce1ca1acd44ff28725f178388b6674
Author: Mike Wingert 
Date:   2018-07-06T21:28:13Z

SOLR-12458 better cache for DirectoryEntry values

commit 6a69bcc35a8dbba22f96e7d687ef2472e40145ef
Author: Mike Wingert 
Date:   2018-07-09T14:49:33Z

SOLR-12458 fix cache timeout




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-07-09 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536963#comment-16536963
 ] 

Mark Miller edited comment on SOLR-12297 at 7/9/18 2:19 PM:


I don’t plan on doing any of this piecemeal. If it goes in, it will be like 
SolrCloud and be a full switch on a major version. Basically my branch is way 
better than the main branch. Either people will want to switch to it or they 
won’t. 

Also any sort of review beyond high level comments now won’t be very useful. 
This isn’t even close to done. 


was (Author: markrmil...@gmail.com):
I don’t plan on doing any of this price mail. If it goes in, it will be like 
SolrCloud and be a full switch on a major version. Basically my branch is way 
better than the main branch. Either people will want to switch to it or they 
won’t. 

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>






[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-07-09 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16536973#comment-16536973
 ] 

Mark Miller commented on SOLR-12297:


Of course anyone can feel free to pull in what they want. I’m fully focused on 
addressing SolrCloud shortcomings and making a branch with passing tests. 

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>






[jira] [Commented] (LUCENE-8394) TieredMergePolicy's handling of the case that all segments are less than the floor segment size is fragile

2018-07-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537241#comment-16537241
 ] 

Adrien Grand commented on LUCENE-8394:
--

Here is a patch that ensures that the allowed segment count is always at least 
{{segmentsPerTier}}.
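The computation in question can be sketched roughly like this. It is an approximation, not TieredMergePolicy's actual code; the parameter names loosely follow its setters, and the final clamp is the gist of the patch:

```java
// Hedged sketch of TieredMergePolicy's allowed-segment-count logic.
// The clamp at the end mirrors the fix: never allow fewer than segsPerTier
// segments, even when the whole index is smaller than the floor segment size.
public class AllowedSegCountDemo {
    static int allowedSegCount(long totalIndexBytes, long floorSegmentBytes,
                               double segsPerTier, int maxMergeAtOnce) {
        long levelSizeBytes = floorSegmentBytes;  // smallest "tier" size
        long bytesLeft = totalIndexBytes;
        double allowed = 0;
        while (true) {
            double segCountLevel = bytesLeft / (double) levelSizeBytes;
            if (segCountLevel < segsPerTier) {
                // last (partial) tier
                allowed += Math.ceil(segCountLevel);
                break;
            }
            allowed += segsPerTier;
            bytesLeft -= (long) (segsPerTier * levelSizeBytes);
            levelSizeBytes *= maxMergeAtOnce;  // next tier holds bigger segments
        }
        // the fix: with a tiny index the loop above yields 1, which would
        // invite endless merging; clamp to segsPerTier instead
        return (int) Math.max(allowed, segsPerTier);
    }

    public static void main(String[] args) {
        // index (1 MB) smaller than the floor size (2 MB):
        // without the clamp this would return 1
        System.out.println(allowedSegCount(1_000_000L, 2_000_000L, 10.0, 10));
    }
}
```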

> TieredMergePolicy's handling of the case that all segments are less than the 
> floor segment size is fragile
> --
>
> Key: LUCENE-8394
> URL: https://issues.apache.org/jira/browse/LUCENE-8394
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8394.patch
>
>
> In the case that the index size is less than the floor segment size, the 
> allowed number of segments is always computed as 1. In practice, it doesn't 
> keep merging indefinitely only because {{doFindMerges}} has some logic that 
> skips merging if the number of candidates is less than maxMergeAtOnce. This 
> looks a bit fragile to me.






[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-07-09 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537285#comment-16537285
 ] 

David Smiley commented on SOLR-12441:
-

Although arguably how the field should be indexed is appropriate to discuss 
here.  Couldn't the ancestor/descendant query ability be useful _outside_ of 
the ChildDocTransformer -- and thus it's not the only consumer/user of this 
field?  For example, maybe I want to find all parent documents (say blog posts) 
that contain a comment child document that in turn has a comment child by a 
certain author "name" field.  So I want to find where did somebody comment on 
someone else's comment.  Having a query by ancestor ability would allow me to 
filter where "comment" is an ancestor.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 7h 20m
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char e.g., '.'
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}
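A rough sketch of what such an URP could compute for each child document. This works over plain maps; the document shape, field names' final spelling, and the '.' separator are assumptions, not the final design:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Walks a nested document and stamps each child with the proposed
// metadata fields: parent id, nest level, and full nest path.
public class NestMetadataDemo {
    @SuppressWarnings("unchecked")
    static void annotate(Map<String, Object> doc, String parentId, int level, String path) {
        if (parentId != null) {
            doc.put("_nestParent_", parentId);
            doc.put("_nestLevel_", level);
            doc.put("_nestPath_", path);
        }
        List<Map<String, Object>> children =
                (List<Map<String, Object>>) doc.getOrDefault("children", List.of());
        for (Map<String, Object> child : children) {
            annotate(child, (String) doc.get("id"), level + 1,
                     path.isEmpty() ? "children" : path + ".children");
        }
    }

    public static void main(String[] args) {
        Map<String, Object> grand = new HashMap<>(Map.of("id", "3"));
        Map<String, Object> child = new HashMap<>(Map.of("id", "2", "children", List.of(grand)));
        Map<String, Object> root  = new HashMap<>(Map.of("id", "1", "children", List.of(child)));
        annotate(root, null, 0, "");
        System.out.println(grand.get("_nestPath_") + " level=" + grand.get("_nestLevel_"));
    }
}
```

With the path and level stamped at index time, a transformer can rebuild the hierarchy, and the level field can stand in for an explicit parentFilter as the proposal describes.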






[JENKINS] Lucene-Solr-repro - Build # 944 - Unstable

2018-07-09 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/944/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/97/consoleText

[repro] Revision: 9cd7daf8f907d93743463eb73cb921a3125c5909

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testSearchRate -Dtests.seed=FD5E959775AA7D79 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-EG -Dtests.timezone=America/Nassau -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=FD5E959775AA7D79 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-EG -Dtests.timezone=America/Nassau -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testEventFromRestoredState -Dtests.seed=FD5E959775AA7D79 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-EG -Dtests.timezone=America/Nassau -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testNodeLostTriggerRestoreState -Dtests.seed=FD5E959775AA7D79 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=ar-EG -Dtests.timezone=America/Nassau -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestCollectionsAPIViaSolrCloudCluster 
-Dtests.method=testCollectionCreateSearchDelete -Dtests.seed=FD5E959775AA7D79 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=bg 
-Dtests.timezone=America/Cambridge_Bay -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=testSplitWithChaosMonkey -Dtests.seed=FD5E959775AA7D79 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=vi 
-Dtests.timezone=Europe/Rome -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest -Dtests.method=test 
-Dtests.seed=FD5E959775AA7D79 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=vi -Dtests.timezone=Europe/Rome 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=FD5E959775AA7D79 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar 
-Dtests.timezone=US/Alaska -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestGenericDistributedQueue 
-Dtests.method=testDistributedQueue -Dtests.seed=FD5E959775AA7D79 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=pt 
-Dtests.timezone=America/Jujuy -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestGenericDistributedQueue 
-Dtests.seed=FD5E959775AA7D79 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=pt -Dtests.timezone=America/Jujuy 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
119717611094c755b271db6e7a8614fe9406bb5e
[repro] git fetch
[repro] git checkout 9cd7daf8f907d93743463eb73cb921a3125c5909

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   IndexSizeTriggerTest
[repro]   TestTriggerIntegration
[repro]   TestCollectionsAPIViaSolrCloudCluster
[repro]   ShardSplitTest
[repro]   TestGenericDistributedQueue
[repro] ant compile-test

[...truncated 3318 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=25 
-Dtests.class="*.IndexSizeTriggerTest|*.TestTriggerIntegration|*.TestCollectionsAPIViaSolrCloudCluster|*.ShardSplitTest|*.TestGenericDistributedQueue"
 -Dtests.showOutput=onerror  -Dtests.seed=FD5E959775AA7D79 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ar 
-Dtests.timezone=US/Alaska -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 136055 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestGenericDistributedQueue
[repro]   3/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   4/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro]   5/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest

[repro] Re-testing 100% failures at the tip of branch_7x
[repro] git fetch
[repro] git checkout branch_7x

[...truncated 4 lines...]
[repro] git merge --ff-only


[jira] [Updated] (LUCENE-8394) TieredMergePolicy's handling of the case that all segments are less than the floor segment size is fragile

2018-07-09 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-8394:
-
Attachment: LUCENE-8394.patch

> TieredMergePolicy's handling of the case that all segments are less than the 
> floor segment size is fragile
> --
>
> Key: LUCENE-8394
> URL: https://issues.apache.org/jira/browse/LUCENE-8394
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8394.patch
>
>






[jira] [Created] (LUCENE-8394) TieredMergePolicy's handling of the case that all segments are less than the floor segment size is fragile

2018-07-09 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-8394:


 Summary: TieredMergePolicy's handling of the case that all 
segments are less than the floor segment size is fragile
 Key: LUCENE-8394
 URL: https://issues.apache.org/jira/browse/LUCENE-8394
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
 Attachments: LUCENE-8394.patch

In the case that the index size is less than the floor segment size, the 
allowed number of segments is always computed as 1. In practice, it doesn't 
keep merging indefinitely only because {{doFindMerges}} has some logic that 
skips merging if the number of candidates is less than maxMergeAtOnce. This 
looks a bit fragile to me.






[GitHub] lucene-solr pull request #410: SOLR-12441: add deeply nested URP for nested ...

2018-07-09 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/410#discussion_r201076987
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/TestNestedUpdateProcessor.java ---
@@ -120,25 +122,41 @@ public void before() throws Exception {
 
   @Test
   public void testDeeplyNestedURPGrandChild() throws Exception {
+final String[] tests = {
+"/response/docs/[0]/id=='" + grandChildId + "'",
+"/response/docs/[0]/" + IndexSchema.NEST_PATH_FIELD_NAME + 
"=='children#0/grandChild#'"
+};
 indexSampleData(jDoc);
 
-assertJQ(req("q", IndexSchema.NEST_PATH_FIELD_NAME + ":*" + 
PATH_SEP_CHAR + "grandChild" + NUM_SEP_CHAR + "*" + NUM_SEP_CHAR,
+assertJQ(req("q", IndexSchema.NEST_PATH_FIELD_NAME + ":*" + 
PATH_SEP_CHAR + "grandChild" + NUM_SEP_CHAR + "*",
 "fl","*",
 "sort","id desc",
 "wt","json"),
-"/response/docs/[0]/id=='" + grandChildId + "'");
+tests);
   }
 
   @Test
   public void testDeeplyNestedURPChildren() throws Exception {
--- End diff --

This test tests the search behavior more so than literally what the URP is 
doing.  Can you make this more of a unit test around the result of the URP 
without actually indexing/searching anything?  And I would much prefer simpler 
test assertions that check a complete string value instead of making reference 
to many variables/constants that need to be concatenated.  This makes it 
plainly clear what the nest path will be; no mental gymnastics are needed to 
chase down vars/constants to figure it out.  I've mentioned before Yonik's 
advise on avoiding some constants in tests as it helps tests make us aware if 
in the future we might have a backwards-breaking change; so there are virtues 
to this way of thinking.  It would make this easier to review too.
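The point about literal expectations can be made concrete with a tiny sketch; the constants mirror the test's NUM_SEP_CHAR/PATH_SEP_CHAR but are hypothetical stand-ins here:

```java
// Demonstrates why a complete literal expectation reads more clearly than
// one assembled from constants; both produce the same string, but only the
// literal shows the nest path at a glance.
public class LiteralAssertionDemo {
    static final String NUM_SEP = "#";   // hypothetical stand-in for NUM_SEP_CHAR
    static final String PATH_SEP = "/";  // hypothetical stand-in for PATH_SEP_CHAR

    public static void main(String[] args) {
        String concatenated = "children" + NUM_SEP + "0" + PATH_SEP + "grandChild" + NUM_SEP;
        String literal = "children#0/grandChild#";  // plainly visible expectation
        if (!literal.equals(concatenated)) throw new AssertionError();
        System.out.println(literal);
    }
}
```

The literal also acts as a tripwire: if a future change alters the path format, the test fails visibly instead of the constants silently changing on both sides of the assertion.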


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12412) Leader should give up leadership when IndexWriter.tragedy occur

2018-07-09 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537291#comment-16537291
 ] 

Tomás Fernández Löbbe commented on SOLR-12412:
--

Thanks for working on this [~caomanhdat]! I'm wondering if there can be a way 
to give up leadership that's more lightweight than adding/removing replicas 
while still being safe. Maybe something that ends up doing a core reload?

> Leader should give up leadership when IndexWriter.tragedy occur
> ---
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Attachments: SOLR-12412.patch, SOLR-12412.patch
>
>
> When a leader meets some kind of unrecoverable exception (i.e. 
> CorruptedIndexException), the shard will go into a read-only state and a 
> human has to intervene. In that case, it would be best if the leader gives 
> up its leadership and lets other replicas become the leader. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-07-09 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537051#comment-16537051
 ] 

mosh commented on SOLR-12441:
-

{quote}See PathHierarchyTokenizerFactoryTest and the descendants vs ancestors 
distinction as well, via two differently indexed fields, for use cases involving 
descendants and ancestors if we need that. With some tricks we could use one 
field if we need all 3 (exact, descendants, ancestors).
{quote}
Oh, this is perfect; it makes things so much easier. I was contemplating how the 
transformer could check for all three options (exact, descendants, ancestors). 
Do you have any suggestions? I have been trying to use the "!field" transformer 
with boolean operators, to no avail.
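As a sketch of what the prefix-token approach buys (plain Java standing in for what a PathHierarchyTokenizer-analyzed field would index; the helper name and separator are made up for the sketch):

```java
import java.util.ArrayList;
import java.util.List;

public class PathTokensSketch {
  // Emits the prefix tokens a PathHierarchyTokenizer-style analyzer would
  // index for a nest path. Indexing every prefix of a doc's path lets a
  // single-term query match the doc by any of its ancestor paths; a
  // reverse-tokenized sibling field would cover the other direction.
  static List<String> prefixTokens(String path, char sep) {
    List<String> tokens = new ArrayList<>();
    int idx = -1;
    while ((idx = path.indexOf(sep, idx + 1)) != -1) {
      tokens.add(path.substring(0, idx));
    }
    tokens.add(path); // the full (exact) path is always the last token
    return tokens;
  }

  public static void main(String[] args) {
    // A doc at "first/second/third" is findable by querying any prefix:
    System.out.println(prefixTokens("first/second/third", '/'));
    // prints [first, first/second, first/second/third]
  }
}
```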

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> As discussed in 
> [SOLR-12298|https://issues.apache.org/jira/browse/SOLR-12298], there ought to 
> be an URP to add metadata fields to childDocuments in order to allow a 
> transformer to rebuild the original document hierarchy.
> {quote}I propose we add the following fields:
>  # __nestParent__
>  # _nestLevel_
>  # __nestPath__
> __nestParent__: This field will store the document's parent docId, to be 
> used for building the whole hierarchy, using a new document transformer, as 
> suggested by Jan on the mailing list.
> _nestLevel_: This field will store the level of the specified field in the 
> document, using an int value. This field can be used for the parentFilter, 
> eliminating the need to provide a parentFilter, which will be set by default 
> as "_level_:queriedFieldLevel".
> __nestPath__: This field will contain the full path, separated by a specific 
> reserved char, e.g. '.',
>  for example: "first.second.third".
>  This will enable users to search for a specific path, or provide a regular 
> expression to search for fields sharing the same name in different levels of 
> the document, filtering using the level key if needed.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12441) Add deeply nested documents URP

2018-07-09 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537051#comment-16537051
 ] 

mosh edited comment on SOLR-12441 at 7/9/18 3:17 PM:
-

{quote}See PathHierarchyTokenizerFactoryTest and the descendants vs ancestors 
distinction as well, via two differently indexed fields, for use cases involving 
descendants and ancestors if we need that. With some tricks we could use one 
field if we need all 3 (exact, descendants, ancestors).
{quote}
Oh, this is perfect; it makes things so much easier. I was contemplating how the 
transformer could check for all three options (exact, descendants, ancestors). 
Do you have any suggestions? I have been trying to use the "!field" transformer 
with boolean operators, to no avail.

Perhaps this discussion should be moved to the [ChildDocTransformer 
ticket|https://issues.apache.org/jira/browse/SOLR-12519]


was (Author: moshebla):
{quote}See PathHierarchyTokenizerFactoryTest and the descendants vs ancestors 
distinction as well, via two differently indexed fields, for use cases involving 
descendants and ancestors if we need that. With some tricks we could use one 
field if we need all 3 (exact, descendants, ancestors).
{quote}
Oh, this is perfect; it makes things so much easier. I was contemplating how the 
transformer could check for all three options (exact, descendants, ancestors). 
Do you have any suggestions? I have been trying to use the "!field" transformer 
with boolean operators, to no avail.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 7h
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8383) Fix computation of mergingBytes in TieredMergePolicy

2018-07-09 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537117#comment-16537117
 ] 

Adrien Grand commented on LUCENE-8383:
--

I went ahead and merged these issues so that it's easier to move forward with 
other changes related to TieredMP.

> Fix computation of mergingBytes in TieredMergePolicy
> 
>
> Key: LUCENE-8383
> URL: https://issues.apache.org/jira/browse/LUCENE-8383
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: LUCENE-8383.patch
>
>
> It looks like LUCENE-7976 changed mergingBytes to be computed as the sum of 
> the sizes of eligible segments, rather than the sum of the sizes of segments 
> that are currently merging, which feels wrong.
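A minimal sketch of the fix the description calls for (all names here are made up for the sketch; the real TieredMergePolicy fields differ):

```java
import java.util.Map;
import java.util.Set;

public class MergingBytesSketch {
  // Illustrative only: mergingBytes should sum the sizes of segments that
  // are actually in-flight in a merge, not of every merge-eligible segment.
  static long mergingBytes(Map<String, Long> segmentSizes, Set<String> merging) {
    long total = 0;
    for (Map.Entry<String, Long> e : segmentSizes.entrySet()) {
      if (merging.contains(e.getKey())) { // only currently-merging segments
        total += e.getValue();
      }
    }
    return total;
  }

  public static void main(String[] args) {
    Map<String, Long> sizes = Map.of("_0", 100L, "_1", 200L, "_2", 300L);
    System.out.println(mergingBytes(sizes, Set.of("_1"))); // prints 200
  }
}
```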



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8383) Fix computation of mergingBytes in TieredMergePolicy

2018-07-09 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537132#comment-16537132
 ] 

Erick Erickson commented on LUCENE-8383:


Thanks!

> Fix computation of mergingBytes in TieredMergePolicy
> 
>
> Key: LUCENE-8383
> URL: https://issues.apache.org/jira/browse/LUCENE-8383
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: LUCENE-8383.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12441) Add deeply nested documents URP

2018-07-09 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537081#comment-16537081
 ] 

David Smiley commented on SOLR-12441:
-

bq. Perhaps this discussion should be moved to the ChildDocTransformer ticket

Sure. This issue can be focused on what fields should be added, when, and 
what their values look like.  Perhaps some other issue will ultimately add 
these new fields to a non-test schema, and we'll then need to know how to 
index them.  At the moment this is an opt-in feature that requires the user 
not only to add the URP but also to add the fields to their schema and know 
which field types should be used.

> Add deeply nested documents URP
> ---
>
> Key: SOLR-12441
> URL: https://issues.apache.org/jira/browse/SOLR-12441
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Assignee: David Smiley
>Priority: Major
>  Time Spent: 7h
>  Remaining Estimate: 0h
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8383) Fix computation of mergingBytes in TieredMergePolicy

2018-07-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537106#comment-16537106
 ] 

ASF subversion and git services commented on LUCENE-8383:
-

Commit ad01baedbfacc4d7ccb375c6af6f79ff2c478509 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ad01bae ]

LUCENE-8383: Fix computation of mergingBytes in TieredMergePolicy.


> Fix computation of mergingBytes in TieredMergePolicy
> 
>
> Key: LUCENE-8383
> URL: https://issues.apache.org/jira/browse/LUCENE-8383
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-8383.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8383) Fix computation of mergingBytes in TieredMergePolicy

2018-07-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537104#comment-16537104
 ] 

ASF subversion and git services commented on LUCENE-8383:
-

Commit fa6a334edca530256521496c767058efbf27e796 in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fa6a334 ]

LUCENE-8383: Fix computation of mergingBytes in TieredMergePolicy.


> Fix computation of mergingBytes in TieredMergePolicy
> 
>
> Key: LUCENE-8383
> URL: https://issues.apache.org/jira/browse/LUCENE-8383
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-8383.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8385) Fix computation of the allowed segment count in TieredMergePolicy

2018-07-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537105#comment-16537105
 ] 

ASF subversion and git services commented on LUCENE-8385:
-

Commit 41ddac5b44649bcc0e0a092b5262d94aa909ffaf in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=41ddac5 ]

LUCENE-8385: Fix computation of the allowed segment count in TieredMergePolicy.


> Fix computation of the allowed segment count in TieredMergePolicy
> -
>
> Key: LUCENE-8385
> URL: https://issues.apache.org/jira/browse/LUCENE-8385
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-8385.patch
>
>
> LUCENE-7976 removed the logic that decreases {{totIndexBytes}} when a segment 
> is graced out because it is too large. This makes {{allowedSegmentCount}} 
> overestimated.
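A simplified sketch of the tiered segment budget the description refers to (parameter names are illustrative and the real TieredMergePolicy code differs in details); the point of the fix is that {{totIndexBytes}} must exclude segments too large to merge, or this budget is overestimated:

```java
public class AllowedSegCountSketch {
  // Walk the tiers: each tier of levelSize-sized segments may hold
  // segsPerTier segments, and each level up is maxMergeAtOnce times larger.
  static int allowedSegCount(long totIndexBytes, long minSegmentBytes,
                             int segsPerTier, int maxMergeAtOnce) {
    long levelSize = minSegmentBytes;
    long bytesLeft = totIndexBytes;
    double allowed = 0;
    while (true) {
      double segCountLevel = bytesLeft / (double) levelSize;
      if (segCountLevel < segsPerTier) {
        allowed += Math.ceil(segCountLevel); // partial top tier
        break;
      }
      allowed += segsPerTier;
      bytesLeft -= (long) segsPerTier * levelSize;
      levelSize *= maxMergeAtOnce;
    }
    return (int) allowed;
  }

  public static void main(String[] args) {
    // 100 MB of index at a 1 MB floor, 10 segs/tier, merge factor 10:
    // 10 segments at the 1 MB tier, then ceil(90/10) = 9 at the 10 MB tier.
    System.out.println(allowedSegCount(100L << 20, 1L << 20, 10, 10)); // prints 19
  }
}
```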



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8385) Fix computation of the allowed segment count in TieredMergePolicy

2018-07-09 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537103#comment-16537103
 ] 

ASF subversion and git services commented on LUCENE-8385:
-

Commit 3caee20f46a0022c617a0dea54115268aaa3e121 in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3caee20 ]

LUCENE-8385: Fix computation of the allowed segment count in TieredMergePolicy.


> Fix computation of the allowed segment count in TieredMergePolicy
> -
>
> Key: LUCENE-8385
> URL: https://issues.apache.org/jira/browse/LUCENE-8385
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: LUCENE-8385.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8383) Fix computation of mergingBytes in TieredMergePolicy

2018-07-09 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8383?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-8383.
--
   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

> Fix computation of mergingBytes in TieredMergePolicy
> 
>
> Key: LUCENE-8383
> URL: https://issues.apache.org/jira/browse/LUCENE-8383
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: LUCENE-8383.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-8385) Fix computation of the allowed segment count in TieredMergePolicy

2018-07-09 Thread Adrien Grand (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-8385.
--
   Resolution: Fixed
Fix Version/s: 7.5
   master (8.0)

> Fix computation of the allowed segment count in TieredMergePolicy
> -
>
> Key: LUCENE-8385
> URL: https://issues.apache.org/jira/browse/LUCENE-8385
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: master (8.0), 7.5
>
> Attachments: LUCENE-8385.patch
>
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12343) JSON Field Facet refinement can return incorrect counts/stats for sorted buckets

2018-07-09 Thread Hoss Man (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16537306#comment-16537306
 ] 

Hoss Man commented on SOLR-12343:
-

Ok ... fresh eyes and I see the problem.

When {{final int overreq = 0}} we don't add any "filler" docs, which means that 
when the nested facet test happens, shardC0 and shardC1 disagree about the "top 
term" for the parent facet on the {{all_ss}} field -- shardC0 only knows about 
{{z_all}} while shardC1 has a tie between {{z_all}} and {{some}}, and {{some}} 
wins the tie due to index order -- so when that parent facet uses 
{{overrequest:0}} the initial merge logic doesn't have any contributions from 
shardC1 for the chosen {{all_ss:z_all}} bucket ... so it only knows to ask to 
refine the top 3 child buckets it does know about (from shardC0): "A,B,C".  If 
the parent facet uses any overrequest larger than 0, then it would get the 
{{all_ss:z_all}} bucket from shardC1 as well, and have some child buckets to 
consider, enough to know that C is a bad candidate and it should be refining X 
instead.

On the flip side, when {{final int overreq = 1}} (or anything higher) the 
addition of even a few filler docs is enough to skew the {{all_ss}} term stats 
on shardC1, such that it *also* thinks {{z_all}} is the top term, so 
regardless of the amount of overrequest on the top facet, the phase #1 merge 
has buckets from both shards for the child facet to consider.

I remember when I was writing this test and included the {{some}} terms, the 
entire point was to stress the case where the 2 shards disagree about the "top" 
term from the parent facet -- but apparently when adding the filler docs/terms 
randomization I broke that, so it's not always true; it only happens when there 
are no filler docs.  But it also seems like an unfair test, because when they 
do disagree, there's no reason for the merge logic to think X is a worthwhile 
term to refine.  What matters is that in this case, C is accurately refined.

I'm working up a test fix...
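The termX/termY mechanics in the issue description below can be sketched as a toy simulation (the counts are invented, and the mechanics follow the description, not the real JSON Facet merge code):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class RefinementSketch {
  // Toy simulation of the "count asc" refinement bug.
  static List<String> finalTopN() {
    // Shard2's true counts for the terms shard1 returned in phase #1:
    Map<String, Integer> shard2 = Map.of("termX", 50, "termY", 60);

    // After phase #1 the coordinator knows: termX=1, termY=2 (shard1 only),
    // other=4 (contributions from both shards).
    Map<String, Integer> merged =
        new HashMap<>(Map.of("termX", 1, "termY", 2, "other", 4));

    // Only termX made the limit=2 cut, so only termX is refined:
    merged.merge("termX", shard2.get("termX"), Integer::sum); // termX -> 51

    // Re-sorting by count asc after refinement bumps termX out, and the
    // unrefined termY bucket (true count 62, reported as 2) bubbles in:
    return merged.entrySet().stream()
        .sorted(Map.Entry.comparingByValue())
        .limit(2)
        .map(Map.Entry::getKey)
        .toList();
  }

  public static void main(String[] args) {
    System.out.println(finalTopN()); // prints [termY, other]
  }
}
```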


> JSON Field Facet refinement can return incorrect counts/stats for sorted 
> buckets
> 
>
> Key: SOLR-12343
> URL: https://issues.apache.org/jira/browse/SOLR-12343
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Major
> Attachments: SOLR-12343.patch, SOLR-12343.patch, SOLR-12343.patch, 
> SOLR-12343.patch, SOLR-12343.patch
>
>
> The way JSON Facet's simple refinement "re-sorts" buckets after refinement 
> can cause _refined_ buckets to be "bumped out" of the topN based on the 
> refined counts/stats depending on the sort - causing _unrefined_ buckets 
> originally discounted in phase#2 to bubble up into the topN and be returned 
> to clients *with inaccurate counts/stats*
> The simplest way to demonstrate this bug (in some data sets) is with a 
> {{sort: 'count asc'}} facet:
>  * assume shard1 returns termX & termY in phase#1 because they have very low 
> shard1 counts
>  ** but *not* returned at all by shard2, because these terms both have very 
> high shard2 counts.
>  * Assume termX has a slightly lower shard1 count than termY, such that:
>  ** termX "makes the cut" for the limit=N topN buckets
>  ** termY does not make the cut, and is the "N+1" known bucket at the end of 
> phase#1
>  * termX then gets included in the phase#2 refinement request against shard2
>  ** termX now has a much higher _known_ total count than termY
>  ** the coordinator now sorts termX "worse" in the sorted list of buckets 
> than termY
>  ** which causes termY to bubble up into the topN
>  * termY is ultimately included in the final result _with incomplete 
> count/stat/sub-facet data_ instead of termX
>  ** this is all independent of the possibility that termY may actually have a 
> significantly higher total count than termX across the entire collection
>  ** the key problem is that all/most of the other terms returned to the 
> client have counts/stats that are the accumulation of all shards, but termY 
> only has the contributions from shard1
> Important Notes:
>  * This scenario can happen regardless of the amount of overrequest used. 
> Additional overrequest just increases the number of "extra" terms needed in 
> the index with "better" sort values than termX & termY in shard2
>  * {{sort: 'count asc'}} is not just an exceptional/pathological case:
>  ** any function sort where additional data provided by shards during 
> refinement can cause a bucket to "sort worse" can also cause this problem.
>  ** Examples: {{sum(price_i) asc}} , {{min(price_i) desc}} , {{avg(price_i) 
> asc|desc}} , etc...



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 734 - Unstable!

2018-07-09 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/734/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseG1GC

15 tests failed.
FAILED:  org.apache.solr.cloud.TestPullReplica.testKillLeader

Error Message:
Replica core_node4 not up to date after 10 seconds expected:<1> but was:<0>

Stack Trace:
java.lang.AssertionError: Replica core_node4 not up to date after 10 seconds 
expected:<1> but was:<0>
at 
__randomizedtesting.SeedInfo.seed([89003455250E12D2:C016C0E147B58684]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at 
org.apache.solr.cloud.TestPullReplica.waitForNumDocsInAllReplicas(TestPullReplica.java:542)
at 
org.apache.solr.cloud.TestPullReplica.doTestNoLeader(TestPullReplica.java:490)
at 
org.apache.solr.cloud.TestPullReplica.testKillLeader(TestPullReplica.java:309)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)