[jira] [Commented] (SOLR-12303) Subquery Doc transform doesn't inherit path from original request

2018-05-06 Thread Munendra S N (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465449#comment-16465449 ]

Munendra S N commented on SOLR-12303:
-

[^SOLR-12303.patch]
[~mkhludnev]
Using request.getPath(). Also handled the special case for handleSelect=true 
(the javadoc for getParentPath() might need improvement).
The failing test passes now.
{code:java}
  /**
   * {@link CommonParams#PATH} is given higher priority than {@link CommonParams#QT}.
   * When handleSelect=true and no /select handler is configured, the path is
   * picked from {@link CommonParams#QT}.
   * When handleSelect=false and /select is not configured, the request would never
   * reach the subquery transformer (the main query itself would fail).
   *
   * @return the main query's path
   */
  private String getParentPath() {
    String path = request.getPath();
    // Didn't find another way to check whether a handler is configured for this path.
    SolrRequestHandler handler = request.getCore().getRequestHandler(path);
    if (handler != null) {
      return path;
    }
    return request.getParams().get(CommonParams.QT);
  }
{code}
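
To make the behavior concrete, here is a rough SolrJ sketch (illustrative only; the collection and field names are assumed, not taken from the patch) of a request whose subquery should now inherit /search, with the existing qt-style override shown commented out:
{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

public class SubqueryPathExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("{!parent which=parent_field:true v=$origQuery}");
      q.set("origQuery", "text:foo"); // illustrative inner query
      q.set("fl", "uniqueId,score,_children_:[subquery]");
      q.set("_children_.q", "{!edismax qf=parentId v=$row.uniqueId}");
      // Main request path; with this patch the subquery should inherit it.
      q.setRequestHandler("/search");
      // Existing per-subquery override, if inheriting is not wanted:
      // q.set("_children_.qt", "/select");
      QueryResponse rsp = client.query("k_test", q);
      System.out.println(rsp.getResults().getNumFound());
    }
  }
}
{code}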


> Subquery Doc transform doesn't inherit path from original request
> -
>
> Key: SOLR-12303
> URL: https://issues.apache.org/jira/browse/SOLR-12303
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, 
> SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch
>
>
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc=AND=json={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})=false=uniqueId=score=_children_:[subquery]=uniqueId=false=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}=1
> {code}
> For this request, even though the path is */search*, the subquery request 
> would be fired on the handler */select*.
> The subquery request should inherit the parent request's handler, and there 
> should be an option to override this behavior (an override is already 
> available by specifying *qt*).





[jira] [Updated] (SOLR-12303) Subquery Doc transform doesn't inherit path from original request

2018-05-06 Thread Munendra S N (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-12303:

Attachment: SOLR-12303.patch

> Subquery Doc transform doesn't inherit path from original request
> -
>
> Key: SOLR-12303
> URL: https://issues.apache.org/jira/browse/SOLR-12303
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, 
> SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch
>
>
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc=AND=json={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})=false=uniqueId=score=_children_:[subquery]=uniqueId=false=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}=1
> {code}
> For this request, even though the path is */search*, the subquery request 
> would be fired on the handler */select*.
> The subquery request should inherit the parent request's handler, and there 
> should be an option to override this behavior (an override is already 
> available by specifying *qt*).






[jira] [Created] (SOLR-12320) Not all multi-part post requests should create tmp files.

2018-05-06 Thread Mark Miller (JIRA)
Mark Miller created SOLR-12320:
--

 Summary: Not all multi-part post requests should create tmp files.
 Key: SOLR-12320
 URL: https://issues.apache.org/jira/browse/SOLR-12320
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller


We create tmp files for multi-part posts because they are often uploaded files 
for Solr Cell or similar, but we also sometimes write params only, or params 
and updates, as a multi-part post. Those requests should not create any tmp files.
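
One possible direction, sketched against Commons FileUpload (which Solr's multi-part parsing builds on); the threshold and names here are illustrative, not a committed design:
{code:java}
import java.util.List;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class MultipartParseSketch {
  // Parse a multi-part request keeping small items (e.g. plain params) in
  // memory; only items above the threshold spill to a tmp file on disk.
  static List<FileItem> parse(HttpServletRequest req) throws Exception {
    DiskFileItemFactory factory = new DiskFileItemFactory();
    factory.setSizeThreshold(100 * 1024); // illustrative: keep up to 100 KB in memory
    ServletFileUpload upload = new ServletFileUpload(factory);
    return upload.parseRequest(req);
  }
}
{code}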






[jira] [Created] (SOLR-12319) The static FileCleaningTracker we use to clean multi part post tmp files has race issues when used in tests with multiple JettySolrRunners.

2018-05-06 Thread Mark Miller (JIRA)
Mark Miller created SOLR-12319:
--

 Summary: The static FileCleaningTracker we use to clean multi part 
post tmp files has race issues when used in tests with multiple 
JettySolrRunners.
 Key: SOLR-12319
 URL: https://issues.apache.org/jira/browse/SOLR-12319
 Project: Solr
  Issue Type: Test
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Mark Miller
Assignee: Mark Miller









[jira] [Commented] (SOLR-11934) Visit Solr logging, it's too noisy.

2018-05-06 Thread Mark Miller (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-11934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465435#comment-16465435 ]

Mark Miller commented on SOLR-11934:


I'm pro reducing the amount of default index logging. It's very rarely been 
useful to me in debugging problems from a customer site. When you need that 
level of detail, it's usually when trying to trace things during a reproduction 
attempt or something. My experience is that all this logging makes the actual 
debugging you do on a production system's logs much more difficult, and I 
can't think of a case where I have used this info from production logs.

> Visit Solr logging, it's too noisy.
> ---
>
> Key: SOLR-11934
> URL: https://issues.apache.org/jira/browse/SOLR-11934
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> I think we have way too much INFO level logging. Or, perhaps more correctly, 
> Solr logging needs to be examined and messages logged at an appropriate level.
> We log every update at an INFO level for instance. But I think we log LIR at 
> INFO as well. As a sysadmin I don't care to have my logs polluted with a 
> message for every update, but if I'm trying to keep my system healthy I want 
> to see LIR messages and try to understand why.
> Plus, in large installations logging at INFO level is creating a _LOT_ of 
> files.
> What I want to discuss on this JIRA is
> 1> What kinds of messages do we want to log at WARN, INFO, DEBUG, and TRACE 
> levels?
> 2> Who's the audience at each level? For a running system that's functioning, 
> sysops folks would really like WARN messages that mean something needs 
> attention, for instance. If I'm troubleshooting, should I turn on INFO? DEBUG? 
> TRACE?
> So let's say we get some kind of agreement as to the above. Then I propose 
> three things
> 1> Someone (and probably me but all help gratefully accepted) needs to go 
> through our logging and assign appropriate levels. This will take quite a 
> while, I intend to work on it in small chunks.
> 2> Actually answer whether unnecessary objects are created when something 
> like log.info("whatever {}", someObjectOrMethodCall); is invoked (see the 
> sketch after this description). Is this independent of the logging 
> implementation used? The SLF4J and Log4j docs seem a bit contradictory.
> 3> Maybe regularize log, logger, LOG as variable names, but that's a nit.
> As a tactical approach, I suggest we tag each LoggerFactory.getLogger in 
> files we work on with //SOLR-(whatever number is assigned when I create 
> this). We can remove them all later, but since I expect to approach this 
> piecemeal it'd be nice to keep track of which files have been done already.
> Finally, I really really really don't want to do this all at once. There are 
> 5-6 thousand log messages. Even at 1,000 a week that's 6 weeks, even starting 
> now it would probably span the 7.3 release.
> This will probably be an umbrella issue so we can keep all the commits 
> straight and people can volunteer to "fix the files in core" as a separate 
> piece of work (hint).
> There are several existing JIRAs about logging in general, let's link them in 
> here as well.
> Let the discussion begin!
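
On point <2> above, a small SLF4J sketch (nothing Solr-specific; the names are illustrative): with {} placeholders the message string is only built if the level is enabled, but the arguments are still evaluated, so an expensive call still runs unless guarded.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingCostSketch {
  private static final Logger log = LoggerFactory.getLogger(LoggingCostSketch.class);

  static String expensiveSummary() {
    // Imagine this walks a large data structure.
    return "...";
  }

  public static void main(String[] args) {
    // The format string is only assembled if DEBUG is enabled,
    // but expensiveSummary() is evaluated unconditionally.
    log.debug("state: {}", expensiveSummary());

    // Guarding skips the argument evaluation entirely when DEBUG is off.
    if (log.isDebugEnabled()) {
      log.debug("state: {}", expensiveSummary());
    }
  }
}
{code}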






[jira] [Commented] (LUCENE-7976) Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of very large segments

2018-05-06 Thread Erick Erickson (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-7976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465430#comment-16465430 ]

Erick Erickson commented on LUCENE-7976:


[~simonw][~mikemccand] So I'm finally thinking about the tests. Simon's totally 
right, I really hadn't been thinking about tests yet, but now that he prompted 
me it's, well, obvious that there are some that can be written to test things 
like respecting max segment size by default etc...

Anyway, since I don't know what documents are in what segments, I can't really 
predict some things, like which specific segments should be merged under 
various conditions.

I see two approaches:
1> delete documents from specific segments. I'm guessing this is just getting 
terms(field) from a leaf reader and enumerating? (A rough sketch of this idea 
appears at the end of this comment.)

2> Just delete some random documents, examine the segments before and after a 
forceMerge or expungeDeletes with various parameters to see if my expectations 
are met.

Got any preferences?

Oh, and the test failures were because I'd missed a check. I've incorporated 
the rest of Simon's comments; no new patch until tests.
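
For approach <1>, a rough sketch of the leaf-reader term enumeration idea (assuming a unique-id field; this is a guess at the mechanics, not code from any patch):
{code:java}
import java.io.IOException;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.Term;
import org.apache.lucene.index.Terms;
import org.apache.lucene.index.TermsEnum;
import org.apache.lucene.util.BytesRef;

public class DeleteFromOneSegment {
  // Delete up to maxDeletes docs "from" one segment by enumerating the terms
  // of a unique-id field in that segment's leaf reader and deleting by Term.
  // Since the ids are unique, each delete only affects that segment.
  static void deleteSome(DirectoryReader reader, IndexWriter writer,
                         String idField, int leafOrd, int maxDeletes) throws IOException {
    LeafReaderContext leaf = reader.leaves().get(leafOrd);
    Terms terms = leaf.reader().terms(idField);
    if (terms == null) return;
    TermsEnum te = terms.iterator();
    BytesRef term;
    int deleted = 0;
    while ((term = te.next()) != null && deleted < maxDeletes) {
      // TermsEnum reuses the BytesRef, so copy it before handing it off.
      writer.deleteDocuments(new Term(idField, BytesRef.deepCopyOf(term)));
      deleted++;
    }
  }
}
{code}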

> Make TieredMergePolicy respect maxSegmentSizeMB and allow singleton merges of 
> very large segments
> -
>
> Key: LUCENE-7976
> URL: https://issues.apache.org/jira/browse/LUCENE-7976
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Attachments: LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, 
> LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch, LUCENE-7976.patch
>
>
> We're seeing situations "in the wild" where there are very large indexes (on 
> disk) handled quite easily in a single Lucene index. This is particularly 
> true as features like docValues move data into MMapDirectory space. The 
> current TMP algorithm allows on the order of 50% deleted documents as per a 
> dev list conversation with Mike McCandless (and his blog here:  
> https://www.elastic.co/blog/lucenes-handling-of-deleted-documents).
> Especially in the current era of very large indexes in aggregate (think many 
> TB), solutions like "you need to distribute your collection over more shards" 
> become very costly. Additionally, the tempting "optimize" button exacerbates 
> the issue since once you form, say, a 100G segment (by 
> optimizing/forceMerging) it is not eligible for merging until 97.5G of the 
> docs in it are deleted (current default 5G max segment size).
> The proposal here would be to add a new parameter to TMP, something like 
>  (no, that's not a serious name; suggestions 
> welcome) which would default to 100 (or the same behavior we have now).
> So if I set this parameter to, say, 20%, and the max segment size stays at 
> 5G, the following would happen when segments were selected for merging:
> > any segment with > 20% deleted documents would be merged or rewritten NO 
> > MATTER HOW LARGE. There are two cases,
> >> the segment has < 5G "live" docs. In that case it would be merged with 
> >> smaller segments to bring the resulting segment up to 5G. If no smaller 
> >> segments exist, it would just be rewritten
> >> The segment has > 5G "live" docs (the result of a forceMerge or optimize). 
> >> It would be rewritten into a single segment removing all deleted docs no 
> >> matter how big it is to start. The 100G example above would be rewritten 
> >> to an 80G segment for instance.
> Of course this would lead to potentially much more I/O which is why the 
> default would be the same behavior we see now. As it stands now, though, 
> there's no way to recover from an optimize/forceMerge except to re-index from 
> scratch. We routinely see 200G-300G Lucene indexes at this point "in the 
> wild" with 10s of  shards replicated 3 or more times. And that doesn't even 
> include having these over HDFS.
> Alternatives welcome! Something like the above seems minimally invasive. A 
> new merge policy is certainly an alternative.






[JENKINS] Lucene-Solr-Tests-master - Build # 2520 - Still Unstable

2018-05-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2520/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode

Error Message:
unexpected DELETENODE status: 
{responseHeader={status=0,QTime=6},status={state=notfound,msg=Did not find 
[search_rate_trigger3/5c141f2bece6acT8rnyj8qr0ykxbteo80adbavnf/0] in any tasks 
queue}}

Stack Trace:
java.lang.AssertionError: unexpected DELETENODE status: 
{responseHeader={status=0,QTime=6},status={state=notfound,msg=Did not find 
[search_rate_trigger3/5c141f2bece6acT8rnyj8qr0ykxbteo80adbavnf/0] in any tasks 
queue}}
at 
__randomizedtesting.SeedInfo.seed([3765A8C081EB8957:15F76642B621062A]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.lambda$testDeleteNode$6(SearchRateTriggerIntegrationTest.java:668)
at java.util.ArrayList.forEach(ArrayList.java:1257)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest.testDeleteNode(SearchRateTriggerIntegrationTest.java:660)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+5) - Build # 7305 - Still Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7305/
Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseParallelGC

25 tests failed.
FAILED:  
org.apache.solr.handler.component.SpellCheckComponentTest.testExtendedResultsCount

Error Message:
Directory 
(MMapDirectory@C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.SpellCheckComponentTest_45B8DBB9F29214E3-002\init-core-data-001\spellchecker1
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@2f596113) still has 
pending deleted files; cannot initialize IndexWriter

Stack Trace:
java.lang.IllegalArgumentException: Directory 
(MMapDirectory@C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.SpellCheckComponentTest_45B8DBB9F29214E3-002\init-core-data-001\spellchecker1
 lockFactory=org.apache.lucene.store.NativeFSLockFactory@2f596113) still has 
pending deleted files; cannot initialize IndexWriter
at 
__randomizedtesting.SeedInfo.seed([45B8DBB9F29214E3:7B8CF400D78F8104]:0)
at org.apache.lucene.index.IndexWriter.(IndexWriter.java:699)
at 
org.apache.lucene.search.spell.SpellChecker.clearIndex(SpellChecker.java:455)
at 
org.apache.solr.spelling.IndexBasedSpellChecker.build(IndexBasedSpellChecker.java:87)
at 
org.apache.solr.handler.component.SpellCheckComponent.prepare(SpellCheckComponent.java:128)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2510)
at org.apache.solr.util.TestHarness.query(TestHarness.java:337)
at org.apache.solr.util.TestHarness.query(TestHarness.java:319)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:982)
at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:951)
at 
org.apache.solr.handler.component.SpellCheckComponentTest.testExtendedResultsCount(SpellCheckComponentTest.java:135)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
  

[jira] [Commented] (SOLR-12297) Create a good SolrClient for SolrCloud paving the way for async requests, HTTP2, multiplexing, and the latest & greatest Jetty features.

2018-05-06 Thread Mark Miller (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465408#comment-16465408 ]

Mark Miller commented on SOLR-12297:


This is all very rough and in progress - only the client itself will come in 
soon - but for anyone interested the current WIP can be seen here: 
https://github.com/markrmiller/lucene-solr/commit/f1134ee6581ffd11aea6c1413d0f4375aa8406d9
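
For anyone skimming the WIP, a minimal sketch of the Jetty HttpClient shape being evaluated (assumed Jetty 9.4 APIs; the endpoint URL is illustrative). The transport is the only thing that changes between HTTP/1.1 and HTTP/2, and blocking and async requests share the same client:
{code:java}
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class JettyClientSketch {
  public static void main(String[] args) throws Exception {
    SslContextFactory ssl = new SslContextFactory();

    // HTTP/1.1: the default transport.
    HttpClient http1 = new HttpClient(ssl);
    // HTTP/2: same client API, only the transport changes (not started here).
    HttpClient http2 = new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()), ssl);

    http1.start();
    try {
      // Blocking request.
      ContentResponse rsp = http1.GET("http://localhost:8983/solr/admin/info/system");
      System.out.println(rsp.getStatus());

      // Async variant on the same client (real code would await the result
      // before stopping the client).
      http1.newRequest("http://localhost:8983/solr/admin/info/system")
          .send(result -> System.out.println("async: " + result.getResponse().getStatus()));
    } finally {
      http1.stop();
    }
  }
}
{code}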

> Create a good SolrClient for SolrCloud paving the way for async requests, 
> HTTP2, multiplexing, and the latest & greatest Jetty features.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[JENKINS] Lucene-Solr-repro - Build # 586 - Still Unstable

2018-05-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/586/

[...truncated 71 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2519/consoleText

[repro] Revision: 0922e58c2c0867815d34c887b182754764cfaa4f

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testSplitIntegration -Dtests.seed=CE753950030B9D30 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ro 
-Dtests.timezone=America/Nome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=IndexSizeTriggerTest 
-Dtests.method=testMergeIntegration -Dtests.seed=CE753950030B9D30 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ro 
-Dtests.timezone=America/Nome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=MaxSizeAutoCommitTest 
-Dtests.method=endToEndTest -Dtests.seed=CE753950030B9D30 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=vi -Dtests.timezone=PST8PDT 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
5fc725154001c6283315802e1a2193d51d00f9aa
[repro] git fetch
[repro] git checkout 0922e58c2c0867815d34c887b182754764cfaa4f

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]   IndexSizeTriggerTest
[repro]   MaxSizeAutoCommitTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.IndexSizeTriggerTest|*.MaxSizeAutoCommitTest" 
-Dtests.showOutput=onerror  -Dtests.seed=CE753950030B9D30 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=ro -Dtests.timezone=America/Nome 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[...truncated 20512 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: org.apache.solr.update.MaxSizeAutoCommitTest
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of master
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.seed=CE753950030B9D30 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=ro -Dtests.timezone=America/Nome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 9526 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest

[repro] Re-testing 100% failures at the tip of master without a seed
[repro] ant clean

[...truncated 8 lines...]
[repro] Test suites by module:
[repro]    solr/core
[repro]   IndexSizeTriggerTest
[repro] ant compile-test

[...truncated 3298 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.IndexSizeTriggerTest" -Dtests.showOutput=onerror  
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=ro 
-Dtests.timezone=America/Nome -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 6491 lines...]
[repro] Setting last failure code to 256

[repro] Failures at the tip of master without a seed:
[repro]   5/5 failed: org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest
[repro] git checkout 5fc725154001c6283315802e1a2193d51d00f9aa

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-Tests-7.x - Build # 598 - Still Unstable

2018-05-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/598/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/35)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
    "shard2":{
      "replicas":{
        "core_node3":{
          "core":"testSplitIntegration_collection_shard2_replica_n3",
          "leader":"true",
          "SEARCHER.searcher.maxDoc":11,
          "SEARCHER.searcher.deletedDocs":0,
          "INDEX.sizeInBytes":1,
          "node_name":"127.0.0.1:10008_solr",
          "state":"active",
          "type":"NRT",
          "SEARCHER.searcher.numDocs":11},
        "core_node4":{
          "core":"testSplitIntegration_collection_shard2_replica_n4",
          "SEARCHER.searcher.maxDoc":11,
          "SEARCHER.searcher.deletedDocs":0,
          "INDEX.sizeInBytes":1,
          "node_name":"127.0.0.1:10007_solr",
          "state":"active",
          "type":"NRT",
          "SEARCHER.searcher.numDocs":11}},
      "range":"0-7fffffff",
      "state":"active"},
    "shard1":{
      "stateTimestamp":"1525770938951915550",
      "replicas":{
        "core_node1":{
          "core":"testSplitIntegration_collection_shard1_replica_n1",
          "leader":"true",
          "SEARCHER.searcher.maxDoc":14,
          "SEARCHER.searcher.deletedDocs":0,
          "INDEX.sizeInBytes":1,
          "node_name":"127.0.0.1:10008_solr",
          "state":"active",
          "type":"NRT",
          "SEARCHER.searcher.numDocs":14},
        "core_node2":{
          "core":"testSplitIntegration_collection_shard1_replica_n2",
          "SEARCHER.searcher.maxDoc":14,
          "SEARCHER.searcher.deletedDocs":0,
          "INDEX.sizeInBytes":1,
          "node_name":"127.0.0.1:10007_solr",
          "state":"active",
          "type":"NRT",
          "SEARCHER.searcher.numDocs":14}},
      "range":"80000000-ffffffff",
      "state":"inactive"},
    "shard1_1":{
      "parent":"shard1",
      "stateTimestamp":"1525770938980147350",
      "range":"c0000000-ffffffff",
      "state":"active",
      "replicas":{
        "core_node10":{
          "leader":"true",
          "core":"testSplitIntegration_collection_shard1_1_replica1",
          "SEARCHER.searcher.maxDoc":7,
          "SEARCHER.searcher.deletedDocs":0,
          "INDEX.sizeInBytes":1,
          "node_name":"127.0.0.1:10007_solr",
          "base_url":"http://127.0.0.1:10007/solr",
          "state":"active",
          "type":"NRT",
          "SEARCHER.searcher.numDocs":7},
        "core_node9":{
          "core":"testSplitIntegration_collection_shard1_1_replica0",
          "SEARCHER.searcher.maxDoc":7,
          "SEARCHER.searcher.deletedDocs":0,
          "INDEX.sizeInBytes":1,
          "node_name":"127.0.0.1:10008_solr",
          "base_url":"http://127.0.0.1:10008/solr",
          "state":"active",
          "type":"NRT",
          "SEARCHER.searcher.numDocs":7}}},
    "shard1_0":{
      "parent":"shard1",
      "stateTimestamp":"1525770938977806850",
      "range":"80000000-bfffffff",
      "state":"active",
      "replicas":{
        "core_node7":{
          "core":"testSplitIntegration_collection_shard1_0_replica0",
          "SEARCHER.searcher.maxDoc":7,
          "SEARCHER.searcher.deletedDocs":0,
          "INDEX.sizeInBytes":1,
          "node_name":"127.0.0.1:10008_solr",
          "base_url":"http://127.0.0.1:10008/solr",
          "state":"active",
          "type":"NRT",
          "SEARCHER.searcher.numDocs":7},
        "core_node8":{
          "core":"testSplitIntegration_collection_shard1_0_replica1",
          "SEARCHER.searcher.maxDoc":7,
          "SEARCHER.searcher.deletedDocs":0,
          "INDEX.sizeInBytes":1,
          "node_name":"127.0.0.1:10007_solr",
          "base_url":"http://127.0.0.1:10007/solr",
          "state":"active",
          "type":"NRT",
          "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/35)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
"shard2":{
  "replicas":{
"core_node3":{
  "core":"testSplitIntegration_collection_shard2_replica_n3",
  "leader":"true",
  "SEARCHER.searcher.maxDoc":11,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":1,
  "node_name":"127.0.0.1:10008_solr",
  "state":"active",
  "type":"NRT",
  "SEARCHER.searcher.numDocs":11},

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 1860 - Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1860/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([809E1CF325E0DFB7:E3552A71BC2FAC9A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:

[jira] [Resolved] (SOLR-12293) Updates need to use their own connection pool to maintain connection reuse and prevent spurious recoveries.

2018-05-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-12293.

   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

> Updates need to use their own connection pool to maintain connection reuse 
> and prevent spurious recoveries.
> ---
>
> Key: SOLR-12293
> URL: https://issues.apache.org/jira/browse/SOLR-12293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12293.patch
>
>
> Currently the pool is shared too broadly - for example, during replication we 
> don't guarantee we read the full streams when downloading index files, and we 
> don't necessarily want to: emptying the stream for a huge file after an error 
> or abort is too expensive. We can't have these connections pollute the update 
> connection pool.
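
A minimal sketch of the separation idea, assuming Apache HttpClient 4.x (the limits and names are illustrative, not the committed configuration):
{code:java}
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class SeparatePoolsSketch {
  // Dedicated pool for updates: connections here are always fully read and reused.
  static CloseableHttpClient newUpdateClient() {
    PoolingHttpClientConnectionManager pool = new PoolingHttpClientConnectionManager();
    pool.setMaxTotal(10000);           // illustrative limits
    pool.setDefaultMaxPerRoute(100);
    return HttpClients.custom().setConnectionManager(pool).build();
  }

  // Separate pool for recovery/replication: an aborted index-file download
  // only poisons connections in this pool, never the update pool above.
  static CloseableHttpClient newRecoveryClient() {
    PoolingHttpClientConnectionManager pool = new PoolingHttpClientConnectionManager();
    pool.setMaxTotal(100);
    pool.setDefaultMaxPerRoute(10);
    return HttpClients.custom().setConnectionManager(pool).build();
  }
}
{code}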






[jira] [Commented] (SOLR-12297) Create a good SolrClient for SolrCloud paving the way for async requests, HTTP2, multiplexing, and the latest & greatest Jetty features.

2018-05-06 Thread Mark Miller (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465363#comment-16465363 ]

Mark Miller commented on SOLR-12297:


I'm sure that was for some special case. HTTP/2 is never going to be a strong 
enough wind for a Java version change in the project. It turns out, as 
mentioned above, we don't need Java 9 for good out-of-the-box SSL or protocol 
negotiation anyway.

I've got most of this working. The main things to finish are a few places where 
low-level Apache HttpClient has been used, as well as code that uses 
HttpClientUtil for advanced configuration or callback injection. Most things 
and tests are working though.

I'm not putting that in any time soon though. I'll push that to a branch and 
when the new client is cleaned up and a little nicer, focus on putting that in 
first. Then over time we can bring in the rest of the branch.

> Create a good SolrClient for SolrCloud paving the way for async requests, 
> HTTP2, multiplexing, and the latest & greatest Jetty features.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[jira] [Commented] (SOLR-12293) Updates need to use their own connection pool to maintain connection reuse and prevent spurious recoveries.

2018-05-06 Thread ASF subversion and git services (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-12293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16465349#comment-16465349 ]

ASF subversion and git services commented on SOLR-12293:


Commit b72af046c5bd04eec4e84700a2ee20ab5a833e39 in lucene-solr's branch 
refs/heads/branch_7x from [~mark.mil...@oblivion.ch]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b72af04 ]

SOLR-12293: Updates need to use their own connection pool to maintain 
connection reuse and prevent spurious recoveries.


> Updates need to use their own connection pool to maintain connection reuse 
> and prevent spurious recoveries.
> ---
>
> Key: SOLR-12293
> URL: https://issues.apache.org/jira/browse/SOLR-12293
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: SOLR-12293.patch
>
>
> Currently the pool is shared too broadly - for example, during replication we 
> don't guarantee we read the full streams when downloading index files, and we 
> don't necessarily want to: emptying the stream for a huge file after an error 
> or abort is too expensive. We can't have these connections pollute the update 
> connection pool.






[JENKINS] Lucene-Solr-SmokeRelease-7.3 - Build # 22 - Failure

2018-05-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.3/22/

No tests ran.

Build Log:
[...truncated 30126 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist
 [copy] Copying 491 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 230 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] Java 9 JAVA_HOME=/home/jenkins/tools/java/latest1.9
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.3 MB in 0.01 sec (21.0 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.3.1-src.tgz...
   [smoker] 32.0 MB in 0.04 sec (872.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.1.tgz...
   [smoker] 73.4 MB in 0.08 sec (865.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.3.1.zip...
   [smoker] 83.9 MB in 0.11 sec (760.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.3.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6300 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6300 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.1.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6300 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] test demo with 9...
   [smoker]   got 6300 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.3.1-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker] run tests w/ Java 9 and testArgs='-Dtests.badapples=false 
-Dtests.slow=false'...
   [smoker] test demo with 9...
   [smoker]   got 217 hits for query "lucene"
   [smoker] checkindex with 9...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.3 MB in 0.01 sec (28.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.3.1-src.tgz...
   [smoker] 55.5 MB in 0.64 sec (86.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.1.tgz...
   [smoker] 154.6 MB in 0.48 sec (319.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.3.1.zip...
   [smoker] 155.6 MB in 1.08 sec (143.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.3.1.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.3.1.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.1/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.1/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.1-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.3/lucene/build/smokeTestRelease/tmp/unpack/solr-7.3.1-java8
   [smoker] *** 

[jira] [Resolved] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Mark Miller (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller resolved SOLR-12290.

   Resolution: Fixed
Fix Version/s: (was: 7.4)

I'm not going to backport this to 7x; I've had enough silliness on this issue. 
If anyone actually understood it, this wouldn't have turned into a bunch of 
silly statements and threats.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort or sendError without ruining the connection. These 
> should be options of very last resort (requiring a blood sacrifice) or for 
> when shutting down.
>  
>  
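
For anyone tracking this down in code, a generic sketch of the close-shield idea described above (not Solr's actual classes): swallow close() on the container-managed stream and drain whatever the normal logic left unread.
{code:java}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CloseShieldInputStream extends FilterInputStream {
  public CloseShieldInputStream(InputStream in) {
    super(in);
  }

  @Override
  public void close() throws IOException {
    // Do NOT close the container-managed stream; instead eat any unread bytes
    // so the connection can be reused for the next request.
    byte[] buf = new byte[8192];
    while (in.read(buf) != -1) {
      // discard
    }
    // Deliberately no in.close(): the servlet container manages the lifecycle.
  }
}
{code}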






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1845 - Still Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1845/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

9 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([7435A211BBE19805:278CE0A159F00DFF]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testTrigger

Error Message:
number of ops expected:<2> but was:<1>

Stack Trace:
java.lang.AssertionError: number of ops expected:<2> but was:<1>
at 

[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465335#comment-16465335
 ] 

Mark Miller commented on SOLR-12290:


bq. See my comment. But some things have to be said. Introducing bad 
programming practices, just because of a special case or bug in some component 
(Jetty), is not a good idea. I stop talking here; all is said.

We didn't introduce bad coding practices and your opinion on how Jetty handles 
this has little to do with it. The original patch didn't handle ContentStream 
stuff right when working around assert errors. You are supposed to point out 
bugs. Being an impatient jerk is an unnecessary part of the process.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  
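
A minimal sketch of the "read fully, never close" rule described above, using 
a hypothetical helper name (Solr's actual code lives in its dispatch and 
close-shield layer):
{code:java}
import java.io.IOException;
import java.io.InputStream;

// Hypothetical helper illustrating the rule: drain whatever our normal logic
// did not read, so the container can reuse the connection; the stream itself
// is never closed here.
final class StreamDrainUtil {
  private StreamDrainUtil() {}

  static void consumeFully(InputStream in) throws IOException {
    byte[] buf = new byte[8192];
    while (in.read(buf) != -1) {
      // discard: we only care that the stream reaches EOF
    }
  }
}
{code}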



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465334#comment-16465334
 ] 

Mark Miller commented on SOLR-12290:


This wasn't about coding practices - it's simply about the ContentStream API. 
It can return a stream from any source, so streams that come from that API 
must be closed. We are still not closing servlet streams where we don't have 
to; in some cases we have closed them, because the user of the API can't 
discern where the stream came from. This wasn't a coding practice issue, it 
was a bug.
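
A sketch of the consumer-side rule for the ContentStream API, assuming the 
standard SolrQueryRequest accessors (the processing loop is a placeholder):
{code:java}
import java.io.IOException;
import java.io.InputStream;
import org.apache.solr.common.util.ContentStream;
import org.apache.solr.request.SolrQueryRequest;

// Sketch: a ContentStream may be backed by a file, a byte array, or the
// servlet request, so the code that obtains the stream must close it, here
// via try-with-resources.
final class ContentStreamConsumer {
  static void consumeAll(SolrQueryRequest req) throws IOException {
    Iterable<ContentStream> streams = req.getContentStreams();
    if (streams == null) {
      return;
    }
    for (ContentStream cs : streams) {
      try (InputStream in = cs.getStream()) {
        byte[] buf = new byte[8192];
        while (in.read(buf) != -1) {
          // process (or discard) the bytes
        }
      }
    }
  }
}
{code}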

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465331#comment-16465331
 ] 

David Smiley commented on SOLR-12290:
-

Last commit to partially revert looks good to me.  I agree with Uwe's sentiment 
about standard coding practices – if you obtain the stream, close the stream.  
I should have thought of this earlier in my review.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12313) TestInjection#waitForInSyncWithLeader needs improvement.

2018-05-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465327#comment-16465327
 ] 

Mark Miller commented on SOLR-12313:


bq. Okay, I see, the uncommitted changes check

Hmm, I think I still see it cause long delays sometimes when it should not. I 
won't do anything here right away, but something needs to change.

> TestInjection#waitForInSyncWithLeader needs improvement.
> 
>
> Key: SOLR-12313
> URL: https://issues.apache.org/jira/browse/SOLR-12313
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>
> This really should have some doc for why it would be used.
> I also think it causes BasicDistributedZkTest to take forever sometimes, and 
> perhaps other tests as well?
> I think checking for uncommitted data is probably a race condition and should 
> be removed.
> Checking index versions should follow the rules that replication does - if 
> the slave is higher than the leader, it's in sync; being equal is not 
> required. If a test expects exact equality, it should be a specific test that 
> fails. This just introduces massive delays.
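
A sketch of the comparison rule being proposed, with hypothetical names (the 
real check lives in TestInjection#waitForInSyncWithLeader):
{code:java}
// Hypothetical sketch: a replica counts as in sync once its index version is
// at least the leader's; requiring exact equality just stalls the wait loop.
static boolean inSyncWithLeader(long leaderVersion, long replicaVersion) {
  return replicaVersion >= leaderVersion;
}
{code}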



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-05-06 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465326#comment-16465326
 ] 

David Smiley commented on SOLR-8207:


{quote}RED for >80% full disk or >80% CPU, and orange for >50%. Wdyt?
{quote}
+1

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12303) Subquery Doc transform doesn't inherit path from original request

2018-05-06 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465301#comment-16465301
 ] 

Lucene/Solr QA commented on SOLR-12303:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | {color:green}  4m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | {color:green}  4m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | {color:green}  4m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate ref guide {color} | {color:green}  4m 41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}117m  0s{color} | {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}135m 46s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.IndexSizeTriggerTest |
|   | solr.cloud.autoscaling.SearchRateTriggerIntegrationTest |
|   | solr.cloud.TestRandomFlRTGCloud |
|   | solr.response.transform.TestSubQueryTransformerDistrib |
|   | solr.cloud.autoscaling.sim.TestLargeCluster |
|   | solr.cloud.autoscaling.SearchRateTriggerTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12303 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12922182/SOLR-12303.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  validatesourcepatterns  validaterefguide  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | /home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh |
| git revision | master / 5fc7251 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| unit | https://builds.apache.org/job/PreCommit-SOLR-Build/83/artifact/out/patch-unit-solr_core.txt |
|  Test Results | https://builds.apache.org/job/PreCommit-SOLR-Build/83/testReport/ |
| modules | C: solr solr/core solr/solr-ref-guide U: solr |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/83/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Subquery Doc transform doesn't inherit path from original request
> -
>
> Key: SOLR-12303
> URL: https://issues.apache.org/jira/browse/SOLR-12303
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, 
> SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch
>
>
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc=AND=json={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})=false=uniqueId=score=_children_:[subquery]=uniqueId=false=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}=1
> {code}
> For this request, even though the path is */search*, the subquery request 
> would be fired on handler */select*.
> Subquery request should inherit the parent request handler and there should 
> be an option to override this behavior. (option to override is already 
> available by specifying *qt*)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_162) - Build # 21967 - Still Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21967/
Java: 32bit/jdk1.8.0_162 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
    at __randomizedtesting.SeedInfo.seed([83789D35CC9EAEF:6BFCBF51C50699C2]:0)
    at org.junit.Assert.fail(Assert.java:93)
    at org.junit.Assert.failNotEquals(Assert.java:647)
    at org.junit.Assert.assertEquals(Assert.java:128)
    at org.junit.Assert.assertEquals(Assert.java:472)
    at org.junit.Assert.assertEquals(Assert.java:456)
    at org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14476 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
   

[jira] [Assigned] (SOLR-7767) Zookeeper Ensemble Admin UI

2018-05-06 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-7767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-7767:
-

Assignee: Jan Høydahl

> Zookeeper Ensemble Admin UI
> ---
>
> Key: SOLR-7767
> URL: https://issues.apache.org/jira/browse/SOLR-7767
> Project: Solr
>  Issue Type: New Feature
>  Components: Admin UI, SolrCloud
>Reporter: Aniket Khare
>Assignee: Jan Høydahl
>Priority: Major
>
> For SolrCloud mode, can we have the functionality to display the live nodes 
> of the ZooKeeper ensemble, so that users can easily tell if any ZooKeeper 
> instance is down or having any other issue.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk1.8.0_144) - Build # 579 - Still Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/579/
Java: 64bit/jdk1.8.0_144 -XX:-UseCompressedOops -XX:+UseG1GC

19 tests failed.
FAILED:  
org.apache.solr.handler.component.SpellCheckComponentTest.testExtendedResultsCount

Error Message:
Directory (MMapDirectory@C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.SpellCheckComponentTest_1E051B6E7AAA5B14-002\init-core-data-001\spellchecker1 lockFactory=org.apache.lucene.store.NativeFSLockFactory@409a8c97) still has pending deleted files; cannot initialize IndexWriter

Stack Trace:
java.lang.IllegalArgumentException: Directory (MMapDirectory@C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.handler.component.SpellCheckComponentTest_1E051B6E7AAA5B14-002\init-core-data-001\spellchecker1 lockFactory=org.apache.lucene.store.NativeFSLockFactory@409a8c97) still has pending deleted files; cannot initialize IndexWriter
    at __randomizedtesting.SeedInfo.seed([1E051B6E7AAA5B14:203134D75FB7CEF3]:0)
    at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:699)
    at org.apache.lucene.search.spell.SpellChecker.clearIndex(SpellChecker.java:455)
    at org.apache.solr.spelling.IndexBasedSpellChecker.build(IndexBasedSpellChecker.java:87)
    at org.apache.solr.handler.component.SpellCheckComponent.prepare(SpellCheckComponent.java:128)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2510)
    at org.apache.solr.util.TestHarness.query(TestHarness.java:337)
    at org.apache.solr.util.TestHarness.query(TestHarness.java:319)
    at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:982)
    at org.apache.solr.SolrTestCaseJ4.assertJQ(SolrTestCaseJ4.java:951)
    at org.apache.solr.handler.component.SpellCheckComponentTest.testExtendedResultsCount(SpellCheckComponentTest.java:135)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
    at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
    at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
    at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
    at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
    at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
    at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
    at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
    at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

[jira] [Comment Edited] (SOLR-8207) Modernise cloud tab on Admin UI

2018-05-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465281#comment-16465281
 ] 

Jan Høydahl edited comment on SOLR-8207 at 5/6/18 8:19 PM:
---

{quote}I'm not sure how the "last 5 min stats" was calculated here in the patch
{quote}
The patch prints "Minor: 0.17/5m", which is "per 5 min", not *last* 5 min. But 
since this is measured since JVM start, which could be months ago, this value 
is not very useful at this time.
{quote}it's pretty simple to create a {{Meter}} or a {{Timer}} for each of 
these GC beans and register them as new metrics...
{quote}
Great! My proposal is then to remove the GC column for now and instead create a 
new Jira (SOLR-12318) which adds some new 1-, 5-, 15-min avg metrics for use in 
a new GC column.


was (Author: janhoy):
{quote}I'm not sure how the "last 5 min stats" was calculated here in the patch
{quote}
The patch prints "Minor: 0.17/5m", which is "per 5 min", not *last* 5 min. But 
since this is measured since JVM start, which could be months ago, this value 
is not very useful at this time.
{quote}it's pretty simple to create a {{Meter}} or a {{Timer}} for each of 
these GC beans and register them as new metrics...
{quote}
Great! My proposal is then to remove the GC column for now and instead create a 
new Jira which adds some new 1-, 5-, 15-min avg metrics for use in a new GC 
column.

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12318) Add new 1, 5 and 15min average GC metrics

2018-05-06 Thread JIRA
Jan Høydahl created SOLR-12318:
--

 Summary: Add new 1, 5 and 15min average GC metrics
 Key: SOLR-12318
 URL: https://issues.apache.org/jira/browse/SOLR-12318
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Reporter: Jan Høydahl


Spinoff from SOLR-8207.

Add new 1-, 5-, and 15-min average GC metrics, then add a GC column to the 
Cluster/Nodes view which shows an average of recent GC activity (not a true 
time-series graph, but an average).
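
A sketch of one way this could be done with the Dropwizard metrics library 
Solr already uses; class and metric names here are illustrative, not the 
final ones:
{code:java}
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;
import com.codahale.metrics.Meter;
import com.codahale.metrics.MetricRegistry;

// Sketch: mark a Meter with the delta of each GC bean's collection count so
// that its 1/5/15-minute moving averages reflect recent activity rather than
// totals accumulated since JVM start.
final class GcRateMetrics {
  private final List<GarbageCollectorMXBean> beans =
      ManagementFactory.getGarbageCollectorMXBeans();
  private final long[] lastCounts = new long[beans.size()];
  private final Meter[] meters = new Meter[beans.size()];

  GcRateMetrics(MetricRegistry registry) {
    for (int i = 0; i < beans.size(); i++) {
      lastCounts[i] = beans.get(i).getCollectionCount();
      meters[i] = registry.meter("gc." + beans.get(i).getName() + ".collectionRate");
    }
  }

  // Call periodically, e.g. from a scheduled task, to feed the meters.
  void poll() {
    for (int i = 0; i < beans.size(); i++) {
      long count = beans.get(i).getCollectionCount();
      if (count > lastCounts[i]) { // getCollectionCount() may return -1
        meters[i].mark(count - lastCounts[i]);
      }
      lastCounts[i] = Math.max(count, lastCounts[i]);
    }
  }
}
{code}
The Meter's getOneMinuteRate()/getFiveMinuteRate()/getFifteenMinuteRate() 
values would then back the proposed GC column.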



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-05-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465281#comment-16465281
 ] 

Jan Høydahl commented on SOLR-8207:
---

{quote}I'm not sure how the "last 5 min stats" was calculated here in the patch
{quote}
The patch prints "Minor: 0.17/5m", which is "per 5 min", not *last* 5 min. But 
since this is measured since JVM start, which could be months ago, this value 
is not very useful at this time.
{quote}it's pretty simple to create a {{Meter}} or a {{Timer}} for each of 
these GC beans and register them as new metrics...
{quote}
Great! My proposal is then to remove the GC column for now and instead create a 
new Jira which adds some new 1-, 5-, 15-min avg metrics for use in a new GC 
column.

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465280#comment-16465280
 ] 

Uwe Schindler commented on SOLR-12290:
--

See my comment. But some things have to be said. Introducing bad programming 
practices, just because of a special case or bug in some component (Jetty), is 
not a good idea. I stop talking here; all is said.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-05-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465278#comment-16465278
 ] 

Jan Høydahl commented on SOLR-8207:
---

Other plans on my list
 * Limit the size of metrics fetched, i.e. ask for only those we need; this 
will be important for clusters with many nodes (see the SolrJ sketch after 
this list)
 ** Here is a suggested prefix filter that seems to work well: 
[http://34.242.41.243:9000/solr/admin/metrics?prefix=CONTAINER.fs,org.eclipse.jetty.server.handler.DefaultHandler.get-requests,gc.,INDEX.sizeInBytes,SEARCHER.searcher.numDocs,SEARCHER.searcher.deletedDocs,SEARCHER.searcher.warmupTime]
 
 * Introduce paging and fetch info for at most 10 servers/nodes at a time
 * Add a filter search box to filter on node name (could also filter on 
collection)
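
A SolrJ sketch of the prefix-filtered fetch (the node URL and prefix list are 
examples only):
{code:java}
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

// Sketch: ask the metrics API for only the keys the Nodes tab needs instead
// of pulling the full metrics payload from every node.
public class FilteredMetricsFetch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("prefix",
          "CONTAINER.fs,SEARCHER.searcher.numDocs,SEARCHER.searcher.deletedDocs");
      NamedList<Object> metrics = client.request(
          new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/metrics", params));
      System.out.println(metrics);
    }
  }
}
{code}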

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465277#comment-16465277
 ] 

Mark Miller commented on SOLR-12290:


bq. Oh thanks, that was my patch 

You're welcome. If you insist on behaving like Robert, I'm leaving Solr like I 
left Lucene. There is no reason for this garbage.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465276#comment-16465276
 ] 

Uwe Schindler edited comment on SOLR-12290 at 5/6/18 7:50 PM:
--

Sorry, I am a bit worried today. It's too hot here, and it was the only day 
where I fixed a serious security issue caused by me 5 years ago, and I was 
really annoyed about tests failing all day.

You said "there is no need to close streams if closing is a no-op". I still 
argue that this is the wrong way to do it. If stuff like Jetty needs special 
handling on closing, it should be done top-level. If downstream code gets a 
stream, it should do try-with-resources.

I am happy with your code in SolrDispatchFilter and the wrapper around the 
ServletXxxStreams. But I don't think we should make users forcefully remove 
try-with-resources blocks (and cause a warning in Eclipse), just because some 
specific implementation of a stream needs special handling. So I'd put all the 
special-casing only in SolrDispatchFilter, and whenever a user gets an input 
stream/output stream further down the code it MUST close it. That's just a 
fact of good programming practice. A method gets a stream, does something with 
it, and closes it. Solr (especially in tests) is already full of missing 
closes, so we should not add more. And that is why I am arguing so heavily. It 
was not against you; I was just frustrated by the sometimes horrible code 
quality of Solr and its tests, and by a commit that made some parts go against 
all programming standards (streams have to be closed after usage). That is one 
reason I try to avoid fixing bugs in Solr, unless they were caused by me or 
have something to do with XML handling (because that's one of my beloved parts 
of code - I love XML).

I can confirm the tests now pass on Windows. So file leaks with uploaded files 
or other types of content streams are solved. Thanks, but I have a bad feeling 
now about one more horrible anti-feature of solr.


was (Author: thetaphi):
Sorry, I am a bit worried today. It's too hot here, and it was the only day 
where I fixed a serious security issue caused by me 5 years ago, and I was 
really annoyed about tests failing all day.

You said "there is no need to close streams if closing is a no-op". I still 
argue that this is the wrong way to do it. If stuff like Jetty needs special 
handling on closing, it should be done top-level. If a user gets a

I am happy with your code in SolrDispatchFilter and the wrapper around the 
ServletXxxStreams. But I don't think we should make users forcefully remove 
try-with-resources blocks (and cause a warning in Eclipse), just because some 
specific implementation of a stream needs special handling. So I'd put all the 
special-casing only in SolrDispatchFilter, and whenever a user gets an input 
stream/output stream further down the code it MUST close it. That's just a 
fact of good programming practice. A method gets a stream, does something with 
it, and closes it. Solr (especially in tests) is already full of missing 
closes, so we should not add more. And that is why I am arguing so heavily. It 
was not against you; I was just frustrated by the sometimes horrible code 
quality of Solr and its tests, and by a commit that made some parts go against 
all programming standards (streams have to be closed after usage). That is one 
reason I try to avoid fixing bugs in Solr, unless they were caused by me or 
have something to do with XML handling (because that's one of my beloved parts 
of code - I love XML).

I can confirm the tests now pass on Windows. So file leaks with uploaded files 
or other types of content streams are solved. Thanks, but I have a bad feeling 
now about one more horrible anti-feature of solr.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should 

[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465276#comment-16465276
 ] 

Uwe Schindler commented on SOLR-12290:
--

Sorry, I am a bit worried today. It's too hot here, and it was the only day 
where I fixed a serious security issue caused by me 5 years ago, and I was 
really annoyed about tests failing all day.

You said "there is no need to close streams if closing is a no-op". I still 
argue that this is the wrong way to do it. If stuff like Jetty needs special 
handling on closing, it should be done top-level. If a user gets a

I am happy with your code in SolrDispatchFilter and the wrapper around the 
ServletXxxStreams. But I don't think we should make users forcefully remove 
try-with-resources blocks (and cause a warning in Eclipse), just because some 
specific implementation of a stream needs special handling. So I'd put all the 
special-casing only in SolrDispatchFilter, and whenever a user gets an input 
stream/output stream further down the code it MUST close it. That's just a 
fact of good programming practice. A method gets a stream, does something with 
it, and closes it. Solr (especially in tests) is already full of missing 
closes, so we should not add more. And that is why I am arguing so heavily. It 
was not against you; I was just frustrated by the sometimes horrible code 
quality of Solr and its tests, and by a commit that made some parts go against 
all programming standards (streams have to be closed after usage). That is one 
reason I try to avoid fixing bugs in Solr, unless they were caused by me or 
have something to do with XML handling (because that's one of my beloved parts 
of code - I love XML).

I can confirm the tests now pass on Windows. So file leaks with uploaded files 
or other types of content streams are solved. Thanks, but I have a bad feeling 
now about one more horrible anti-feature of solr.
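
A minimal sketch of the wrapper pattern discussed above, with a hypothetical 
class name (Solr's actual shield lives in the SolrDispatchFilter close-shield 
code):
{code:java}
import java.io.FilterInputStream;
import java.io.InputStream;

// Sketch: installed once at the dispatch-filter level, this turns close()
// into a no-op so downstream code can keep using try-with-resources without
// ever closing the container's underlying stream.
final class NoOpCloseInputStream extends FilterInputStream {
  NoOpCloseInputStream(InputStream delegate) {
    super(delegate);
  }

  @Override
  public void close() {
    // intentionally a no-op: the servlet container manages the real stream
  }
}
{code}
Downstream code can then write try (InputStream in = ...) as usual; the close 
lands on the shield rather than on Jetty's stream.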

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465271#comment-16465271
 ] 

Mark Miller commented on SOLR-12290:


You definitely bring a nasty Lucene vibe into the Solr community sometimes. 
When did I say I wasn't willing to fix it? I said the fix was easy and I have 
to run tests. Cool down, man.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465269#comment-16465269
 ] 

Uwe Schindler commented on SOLR-12290:
--

Oh thanks, that was my patch :-)

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12209) add Paging Streaming Expression

2018-05-06 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465270#comment-16465270
 ] 

Lucene/Solr QA commented on SOLR-12209:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 45s{color} 
| {color:red} solrj in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m  1s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.client.solrj.io.TestLang |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12209 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922165/SOLR-12209.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 0922e58 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/82/artifact/out/patch-unit-solr_solrj.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/82/testReport/ |
| modules | C: solr/solrj U: solr/solrj |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/82/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> add Paging Streaming Expression
> ---
>
> Key: SOLR-12209
> URL: https://issues.apache.org/jira/browse/SOLR-12209
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 7.3
>Reporter: mosh
>Priority: Major
> Attachments: 0001-added-skip-and-limit-stream-decorators.patch, 
> SOLR-12209.patch
>
>
> Currently the closest streaming expression that allows some sort of 
> pagination is top.
> I propose we add a new streaming expression, based on the RankedStream 
> class, to add an offset to the stream. Currently this can only be done in 
> code, by reading the stream until the desired offset is reached.
> The new expression will be used as such:
> {{paging(rows=3, search(collection1, q="*:*", qt="/export", 
> fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", 
> start=100)}}
> This will offset the returned stream by 100 documents.
>  
> [~joel.bernstein] what do you think?
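
For illustration, a minimal sketch of what such an offset currently requires 
in code, written against the existing TupleStream API; the method and 
parameter names are hypothetical, not taken from the attached patches:
{code:java}
import java.io.IOException;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.TupleStream;

// Skip the first `start` tuples of an inner stream, the way the proposed
// paging decorator would, then return the first tuple after the offset.
Tuple readWithOffset(TupleStream inner, int start) throws IOException {
  inner.open();
  try {
    for (int i = 0; i < start; i++) {
      Tuple t = inner.read();
      if (t.EOF) {
        return t; // fewer tuples than the requested offset
      }
    }
    return inner.read();
  } finally {
    inner.close();
  }
}
{code}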



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8297) Add IW#tryUpdateDocValues(Reader, int, Fields...)

2018-05-06 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465268#comment-16465268
 ] 

Michael McCandless commented on LUCENE-8297:


I will look!

> Add IW#tryUpdateDocValues(Reader, int, Fields...)
> -
>
> Key: LUCENE-8297
> URL: https://issues.apache.org/jira/browse/LUCENE-8297
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8297.patch
>
>
> IndexWriter can update doc values for a specific term but this might
> affect all documents containing the term. With tryUpdateDocValues
> users can update doc-values fields for individual documents. This allows
> for instance to soft-delete individual documents.
> The new method shares most of its code with tryDeleteDocuments.
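
A minimal sketch of the proposed usage; the method name, signature, and return 
convention are assumed from this issue's title and from tryDeleteDocuments, 
and may differ in the final patch:
{code:java}
import java.io.IOException;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;

// Soft-delete a single document by updating a doc-values field for just that
// docID; other documents sharing the same terms are left untouched. A return
// value of -1 is assumed to mean the attempt failed (e.g. the segment was
// merged away) and should be retried with a fresh reader.
long softDelete(IndexWriter writer, IndexReader reader, int docID) throws IOException {
  return writer.tryUpdateDocValues(reader, docID,
      new NumericDocValuesField("__soft_delete", 1L));
}
{code}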



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465267#comment-16465267
 ] 

ASF subversion and git services commented on SOLR-12290:


Commit 5fc725154001c6283315802e1a2193d51d00f9aa in lucene-solr's branch 
refs/heads/master from markrmiller
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5fc7251 ]

SOLR-12290: We must close ContentStreams because we don't know the source of 
the inputstream - use a CloseShield to prevent tripping our close assert in 
SolrDispatchFilter.


> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-05-06 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465265#comment-16465265
 ] 

Andrzej Bialecki  commented on SOLR-8207:
-

bq. Those were the only numbers I found in the current metrics API. I agree 
that if it is possible to get, say, last-15-minutes numbers, that would be much 
better.

This is the information that we get directly from {{GarbageCollectorMXBean}} 
and it consists of a counter (how many times the particular GC algo has run 
since the JVM was started) and the cumulative "elapsed time" for the algo. I'm 
not sure how the "last 5 min stats" were calculated here in the patch. No other 
information is currently collected by the metrics API - these are just 
momentary readings (gauges) from the MXBeans that are presented via the metrics API.

Having said that, it's pretty simple to create a {{Meter}} or a {{Timer}} for 
each of these GC beans and register them as new metrics - then we will get 1-, 
5- and 15-min averages for each GC, as well as a histogram of timings (which 
can be approximated from deltas in cumulative time).
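
A minimal sketch of that idea, using the Dropwizard Metrics classes Solr 
already ships; the metric names and the explicit polling are assumptions for 
illustration, not the actual metrics wiring:
{code:java}
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;
import com.codahale.metrics.MetricRegistry;

// Periodically poll each GarbageCollectorMXBean and feed the delta of its
// cumulative collection count into a Meter; the Meter then exposes 1-, 5-
// and 15-minute rates for that GC algorithm.
public class GcMeters {
  private final MetricRegistry registry = new MetricRegistry();
  private final Map<String, Long> lastCount = new HashMap<>();

  public void poll() {
    for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
      long count = gc.getCollectionCount(); // cumulative since JVM start
      long prev = lastCount.getOrDefault(gc.getName(), 0L);
      registry.meter("gc." + gc.getName() + ".count").mark(count - prev);
      lastCount.put(gc.getName(), count);
    }
  }
}
{code}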

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have leaders on them", etc.
> Presumably, if we have APIs to support it, we might have a "decommission 
> node" option, that would ensure that no replicas on this node are leaders, 
> and then remove all replicas from the node, ready for it to be removed from 
> the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2519 - Unstable

2018-05-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2519/

3 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/62)={   
"replicationFactor":"2",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"2",   
"autoAddReplicas":"false",   "nrtReplicas":"2",   "tlogReplicas":"0",   
"autoCreated":"true",   "shards":{ "shard2":{   "replicas":{ 
"core_node3":{   
"core":"testSplitIntegration_collection_shard2_replica_n3",   
"leader":"true",   "SEARCHER.searcher.maxDoc":11,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10003_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":11}, "core_node4":{ 
  "core":"testSplitIntegration_collection_shard2_replica_n4",   
"SEARCHER.searcher.maxDoc":11,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10004_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":11}},   "range":"0-7fff",   
"state":"active"}, "shard1":{   "stateTimestamp":"1525721545257979550", 
  "replicas":{ "core_node1":{   
"core":"testSplitIntegration_collection_shard1_replica_n1",   
"leader":"true",   "SEARCHER.searcher.maxDoc":14,   
"SEARCHER.searcher.deletedDocs":0,   "INDEX.sizeInBytes":1,   
"node_name":"127.0.0.1:10003_solr",   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":14}, "core_node2":{ 
  "core":"testSplitIntegration_collection_shard1_replica_n2",   
"SEARCHER.searcher.maxDoc":14,   "SEARCHER.searcher.deletedDocs":0, 
  "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10004_solr",  
 "state":"active",   "type":"NRT",   
"SEARCHER.searcher.numDocs":14}},   "range":"8000-",   
"state":"inactive"}, "shard1_1":{   "parent":"shard1",   
"stateTimestamp":"1525721545466867100",   "range":"c000-",  
 "state":"active",   "replicas":{ "core_node10":{   
"leader":"true",   
"core":"testSplitIntegration_collection_shard1_1_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10004_solr",   
"base_url":"http://127.0.0.1:10004/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node9":{   
"core":"testSplitIntegration_collection_shard1_1_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}}}, "shard1_0":{  
 "parent":"shard1",   "stateTimestamp":"1525721545466525950",   
"range":"8000-bfff",   "state":"active",   "replicas":{ 
"core_node7":{   "leader":"true",   
"core":"testSplitIntegration_collection_shard1_0_replica0",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10003_solr",   
"base_url":"http://127.0.0.1:10003/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}, 
"core_node8":{   
"core":"testSplitIntegration_collection_shard1_0_replica1",   
"SEARCHER.searcher.maxDoc":7,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":1,   "node_name":"127.0.0.1:10004_solr",   
"base_url":"http://127.0.0.1:10004/solr;,   "state":"active",   
"type":"NRT",   "SEARCHER.searcher.numDocs":7}

Stack Trace:
java.util.concurrent.TimeoutException: last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/62)={
  "replicationFactor":"2",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "shards":{
"shard2":{
  "replicas":{
"core_node3":{
  "core":"testSplitIntegration_collection_shard2_replica_n3",
  "leader":"true",
  "SEARCHER.searcher.maxDoc":11,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":1,
  "node_name":"127.0.0.1:10003_solr",
  "state":"active",
  "type":"NRT",
  

[jira] [Comment Edited] (SOLR-11453) Create separate logger for slow requests

2018-05-06 Thread Ralph Goers (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465259#comment-16465259
 ] 

Ralph Goers edited comment on SOLR-11453 at 5/6/18 7:17 PM:


The Log4j ScriptManager makes the Log4j Configuration and StatusLogger 
available to every script. Other variables are added to the bindings depending 
on when the script will run. In the case of ScriptAppenderSelector only the 
default bindings are available (there is no logEvent when that script is run - 
it happens during logging configuration). If you were looking at the example at 
http://logging.apache.org/log4j/2.x/manual/configuration.html#Scripts, it is 
determining which Pattern to use for a particular log event, so that script is 
called for every log event. You can see what parameters a ScriptFilter is 
passed by looking at 
https://github.com/apache/logging-log4j2/blob/master/log4j-core/src/main/java/org/apache/logging/log4j/core/filter/ScriptFilter.java
 or http://logging.apache.org/log4j/2.x/manual/filters.html#Script.

As far as the best script language goes, the JavaScript engine is included in 
the JDK so it is always available, but it doesn't compile to bytecode, so its 
performance won't be the best. For something that will only execute once at 
configuration time that probably doesn't matter, but for something that will 
execute on every log event I would use a scripting language that compiles, 
such as Groovy. 

To get a system property you would just call System.getProperty("property") in 
whatever syntax the script language requires. 


was (Author: ralph.go...@dslextreme.com):
The Log4j ScriptManager makes the Log4j Configuration and StatusLogger 
available to every script. Other variables are added to the bindings depending 
on when the script will run. In the case of ScriptAppenderSelector only the 
default bindings are available (there is no logEvent when that script is run - 
it happens during logging configuration). If you were looking at the example at 
http://logging.apache.org/log4j/2.x/manual/configuration.html#Scripts, it is 
determining which Pattern to use for a particular log event, so that script is 
called for every log event. You can see what parameters a ScriptFilter is 
passed by looking at 
https://github.com/apache/logging-log4j2/blob/master/log4j-core/src/main/java/org/apache/logging/log4j/core/filter/ScriptFilter.java
 or http://logging.apache.org/log4j/2.x/manual/filters.html#Script.

As far as the best script language goes, the JavaScript engine is included in 
the JDK so it is always available, but it doesn't compile to bytecode, so its 
performance won't be the best. For something that will only execute once at 
configuration time that probably doesn't matter, but for something that will 
execute on every log event I would use a scripting language that compiles, 
such as Groovy. 


To get a system property you would just call System.getProperty("property") in 
whatever syntax the script language requires. 

> Create separate logger for slow requests
> 
>
> Key: SOLR-11453
> URL: https://issues.apache.org/jira/browse/SOLR-11453
> Project: Solr
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 7.0.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, 
> SOLR-11453.patch, slowlog-informational.patch
>
>
> There is some desire on the mailing list to create a separate logfile for 
> slow queries.  Currently it is not possible to do this cleanly, because the 
> WARN level used by slow query logging within the SolrCore class is also used 
> for other events that SolrCore can log.  Those messages would be out of place 
> in a slow query log.  They should typically stay in the main Solr logfile.
> I propose creating a custom logger for slow queries, similar to what has been 
> set up for request logging.  In the SolrCore class, which is 
> org.apache.solr.core.SolrCore, there is a special logger at 
> org.apache.solr.core.SolrCore.Request.  This is not a real class, just a 
> logger which makes it possible to handle those log messages differently than 
> the rest of Solr's logging.  I propose setting up another custom logger 
> within SolrCore which could be org.apache.solr.core.SolrCore.SlowRequest.
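
A minimal sketch of the proposed logger, assuming SLF4J as used elsewhere in 
Solr; the threshold handling and message format are illustrative, not taken 
from the attached patches:
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// A dedicated logger under the proposed name, so the logging config can
// route slow-query events to their own file.
public class SlowRequestLogging {
  private static final Logger slowLog =
      LoggerFactory.getLogger("org.apache.solr.core.SolrCore.SlowRequest");

  static void maybeLogSlow(long elapsedMillis, long thresholdMillis, String requestSummary) {
    if (thresholdMillis >= 0 && elapsedMillis >= thresholdMillis) {
      slowLog.warn("slow: [{} ms] {}", elapsedMillis, requestSummary);
    }
  }
}
{code}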



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11453) Create separate logger for slow requests

2018-05-06 Thread Ralph Goers (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465259#comment-16465259
 ] 

Ralph Goers commented on SOLR-11453:


The Log4j ScriptManager makes the Log4j Configuration and StatusLogger 
available to every script. Other variables are added to the bindings depending 
on when the script will run. In the case of ScriptAppenderSelector only the 
default bindings are available (there is no logEvent when that script is run - 
it happens during logging configuration). If you were looking at the example at 
http://logging.apache.org/log4j/2.x/manual/configuration.html#Scripts, it is 
determining which Pattern to use for a particular log event, so that script is 
called for every log event. You can see what parameters a ScriptFilter is 
passed by looking at 
https://github.com/apache/logging-log4j2/blob/master/log4j-core/src/main/java/org/apache/logging/log4j/core/filter/ScriptFilter.java
 or http://logging.apache.org/log4j/2.x/manual/filters.html#Script.

As far as the best script language goes, the JavaScript engine is included in 
the JDK so it is always available, but it doesn't compile to bytecode, so its 
performance won't be the best. For something that will only execute once at 
configuration time that probably doesn't matter, but for something that will 
execute on every log event I would use a scripting language that compiles, 
such as Groovy. 


To get a system property you would just call System.getProperty("property") in 
whatever syntax the script language requires. 
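
For illustration, the kind of decision such a selector script makes, expressed 
as plain Java (in a Log4j configuration the script body would typically be 
Groovy or JavaScript); the property and appender names are hypothetical:
{code:java}
// Pick an appender name based on a system property, the way a
// ScriptAppenderSelector script would at configuration time.
static String selectAppender() {
  if (Boolean.parseBoolean(System.getProperty("solr.log.slowrequests", "false"))) {
    return "SlowRequestFile"; // hypothetical appender name
  }
  return "MainFile"; // hypothetical appender name
}
{code}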

> Create separate logger for slow requests
> 
>
> Key: SOLR-11453
> URL: https://issues.apache.org/jira/browse/SOLR-11453
> Project: Solr
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 7.0.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, 
> SOLR-11453.patch, slowlog-informational.patch
>
>
> There is some desire on the mailing list to create a separate logfile for 
> slow queries.  Currently it is not possible to do this cleanly, because the 
> WARN level used by slow query logging within the SolrCore class is also used 
> for other events that SolrCore can log.  Those messages would be out of place 
> in a slow query log.  They should typically stay in the main Solr logfile.
> I propose creating a custom logger for slow queries, similar to what has been 
> set up for request logging.  In the SolrCore class, which is 
> org.apache.solr.core.SolrCore, there is a special logger at 
> org.apache.solr.core.SolrCore.Request.  This is not a real class, just a 
> logger which makes it possible to handle those log messages differently than 
> the rest of Solr's logging.  I propose setting up another custom logger 
> within SolrCore which could be org.apache.solr.core.SolrCore.SlowRequest.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465255#comment-16465255
 ] 

Uwe Schindler commented on SOLR-12290:
--

As you are not willing to fix this, should I send you the patch based on your 
current commit?

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465254#comment-16465254
 ] 

Uwe Schindler commented on SOLR-12290:
--

bq. Whether Jetty closed the socket or not, we can't have anyone close these 
streams because we have to make sure they are fully consumed.

I agree that's fine as a workaround for the "jetty bug" (it might be one or 
not, I don't want to argue about it - I'm not Robert). I can just say: Tomcat 
fully consumes the stream automatically if a consumer closes it.

My complaint was only: if we have perfect stream handling inside 
SolrDispatchFilter that handles consuming the stream and prevents closing of 
the underlying ServletOutputStream, then code down the line can handle the 
input like a normal stream. And the same goes for ContentStreams, which are an 
abstract interface. Here I will behave like "Robert" and cry for a revert if 
you do not revert the broken code!

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12307) Stop endless spin java.io.IOException: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /autoscaling.json

2018-05-06 Thread Lucene/Solr QA (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465253#comment-16465253
 ] 

Lucene/Solr QA commented on SOLR-12307:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
38s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  4m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  4m 41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  4m 41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 52s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}130m 11s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.IndexSizeTriggerTest |
|   | solr.cloud.api.collections.ShardSplitTest |
|   | solr.cloud.autoscaling.ScheduledMaintenanceTriggerTest |
|   | solr.cloud.autoscaling.sim.TestTriggerIntegration |
|   | solr.cloud.autoscaling.NodeAddedTriggerTest |
|   | solr.cloud.MultiThreadedOCPTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12307 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12921769/SOLR-12307.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 0922e58 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 8 2015 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/81/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/81/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/81/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Stop endless spin java.io.IOException: 
> org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode 
> = Session expired for /autoscaling.json 
> -
>
> Key: SOLR-12307
> URL: https://issues.apache.org/jira/browse/SOLR-12307
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Mikhail Khludnev
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12307.patch
>
>
> When the ZK session expires, one loop continues spinning pointlessly, which hurts CI quite often:
> {code}
>   [junit4]   2>at
> org.apache.solr.client.solrj.cloud.DistribStateManager.getAutoScalingConfig(DistribStateManager.java:83)
> ~[java/:?]
>[junit4]   2>at
> org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:131)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465252#comment-16465252
 ] 

Uwe Schindler commented on SOLR-12290:
--

It's not only the CSV handler. It's all code that consumes a ContentStream. 
And it's not only tests. E.g. if you upload a file to DIH, or you upload the 
CSV file via HTTP file upload (and not as part of the stream), you get a file 
stream.

So please, just close the stream; it costs nothing if it's a no-op. It has 
nothing to do with tests vs. production! In the case of a ContentStream, YOU 
DO NOT KNOW WHAT TYPE OF INPUT STREAM YOU HAVE BEHIND THE ABSTRACT INTERFACE. 
It can be anything, so it has to be closed. In the case of 
HttpRequestContentStream it's a no-op, but consuming code does not need to 
know this.
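
A minimal sketch of the consuming side being argued for here, against Solr's 
abstract ContentStream interface; the handler context is illustrative:
{code:java}
import java.io.IOException;
import java.io.InputStream;
import org.apache.solr.common.util.ContentStream;

// Consume a ContentStream behind its abstract interface and always close it.
// If the stream is backed by the servlet request, close() can be a shielded
// no-op; if it is backed by a file, close() releases the file descriptor.
void consume(ContentStream cs) throws IOException {
  try (InputStream in = cs.getStream()) {
    byte[] buf = new byte[8192];
    while (in.read(buf) != -1) {
      // process bytes...
    }
  } // try-with-resources closes unconditionally; harmless when shielded
}
{code}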

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465249#comment-16465249
 ] 

Mark Miller commented on SOLR-12290:


Because we no longer have two separate code paths for tests and normal runs, 
we have an assert that lets developers know when their closes are no-ops, so 
that developers can understand what is actually going on. We don't need a 
close when it won't actually do anything; we should just close where we need 
to. In cases like the CSVHandler we need to.

Whether Jetty closed the socket or not, we can't have anyone close these 
streams because we have to make sure they are fully consumed.

Anyway, it's simple to fix. I'm not going to jam anything in though; I have to 
run tests and whatnot.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465248#comment-16465248
 ] 

Uwe Schindler commented on SOLR-12290:
--

I think your patch is fine. Really, just revert the code that prevents closing 
of the ContentStream.getStream() stuff. That makes half of the patch obsolete! 
So it looks like you are doing the wrapping AND preventing closing in the Solr 
code that reads from those ContentStreams? Why do both, if the first already 
solves the issue?

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465244#comment-16465244
 ] 

Uwe Schindler commented on SOLR-12290:
--

I have seen it. It's fine! I was just really annoyed today while running tests 
locally. All tests that use ContentStreams with file uploads or files break 
horribly on Windows (which is a sign of file descriptor leaks, so I am glad 
that we run tests on Windows).

As said before: I'd just revert the code where we consume the ContentStream 
subclasses (BlobHandler, CSVHandler, maybe DIH, ...) and just add another 
CloseShield in the HttpRequestContentStream.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Mark Miller (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465241#comment-16465241
 ] 

Mark Miller commented on SOLR-12290:


I will look into this Uwe. We are still using a CloseShield. Relax, this is 
only on master and is not a very large change from what we were doing before.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465240#comment-16465240
 ] 

Uwe Schindler commented on SOLR-12290:
--

I think I know how to solve this:
- Revert all stuff that removed closes from code that uses ContentStream
- In SolrRequestParsers$HttpRequestContentStream, add a CloseShieldInputStream 
in the getStream() method (see the sketch below). This should solve the file 
leaks caused by no longer closing streams on files...

This should make the changes in BlobHandler and CSVLoaderBase obsolete - and 
the code clean again. It's a no-go to not close streams of unknown origin!
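
A minimal sketch of the second bullet, assuming Commons IO's 
CloseShieldInputStream; the class shape is simplified from 
SolrRequestParsers$HttpRequestContentStream:
{code:java}
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServletRequest;
import org.apache.commons.io.input.CloseShieldInputStream;
import org.apache.solr.common.util.ContentStreamBase;

// Wrap the servlet input stream so that close() from consuming code becomes
// a no-op, while the container keeps ownership of the real stream.
class HttpRequestContentStream extends ContentStreamBase {
  private final HttpServletRequest req;

  HttpRequestContentStream(HttpServletRequest req) {
    this.req = req;
  }

  @Override
  public InputStream getStream() throws IOException {
    return new CloseShieldInputStream(req.getInputStream());
  }
}
{code}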

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort without ruining the connection or sendError. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-master-Windows (64bit/jdk-11-ea+5) - Build # 7304 - Still Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7304/
Java: 64bit/jdk-11-ea+5 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

54 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([1C12DDBEA823BA28:259C64FE87DC73D6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration(IndexSizeTriggerTest.java:298)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:841)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 

[JENKINS] Lucene-Solr-7.3-Linux (64bit/jdk-10) - Build # 183 - Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.3-Linux/183/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:36461/sg_xzg/fh/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:36461/sg_xzg/fh/collection1
at 
__randomizedtesting.SeedInfo.seed([98FBFCB8B701BDC8:10AFC36219FDD030]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:895)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:858)
at 
org.apache.solr.client.solrj.SolrClient.deleteByQuery(SolrClient.java:873)
at 
org.apache.solr.BaseDistributedSearchTestCase.del(BaseDistributedSearchTestCase.java:542)
at 
org.apache.solr.TestDistributedSearch.test(TestDistributedSearch.java:1034)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsRepeatStatement.callStatement(BaseDistributedSearchTestCase.java:1019)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465235#comment-16465235
 ] 

Uwe Schindler edited comment on SOLR-12290 at 5/6/18 5:52 PM:
--

This commit breaks TestCSVLoader on Windows. It looks like Solr no longer 
closes any content streams that are passed separately to the servlet stream:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCSVLoader 
-Dtests.method=testLiteral -Dtests.seed=B40869AE03F63CBC -Dtests.locale=ms 
-Dtests.timezone=ECT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.06s | TestCSVLoader.testLiteral <<<
   [junit4]> Throwable #1: java.nio.file.FileSystemException: C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr1\solr\build\solr-core\test\J0\temp\solr.handler.TestCSVLoader_B40869AE03F63CBC-001\TestCSVLoader-006\solr_tmp.csv:
 Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen 
Prozess verwendet wird.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B40869AE03F63CBC:B2ACAF677D87833E]:0)
   [junit4]>at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
   [junit4]>at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
   [junit4]>at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
   [junit4]>at java.nio.file.Files.delete(Files.java:1126)
   [junit4]>at 
org.apache.solr.handler.TestCSVLoader.tearDown(TestCSVLoader.java:64)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}

I am not sure about the reasoning behind forcefully preventing the closing of 
ServletInputStreams - but IMHO, the better way to handle this (we had this 
discussion already a while ago) is:

Wrap the ServletInput/ServletOutput streams with a CloseShieldXxxxStream and 
only pass this one around. This allows code anywhere in Solr to close 
any streams correctly (which it should), but the ServletStreams are kept 
open by the shield.
The issue we have now is that ContentStreams do not necessarily (like in 
BlobUploadHandler, CSVHandler) come from the servlet streams. If it is a file 
or an uploaded file, then we HAVE to close the stream.

The reason behind everything here is a bug in Jetty - in contrast to Tomcat and 
all other servlet containers, it closes the socket after you close the servlet 
streams. This is a bug - sorry! Jetty should prevent closing the underlying 
stream!

Please revert the current commit. I can help in solving this correctly - 
unfortunately I am on travel next week.
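
To make the suggestion concrete, here is a minimal sketch of the close-shield 
idea - a hand-rolled wrapper with an invented name, not the actual Solr or 
Commons IO class:

{code:java}
// Minimal sketch of the close-shield idea; the class name is invented
// for illustration and is not the actual Solr or Commons IO type.
import java.io.FilterInputStream;
import java.io.InputStream;

public final class CloseShieldingInputStream extends FilterInputStream {

  public CloseShieldingInputStream(InputStream in) {
    super(in);
  }

  @Override
  public void close() {
    // Deliberately a no-op: handler code may call close() freely,
    // but only the servlet container closes the real ServletInputStream.
  }
}
{code}

Handler code would then receive only the shielded stream, so closing a 
file-backed ContentStream still releases its file descriptor, while a 
ContentStream backed by the servlet stream leaves the socket untouched.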


was (Author: thetaphi):
This commit breaks TestCSVLoader on Windows. It looks like Solr no longer 
closes any content streams that are passed separately to the servlet stream:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCSVLoader 
-Dtests.method=testLiteral -Dtests.seed=B40869AE03F63CBC -Dtests.locale=ms 
-Dtests.timezone=ECT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.06s | TestCSVLoader.testLiteral <<<
   [junit4]> Throwable #1: java.nio.file.FileSystemException: C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr1\solr\build\solr-core\test\J0\temp\solr.handler.TestCSVLoader_B40869AE03F63CBC-001\TestCSVLoader-006\solr_tmp.csv:
 Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen 
Prozess verwendet wird.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B40869AE03F63CBC:B2ACAF677D87833E]:0)
   [junit4]>at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
   [junit4]>at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
   [junit4]>at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
   [junit4]>at java.nio.file.Files.delete(Files.java:1126)
   [junit4]>at 
org.apache.solr.handler.TestCSVLoader.tearDown(TestCSVLoader.java:64)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}

I am not sure about the reasoning behind forcefully preventing the closing of 
ServletInputStreams - but IMHO, the better way to handle this (we had this 
discussion already a while ago) is:

Wrap the ServletInput/ServletOutput streams with a CloseShieldXxxxStream and 
only pass this one around. This allows code anywhere in Solr to close 
any 

[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465237#comment-16465237
 ] 

Uwe Schindler commented on SOLR-12290:
--

I just repeat: We now have a serious file descriptor leak in combination with 
ContentStreams!!! 

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort or sendError without ruining the connection. These 
> should be options of very last resort (requiring a blood sacrifice) or when 
> shutting down.
>  
>  
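
For illustration, a hedged sketch of the "always read them fully" step 
described above; the helper name and buffer size are invented:

{code:java}
// Hedged sketch of draining an unread request body so the connection
// can be reused; helper name and buffer size are invented.
import java.io.IOException;
import java.io.InputStream;

final class StreamDrainer {
  private StreamDrainer() {}

  static void drainFully(InputStream in) throws IOException {
    byte[] scratch = new byte[8192];
    // Read and discard until EOF, but never close: the servlet
    // container owns the stream and manages the underlying socket.
    while (in.read(scratch) != -1) {
      // discard
    }
  }
}
{code}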



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465235#comment-16465235
 ] 

Uwe Schindler edited comment on SOLR-12290 at 5/6/18 5:49 PM:
--

This commit breaks TestCSVLoader on Windows. It looks like Solr no longer 
closes any content streams that are passed separately to the servlet stream:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCSVLoader 
-Dtests.method=testLiteral -Dtests.seed=B40869AE03F63CBC -Dtests.locale=ms 
-Dtests.timezone=ECT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.06s | TestCSVLoader.testLiteral <<<
   [junit4]> Throwable #1: java.nio.file.FileSystemException: C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr1\solr\build\solr-core\test\J0\temp\solr.handler.TestCSVLoader_B40869AE03F63CBC-001\TestCSVLoader-006\solr_tmp.csv:
 Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen 
Prozess verwendet wird.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B40869AE03F63CBC:B2ACAF677D87833E]:0)
   [junit4]>at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
   [junit4]>at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
   [junit4]>at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
   [junit4]>at java.nio.file.Files.delete(Files.java:1126)
   [junit4]>at 
org.apache.solr.handler.TestCSVLoader.tearDown(TestCSVLoader.java:64)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}

I am not sure about the reasoning behind forcefully preventing the closing of 
ServletInputStreams - but IMHO, the better way to handle this (we had this 
discussion already a while ago) is:

Wrap the ServletInput/ServletOutput streams with a CloseShieldXxxxStream and 
only pass this one around. This allows code anywhere in Solr to close 
any streams correctly (which it should), but the ServletStreams are kept 
open by the shield.
The issue we have now is that ContentStreams do not necessarily (like in 
BlobUploadHandler, CSVHandler) come from the servlet streams. If it is a file 
or an uploaded file, then we HAVE to close the stream.

The reason behind everything here is a bug in Jetty - in contrast to Tomcat and 
all other servlet containers, it closes the socket after you close the servlet 
streams. This is a bug - sorry! Jetty should prevent closing the underlying 
stream!

Please revert the current commit. I can help in solving this correctly - 
unfortunately I am on travel next week.


was (Author: thetaphi):
This commit breaks TestCSVLoader on Windows. It looks like Solr no longer 
closes any content streams that are passed separately to the servlet stream:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCSVLoader 
-Dtests.method=testLiteral -Dtests.seed=B40869AE03F63CBC -Dtests.locale=ms 
-Dtests.timezone=ECT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.06s | TestCSVLoader.testLiteral <<<
   [junit4]> Throwable #1: java.nio.file.FileSystemException: C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr1\solr\build\solr-core\test\J0\temp\solr.handler.TestCSVLoader_B40869AE03F63CBC-001\TestCSVLoader-006\solr_tmp.csv:
 Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen 
Prozess verwendet wird.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B40869AE03F63CBC:B2ACAF677D87833E]:0)
   [junit4]>at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
   [junit4]>at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
   [junit4]>at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
   [junit4]>at java.nio.file.Files.delete(Files.java:1126)
   [junit4]>at 
org.apache.solr.handler.TestCSVLoader.tearDown(TestCSVLoader.java:64)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}

I am not sure about the reasoning behind forcefully preventing the closing of 
ServletInputStreams - but IMHO, the better way to handle this (we had this 
discussion already a while ago) is:

Wrap the ServletInput/ServletOutput streams with a CloseShieldXxxxStream and 
only pass this one around. This allows code anywhere in Solr to close 
any 

[jira] [Comment Edited] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465235#comment-16465235
 ] 

Uwe Schindler edited comment on SOLR-12290 at 5/6/18 5:48 PM:
--

This commit breaks TestCSVLoader on Windows. It looks like Solr no longer 
closes any content streams that are passed separately to the servlet stream:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCSVLoader 
-Dtests.method=testLiteral -Dtests.seed=B40869AE03F63CBC -Dtests.locale=ms 
-Dtests.timezone=ECT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.06s | TestCSVLoader.testLiteral <<<
   [junit4]> Throwable #1: java.nio.file.FileSystemException: C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr1\solr\build\solr-core\test\J0\temp\solr.handler.TestCSVLoader_B40869AE03F63CBC-001\TestCSVLoader-006\solr_tmp.csv:
 Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen 
Prozess verwendet wird.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B40869AE03F63CBC:B2ACAF677D87833E]:0)
   [junit4]>at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
   [junit4]>at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
   [junit4]>at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
   [junit4]>at java.nio.file.Files.delete(Files.java:1126)
   [junit4]>at 
org.apache.solr.handler.TestCSVLoader.tearDown(TestCSVLoader.java:64)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
{noformat}

I am not sure about the reasoning behind forcefully preventing the closing of 
ServletInputStreams - but IMHO, the better way to handle this (we had this 
discussion already a while ago) is:

Wrap the ServletInput/ServletOutput streams with a CloseShieldXxxxStream and 
only pass this one around. This allows code anywhere in Solr to close 
any streams correctly (which it should), but the ServletStreams are kept 
open by the shield.
The issue we have now is that ContentStreams do not necessarily (like in 
BlobUploadHandler, CSVHandler) come from the servlet streams. If it is a file 
or an uploaded file, then we HAVE to close the stream.

The reason behind everything here is a bug in Jetty - in contrast to Tomcat and 
all other servlet containers, it closes the socket after you close the servlet 
streams. This is a bug - sorry! Jetty should prevent closing the stream!

Please revert the current commit. I can help in solving this correctly - 
unfortunately I am on travel next week.


was (Author: thetaphi):
This commit breaks TestCSVLoader on Windows. It looks like Solr no longer 
closes any content streams that are passed separately to the servlet stream:

   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCSVLoader 
-Dtests.method=testLiteral -Dtests.seed=B40869AE03F63CBC -Dtests.locale=ms 
-Dtests.timezone=ECT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.06s | TestCSVLoader.testLiteral <<<
   [junit4]> Throwable #1: java.nio.file.FileSystemException: C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr1\solr\build\solr-core\test\J0\temp\solr.handler.TestCSVLoader_B40869AE03F63CBC-001\TestCSVLoader-006\solr_tmp.csv:
 Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen 
Prozess verwendet wird.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B40869AE03F63CBC:B2ACAF677D87833E]:0)
   [junit4]>at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
   [junit4]>at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
   [junit4]>at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
   [junit4]>at java.nio.file.Files.delete(Files.java:1126)
   [junit4]>at 
org.apache.solr.handler.TestCSVLoader.tearDown(TestCSVLoader.java:64)
   [junit4]>at java.lang.Thread.run(Thread.java:748)

I am not sure about the reasoning behind forcefully preventing the closing of 
ServletInputStreams - but IMHO, the better way to handle this (we had this 
discussion already a while ago) is:

Wrap the ServletInput/ServletOutput streams with a CloseShieldXxxxStream and 
only pass this one around. This allows code anywhere in Solr to close 
any streams correctly (which it should), but the 

[jira] [Commented] (SOLR-12290) Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465235#comment-16465235
 ] 

Uwe Schindler commented on SOLR-12290:
--

This commit breaks TestCSVLoader on Windows. It looks like Solr no longer 
closes any content streams that are passed separately to the servlet stream:

   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCSVLoader 
-Dtests.method=testLiteral -Dtests.seed=B40869AE03F63CBC -Dtests.locale=ms 
-Dtests.timezone=ECT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.06s | TestCSVLoader.testLiteral <<<
   [junit4]> Throwable #1: java.nio.file.FileSystemException: C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr1\solr\build\solr-core\test\J0\temp\solr.handler.TestCSVLoader_B40869AE03F63CBC-001\TestCSVLoader-006\solr_tmp.csv:
 Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen 
Prozess verwendet wird.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B40869AE03F63CBC:B2ACAF677D87833E]:0)
   [junit4]>at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
   [junit4]>at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
   [junit4]>at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
   [junit4]>at java.nio.file.Files.delete(Files.java:1126)
   [junit4]>at 
org.apache.solr.handler.TestCSVLoader.tearDown(TestCSVLoader.java:64)
   [junit4]>at java.lang.Thread.run(Thread.java:748)

I am not sure about the reasoning behind forcefully preventing the closing of 
ServletInputStreams - but IMHO, the better way to handle this (we had this 
discussion already a while ago) is:

Wrap the ServletInput/ServletOutput streams with a CloseShieldXxxxStream and 
only pass this one around. This allows code anywhere in Solr to close 
any streams correctly (which it should), but the ServletStreams are kept 
open by the shield.
The issue we have now is that ContentStreams do not necessarily (like in 
BlobUploadHandler, CSVHandler) come from the servlet streams. If it is a file 
or an uploaded file, then we HAVE to close the stream.

The reason behind everything here is a bug in Jetty - in contrast to Tomcat and 
all other servlet containers, it closes the socket after you close the servlet 
streams. This is a bug - sorry! Jetty should prevent closing the stream!

Please revert the current commit. I can help in solving this correctly - 
unfortunately I am on travel next week.

> Do not close any servlet streams and improve our servlet stream closing 
> prevention code for users and devs.
> ---
>
> Key: SOLR-12290
> URL: https://issues.apache.org/jira/browse/SOLR-12290
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-12290.patch, SOLR-12290.patch, SOLR-12290.patch, 
> SOLR-12290.patch
>
>
> Original Summary:
> When you fetch a file for replication we close the request output stream 
> after writing the file which ruins the connection for reuse.
> We can't close response output streams, we need to reuse these connections. 
> If we do close them, clients are hit with connection problems when they try 
> and reuse the connection from their pool.
> New Summary:
> At some point the above was addressed during refactoring. We should remove 
> these neutered closes and review our close shield code.
> If you are here to track down why this is done:
> Connection reuse requires that we read all streams and do not close them - 
> instead the container itself must manage request and response streams. If we 
> allow them to be closed, not only do we lose some connection reuse, but we 
> can cause spurious client errors that can cause expensive recoveries for no 
> reason. The spec allows us to count on the container to manage streams. It's 
> our job simply to not close them and to always read them fully, from client 
> and server. 
> Java itself can help with always reading the streams fully up to some small 
> default amount of unread stream slack, but that is very dangerous to count 
> on, so we always manually eat up anything on the streams our normal logic 
> ends up not reading for whatever reason.
> We also cannot call abort or sendError without ruining the connection. These 
> should be 

[jira] [Commented] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-05-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465233#comment-16465233
 ] 

Shawn Heisey commented on LUCENE-7960:
--

Thanks for the clarification.  Should the no-arg constructor go through 
deprecation in 7.x?


> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: lucene-solr:master: SOLR-12290: Do not close any servlet streams and improve our servlet stream closing prevention code for users and devs.

2018-05-06 Thread Uwe Schindler
This commit breaks TestCSVLoader on Windows. It looks like Solr no longer 
closes any content streams that are passed separately to the servlet stream:

   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestCSVLoader 
-Dtests.method=testLiteral -Dtests.seed=B40869AE03F63CBC -Dtests.locale=ms 
-Dtests.timezone=ECT -Dtests.asserts=true -Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.06s | TestCSVLoader.testLiteral <<<
   [junit4]> Throwable #1: java.nio.file.FileSystemException: C:\Users\Uwe 
Schindler\Projects\lucene\trunk-lusolr1\solr\build\solr-core\test\J0\temp\solr.handler.TestCSVLoader_B40869AE03F63CBC-001\TestCSVLoader-006\solr_tmp.csv:
 Der Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen 
Prozess verwendet wird.
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([B40869AE03F63CBC:B2ACAF677D87833E]:0)
   [junit4]>at 
sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
   [junit4]>at 
sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
   [junit4]>at 
sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
   [junit4]>at 
sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
   [junit4]>at java.nio.file.Files.delete(Files.java:1126)
   [junit4]>at 
org.apache.solr.handler.TestCSVLoader.tearDown(TestCSVLoader.java:64)
   [junit4]>at java.lang.Thread.run(Thread.java:748)

I am not sure about the reasoning behind the whole thing, but IMHO, the better 
way to handle this is:
- Wrap the ServletInput/ServletOutput streams with a CloseShieldXxxxStream and 
only pass this around. This allows code anywhere in Solr to close any 
streams correctly, but the ServletStreams are kept open by the shield.

The reason behind everything here is a bug in Jetty. Jetty should prevent 
closing the stream!

I will reopen this issue. We now have a file descriptor leak also on Linux/Mac!

Uwe

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: markrmil...@apache.org 
> Sent: Saturday, May 5, 2018 1:02 AM
> To: comm...@lucene.apache.org
> Subject: lucene-solr:master: SOLR-12290: Do not close any servlet streams
> and improve our servlet stream closing prevention code for users and devs.
> 
> Repository: lucene-solr
> Updated Branches:
>   refs/heads/master ad0ad2ec8 -> 296201055
> 
> 
> SOLR-12290: Do not close any servlet streams and improve our servlet
> stream closing prevention code for users and devs.
> 
> 
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-
> solr/commit/29620105
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/29620105
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/29620105
> 
> Branch: refs/heads/master
> Commit: 296201055f24f01e1610f2fb87aba7fa90b9dda1
> Parents: ad0ad2e
> Author: Mark Miller 
> Authored: Fri May 4 18:02:06 2018 -0500
> Committer: Mark Miller 
> Committed: Fri May 4 18:02:06 2018 -0500
> 
> --
>  solr/CHANGES.txt|   3 +
>  .../org/apache/solr/handler/BlobHandler.java|   6 +-
>  .../apache/solr/handler/ReplicationHandler.java |   6 +-
>  .../solr/handler/loader/CSVLoaderBase.java  |  75 +-
>  .../solr/handler/loader/JavabinLoader.java  |  16 +--
>  .../org/apache/solr/servlet/HttpSolrCall.java   |   6 +-
>  .../apache/solr/servlet/LoadAdminUiServlet.java |  14 +-
>  .../solr/servlet/ServletInputStreamWrapper.java |   2 +-
>  .../servlet/ServletOutputStreamWrapper.java |   2 +-
>  .../apache/solr/servlet/SolrDispatchFilter.java | 139 ---
>  .../apache/solr/servlet/SolrRequestParsers.java |   6 +-
>  11 files changed, 147 insertions(+), 128 deletions(-)
> --
> 
> 
> http://git-wip-us.apache.org/repos/asf/lucene-
> solr/blob/29620105/solr/CHANGES.txt
> --
> diff --git a/solr/CHANGES.txt b/solr/CHANGES.txt
> index d4c2097..f74e2fd 100644
> --- a/solr/CHANGES.txt
> +++ b/solr/CHANGES.txt
> @@ -206,6 +206,9 @@ Bug Fixes
> 
>  * SOLR-12202: Fix errors in solr-exporter.cmd. (Minoru Osuka via koji)
> 
> +* SOLR-12290: Do not close any servlet streams and improve our servlet
> stream closing prevention code for users
> +  and devs. (Mark Miller)
> +
>  Optimizations
>  --
> 
> 
> http://git-wip-us.apache.org/repos/asf/lucene-
> solr/blob/29620105/solr/core/src/java/org/apache/solr/handler/BlobHandl
> er.java

[jira] [Commented] (SOLR-11453) Create separate logger for slow requests

2018-05-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465231#comment-16465231
 ] 

Shawn Heisey commented on SOLR-11453:
-

[~ralph.go...@dslextreme.com], thanks for the docs.  There is a lot of 
information there, but I'm not sure it is enough for *ME* to figure out how to 
write a script that looks at sysprops and determines whether certain loggers 
will be sent to the main file or to their own file(s).

I see in the docs that a list of three parameters is sent to the script, one 
of which is logEvent.  Is there a reference for everything contained within 
those parameters?

Is javascript the recommended language choice?  I'm wondering about its 
performance, mostly.  I don't want to introduce really slow components into the 
logging.  One of the examples on that doc page says javascript, but appears to 
actually include Java code.  If my interpretation of that example is correct, 
does *that* perform well?

If using javascript directly and not including Java code, are sysprops 
available?  If so, how are they accessed?

> Create separate logger for slow requests
> 
>
> Key: SOLR-11453
> URL: https://issues.apache.org/jira/browse/SOLR-11453
> Project: Solr
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 7.0.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, 
> SOLR-11453.patch, slowlog-informational.patch
>
>
> There is some desire on the mailing list to create a separate logfile for 
> slow queries.  Currently it is not possible to do this cleanly, because the 
> WARN level used by slow query logging within the SolrCore class is also used 
> for other events that SolrCore can log.  Those messages would be out of place 
> in a slow query log.  They should typically stay in main solr logfile.
> I propose creating a custom logger for slow queries, similar to what has been 
> set up for request logging.  In the SolrCore class, which is 
> org.apache.solr.core.SolrCore, there is a special logger at 
> org.apache.solr.core.SolrCore.Request.  This is not a real class, just a 
> logger which makes it possible to handle those log messages differently than 
> the rest of Solr's logging.  I propose setting up another custom logger 
> within SolrCore which could be org.apache.solr.core.SolrCore.SlowRequest.
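
For illustration, a small sketch of the proposed logger wiring; the threshold 
check and class name are invented and may differ from the patch:

{code:java}
// Sketch of the proposed SolrCore.SlowRequest logger; the threshold
// check and class name are invented and may differ from the patch.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class SlowRequestLoggingSketch {
  // Like SolrCore.Request, this is only a logger name, not a real
  // class, so log4j2 configuration can route it to its own file.
  private static final Logger slowLog =
      LoggerFactory.getLogger("org.apache.solr.core.SolrCore.SlowRequest");

  void maybeLogSlow(String requestDescription, long elapsedMillis,
                    long slowQueryThresholdMillis) {
    if (slowQueryThresholdMillis >= 0
        && elapsedMillis >= slowQueryThresholdMillis) {
      slowLog.warn("slow: [{}] took {} ms", requestDescription, elapsedMillis);
    }
  }
}
{code}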



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11453) Create separate logger for slow requests

2018-05-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465224#comment-16465224
 ] 

Shawn Heisey commented on SOLR-11453:
-

[~varunthacker], I'm going to attach the code patch I was developing, with some 
ideas you can use or not as you desire.

In accordance with some best practices mentioned on SOLR-12286, I removed 
usages of "log.isXxxxEnabled()" methods.  That is the idea I was most 
interested in telling you about.

I noticed the SolrCore class has an empty top-level Javadoc, which makes 
precommit pass but doesn't give anyone any information about the class.  
That's a rather glaring omission IMHO.  Adding it is probably out of scope for 
this issue.

One of the warnings I noticed in eclipse is that serialVersionUID was missing 
from an inner anonymous class, so I had eclipse add that. Also not important 
for this issue.
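
For reference, the "log.isXxxxEnabled()" cleanup mentioned above looks roughly 
like this (names invented; SLF4J's {} placeholders defer formatting, so the 
guard is redundant for simple arguments):

{code:java}
// Roughly the guard-removal style change described above; names invented.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class LogGuardExample {
  private static final Logger log =
      LoggerFactory.getLogger(LogGuardExample.class);

  void logOpen(String coreName) {
    // Old style with an explicit guard:
    // if (log.isDebugEnabled()) {
    //   log.debug("opening searcher for core " + coreName);
    // }
    // Parameterized style: the message is only formatted when DEBUG
    // is actually enabled, so the explicit guard is redundant here.
    log.debug("opening searcher for core {}", coreName);
  }
}
{code}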

> Create separate logger for slow requests
> 
>
> Key: SOLR-11453
> URL: https://issues.apache.org/jira/browse/SOLR-11453
> Project: Solr
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 7.0.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, 
> SOLR-11453.patch, slowlog-informational.patch
>
>
> There is some desire on the mailing list to create a separate logfile for 
> slow queries.  Currently it is not possible to do this cleanly, because the 
> WARN level used by slow query logging within the SolrCore class is also used 
> for other events that SolrCore can log.  Those messages would be out of place 
> in a slow query log.  They should typically stay in main solr logfile.
> I propose creating a custom logger for slow queries, similar to what has been 
> set up for request logging.  In the SolrCore class, which is 
> org.apache.solr.core.SolrCore, there is a special logger at 
> org.apache.solr.core.SolrCore.Request.  This is not a real class, just a 
> logger which makes it possible to handle those log messages differently than 
> the rest of Solr's logging.  I propose setting up another custom logger 
> within SolrCore which could be org.apache.solr.core.SolrCore.SlowRequest.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11453) Create separate logger for slow requests

2018-05-06 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11453?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-11453:

Attachment: slowlog-informational.patch

> Create separate logger for slow requests
> 
>
> Key: SOLR-11453
> URL: https://issues.apache.org/jira/browse/SOLR-11453
> Project: Solr
>  Issue Type: Improvement
>  Components: logging
>Affects Versions: 7.0.1
>Reporter: Shawn Heisey
>Assignee: Shawn Heisey
>Priority: Minor
> Attachments: SOLR-11453.patch, SOLR-11453.patch, SOLR-11453.patch, 
> SOLR-11453.patch, slowlog-informational.patch
>
>
> There is some desire on the mailing list to create a separate logfile for 
> slow queries.  Currently it is not possible to do this cleanly, because the 
> WARN level used by slow query logging within the SolrCore class is also used 
> for other events that SolrCore can log.  Those messages would be out of place 
> in a slow query log.  They should typically stay in main solr logfile.
> I propose creating a custom logger for slow queries, similar to what has been 
> set up for request logging.  In the SolrCore class, which is 
> org.apache.solr.core.SolrCore, there is a special logger at 
> org.apache.solr.core.SolrCore.Request.  This is not a real class, just a 
> logger which makes it possible to handle those log messages differently than 
> the rest of Solr's logging.  I propose setting up another custom logger 
> within SolrCore which could be org.apache.solr.core.SolrCore.SlowRequest.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-05-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465221#comment-16465221
 ] 

Robert Muir commented on LUCENE-7960:
-

There is no need to have only one constructor: too many parameters for the 
simple use case.

I already explained my preference as to what they should be:
* NgramWhateverFilter(TokenStream, int)
* NgramWhateverFilter(TokenStream, int, int, boolean)

So remove the no-arg constructor, which means there is no need for any default 
min/max.
It is also important that the factory match this. Whatever parameters are 
mandatory for the tokenfilter also need to be mandatory in the factory, too. I 
will insist on it.
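
For clarity, a compile-level sketch of that constructor surface; the class 
name is a placeholder and the gram emission logic is elided:

{code:java}
// Compile-level sketch of the proposed constructor surface only; the
// class name is a placeholder and the gram emission logic is elided.
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;

public final class NGramWhateverFilter extends TokenFilter {
  private final int minGram;
  private final int maxGram;
  private final boolean preserveOriginal;

  // Simple case: a single gram size, no original-token preservation.
  public NGramWhateverFilter(TokenStream input, int gramSize) {
    this(input, gramSize, gramSize, false);
  }

  // Full case: a min/max range plus the preserveOriginal flag.
  public NGramWhateverFilter(TokenStream input, int minGram, int maxGram,
                             boolean preserveOriginal) {
    super(input);
    this.minGram = minGram;
    this.maxGram = maxGram;
    this.preserveOriginal = preserveOriginal;
  }

  @Override
  public boolean incrementToken() throws IOException {
    // Gram emission elided; this sketch only shows the constructors.
    return input.incrementToken();
  }
}
{code}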

> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_162) - Build # 21966 - Still Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21966/
Java: 64bit/jdk1.8.0_162 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest: 1) 
Thread[id=402, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-LargeVolumeJettyTest] at java.lang.Thread.sleep(Native 
Method) at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.lang.Thread.run(Thread.java:748)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest: 
   1) Thread[id=402, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-LargeVolumeJettyTest]
at java.lang.Thread.sleep(Native Method)
at 
org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
at java.lang.Thread.run(Thread.java:748)
at __randomizedtesting.SeedInfo.seed([9D443ADBDC2E096A]:0)


FAILED:  
org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest.testMultiThreaded

Error Message:
Captured an uncaught exception in thread: Thread[id=401, name=DocThread-2, 
state=RUNNABLE, group=TGRP-LargeVolumeJettyTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=401, name=DocThread-2, state=RUNNABLE, 
group=TGRP-LargeVolumeJettyTest]
Caused by: java.lang.AssertionError: DocThread-2---Error from server at 
http://127.0.0.1:41621/solr/collection1: Exception writing document id T2:67 to 
the index; possible analysis error.
at __randomizedtesting.SeedInfo.seed([9D443ADBDC2E096A]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.client.solrj.LargeVolumeTestBase$DocThread.run(LargeVolumeTestBase.java:128)




Build Log:
[...truncated 15996 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.embedded.LargeVolumeJettyTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-solrj/test/J2/temp/solr.client.solrj.embedded.LargeVolumeJettyTest_9D443ADBDC2E096A-001/init-core-data-001
   [junit4]   2> 26799 WARN  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=4 numCloses=4
   [junit4]   2> 26799 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 26800 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, value=NaN, ssl=NaN, clientAuth=NaN)
   [junit4]   2> 26800 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 26825 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 26825 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore end
   [junit4]   2> 26826 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.a.s.SolrTestCaseJ4 Writing core.properties file to 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-solrj/test/J2/temp/solr.client.solrj.embedded.LargeVolumeJettyTest_9D443ADBDC2E096A-001/tempDir-002/cores/core
   [junit4]   2> 26827 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.e.j.s.Server jetty-9.4.8.v20171121, build timestamp: 
2017-11-21T16:27:37-05:00, git hash: 82b8fb23f757335bb3329d540ce37a2a2615f0a8
   [junit4]   2> 26836 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 26836 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 26836 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.e.j.s.session Scavenging every 60ms
   [junit4]   2> 26836 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@28e17591{/solr,null,AVAILABLE}
   [junit4]   2> 26838 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@32b75c2e{HTTP/1.1,[http/1.1]}{127.0.0.1:41621}
   [junit4]   2> 26838 INFO  
(SUITE-LargeVolumeJettyTest-seed#[9D443ADBDC2E096A]-worker) [] 
o.e.j.s.Server Started 

[jira] [Commented] (SOLR-8207) Modernise cloud tab on Admin UI

2018-05-06 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-8207?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465219#comment-16465219
 ] 

Jan Høydahl commented on SOLR-8207:
---

Thanks all for looking at this and giving such constructive feedback!

[~dsmiley]:
{quote} * the large fonts for CPU/Heap/Disk seem uncalled for; it gives the 
appearance that it's super important and maybe trying to tell me about a 
problem{quote}
I'd love for someone to do better styling of this, but I just think it creates 
a nice visual :) I planned to use colors to warn about dangerous numbers, such 
as RED for >80% full disk or >80% CPU, and orange for >50%. Wdyt?
{quote} * it's not visually evident that clicking stuff will do 
something.{quote}
Good point. Will update the CSS with some underlining, colour change and 
perhaps pointer change. Or perhaps we have a global css style that can be 
applied to all the links here.

[~elyograg]:
{quote}I think that perhaps "refresh" would be a better label than "reload"
{quote}
Definitely, will change that.
{quote}  Having different buttons to reload the collections might be a nice 
addition
{quote}
In the next iteration I plan to add context-sensitive menus to many of these 
cells, so e.g. clicking collection name could have an option to reload, 
clicking a core name could have an option to delete etc.
{quote}What is the percentage on the disk column – free or used?
{quote}
Disk is used %, just like CPU is. If you mouse-over (check the demo link) 
you'll see details of total disk, free etc. It may be an idea to replace the 
disk percentage (or all percentage numbers) with a horizontal bar instead, 
where the bar changes colour to orange/red at critical levels?
{quote}[GC...] Unless request traffic is fairly uniform 24 hours per day, this 
does not seem like a very useful number to me. I do not know if the JVM can 
access GC data for a smaller timeframe. If not, it might not be possible to 
provide better information here.
{quote}
Those were the only numbers I found in the current metrics API. I agree that if 
is possible to get, say, last-15-minutes numbers that would be much better. 
[~ab]?
{quote}If the table only includes one request rate, I think I would prefer to 
see the 15 minute rate rather than the 1 minute rate.
{quote}
Yea, just picked something, could very well be that 15min rate makes more 
sense. Can change that :)

[~upayavira]:
{quote}If you click on a node cell, it only opens up the first instance row, 
not all for that node.
{quote}
That's by design, since the {{ng-click}} is on the {{tr}}. However, it could 
be nice if clicking the "host" cell would expand all node rows on that same 
host. Guess that means that we need to move the ng-click from tr to the 
{{td}} level to be able to call different functions? Btw - now if you click a 
collection/core name, the details view gets expanded first and then the link is 
clicked/followed. Is there any way to disable the ng-click handler for the {{a}} 
inside that cell?
{quote}If the window is too narrow, something odd happens with the text in the 
right-hand column
{quote}
You mean the line wraps? The alternative I guess is to specify a min-width on 
that column?

> Modernise cloud tab on Admin UI
> ---
>
> Key: SOLR-8207
> URL: https://issues.apache.org/jira/browse/SOLR-8207
> Project: Solr
>  Issue Type: Improvement
>  Components: Admin UI
>Affects Versions: 5.3
>Reporter: Upayavira
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: node-compact.png, node-details.png, node-hostcolumn.png, 
> node-toggle-row-numdocs.png, nodes-tab-real.png, nodes-tab.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The various sub-tabs of the "Cloud tab" were designed before anyone was 
> making real use of SolrCloud, and when we didn't really know the use-cases we 
> would need to support. I would argue that, whilst they are pretty (and 
> clever) they aren't really fit for purpose (with the exception of tree view).
> Issues:
> * Radial view doesn't scale beyond a small number of nodes/collections
> * Paging on the graph view is based on collections - so a collection with 
> many replicas won't be subject to pagination
> * The Dump feature is kinda redundant and should be removed
> * There is now a major overlap in functionality with the new Collections tab
> What I'd propose is that we:
>  * promote the tree tab to top level
>  * remove the graph views and the dump tab
>  * add a new Nodes tab
> This nodes tab would complement the collections tab - showing nodes, and 
> their associated replicas/collections. From this view, it would be possible 
> to add/remove replicas and to see the status of nodes. It would also be 
> possible to filter nodes by status: "show me only up nodes", "show me nodes 
> that are in trouble", "show me nodes that have 

[jira] [Commented] (LUCENE-8297) Add IW#tryUpdateDocValues(Reader, int, Fields...)

2018-05-06 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465206#comment-16465206
 ] 

Simon Willnauer commented on LUCENE-8297:
-

[~mikemccand] can you take a look?

> Add IW#tryUpdateDocValues(Reader, int, Fields...)
> -
>
> Key: LUCENE-8297
> URL: https://issues.apache.org/jira/browse/LUCENE-8297
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8297.patch
>
>
> IndexWriter can update doc values for a specific term but this might
> affect all documents containing the term. With tryUpdateDocValues
> users can update doc-values fields for individual documents. This allows
> for instance to soft-delete individual documents.
> The new method shares most of its code with tryDeleteDocuments.
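
For illustration, a hedged usage sketch; the tryUpdateDocValues signature is 
taken from the summary above, not from a released Lucene version:

{code:java}
// Hedged usage sketch; the tryUpdateDocValues signature follows the
// issue summary above, not a released Lucene version.
import java.io.IOException;
import org.apache.lucene.document.NumericDocValuesField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;

class SoftDeleteSketch {
  // Flips a numeric doc-values field on a single document; by analogy
  // with tryDeleteDocuments, the call fails rather than touching other
  // documents if the target cannot be updated in place.
  long softDelete(IndexWriter writer, DirectoryReader reader, int docId)
      throws IOException {
    return writer.tryUpdateDocValues(reader, docId,
        new NumericDocValuesField("__soft_deleted", 1));
  }
}
{code}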



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8297) Add IW#tryUpdateDocValues(Reader, int, Fields...)

2018-05-06 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8297?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-8297:

Attachment: LUCENE-8297.patch

> Add IW#tryUpdateDocValues(Reader, int, Fields...)
> -
>
> Key: LUCENE-8297
> URL: https://issues.apache.org/jira/browse/LUCENE-8297
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8297.patch
>
>
> IndexWriter can update doc values for a specific term but this might
> affect all documents containing the term. With tryUpdateDocValues
> users can update doc-values fields for individual documents. This allows
> for instance to soft-delete individual documents.
> The new method shares most of its code with tryDeleteDocuments.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-8297) Add IW#tryUpdateDocValues(Reader, int, Fields...)

2018-05-06 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-8297:
---

 Summary: Add IW#tryUpdateDocValues(Reader, int, Fields...)
 Key: LUCENE-8297
 URL: https://issues.apache.org/jira/browse/LUCENE-8297
 Project: Lucene - Core
  Issue Type: Improvement
Affects Versions: 7.4, master (8.0)
Reporter: Simon Willnauer
 Fix For: 7.4, master (8.0)


IndexWriter can update doc values for a specific term but this might
affect all documents containing the term. With tryUpdateDocValues
users can update doc-values fields for individual documents. This allows
for instance to soft-delete individual documents.
The new method shares most of its code with tryDeleteDocuments.




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-05-06 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465199#comment-16465199
 ] 

Shawn Heisey commented on LUCENE-7960:
--

OK, so we re-engineer to only add a preserveOriginal parameter.  That parameter 
will keep the original term when it is outside the min/max range.

For addressing the traps: Is that just removing the no-arg constructor, 
changing the default min/max, both, or was there something else you had in mind?

In master, what constructors do you think should be there?  My bias is to only 
have one, but I don't live and breathe Lucene code like you do, so I trust your 
judgement more than mine.


> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-05-06 Thread Shawn Heisey (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated LUCENE-7960:
-
Summary: NGram filters -- preserve the original token when it is outside 
the min/max size range  (was: NGram filters -- add option to keep short terms)

> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 1857 - Still Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1857/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.update.TestInPlaceUpdatesDistrib.test

Error Message:
This doc was supposed to have been deleted, but was: SolrDocument{id=1, 
inplace_updatable_float=2.0, _version_=1599725715653132289, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0}

Stack Trace:
java.lang.AssertionError: This doc was supposed to have been deleted, but was: 
SolrDocument{id=1, inplace_updatable_float=2.0, _version_=1599725715653132289, 
inplace_updatable_int_with_default=666, 
inplace_updatable_float_with_default=42.0}
at 
__randomizedtesting.SeedInfo.seed([FAE206B4A1B1AFD5:72B6396E0F4DC22D]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.delayedReorderingFetchesMissingUpdateFromLeaderTest(TestInPlaceUpdatesDistrib.java:972)
at 
org.apache.solr.update.TestInPlaceUpdatesDistrib.test(TestInPlaceUpdatesDistrib.java:147)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (LUCENE-8296) PendingDeletes shouldn't write to live docs that it shared

2018-05-06 Thread Simon Willnauer (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465190#comment-16465190
 ] 

Simon Willnauer commented on LUCENE-8296:
-

I think this is mostly a relic from before I started refactoring 
ReadersAndUpdates. I would love to go even further and, down the road, make the 
returned Bits instance immutable. I think we should have a very simple base 
class that FixedBitSet can extend that knows how to read from the array. This 
way we know nobody ever mutates it. Today you can just cast the liveDocs from 
an NRT reader and change its private state. I am going to look into this unless 
anybody beats me to it.

One thing that I feel is missing is an explicit test that the returned bits 
don't change under subsequent modifications.

+1 to the change!
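A minimal sketch of such a read-only base class (class and field names are 
illustrative, not the actual Lucene code):
{code:java}
import org.apache.lucene.util.Bits;

// Read-only view over a long[] bit array; a hypothetical base class that
// FixedBitSet could extend. No mutators are exposed, so callers holding a
// Bits reference cannot flip live docs.
class ReadOnlyBits implements Bits {
  protected final long[] words;
  protected final int numBits;

  ReadOnlyBits(long[] words, int numBits) {
    this.words = words;
    this.numBits = numBits;
  }

  @Override
  public boolean get(int index) {
    // bit i lives in word i/64 at position i%64
    return (words[index >> 6] & (1L << (index & 63))) != 0;
  }

  @Override
  public int length() {
    return numBits;
  }
}
{code}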

 

 

> PendingDeletes shouldn't write to live docs that it shared
> --
>
> Key: LUCENE-8296
> URL: https://issues.apache.org/jira/browse/LUCENE-8296
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-8296.patch
>
>
> PendingDeletes has a markAsShared mechanism that allow to make sure that the 
> latest livedocs are not going to receive more updates. But it is not always 
> used, and I was able to verify that in some cases we end up with readers 
> whose live docs disagree with the number of deletes. Even though this might 
> not be causing bugs, it feels dangerous to me so I think we should consider 
> always marking live docs as shared in #getLiveDocs.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7960) NGram filters -- add option to keep short terms

2018-05-06 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465152#comment-16465152
 ] 

Robert Muir commented on LUCENE-7960:
-

Again I want to re-emphasize that anything more complex than a single boolean 
"preserveOriginal" is too much. If someone wants to remove too-short or 
too-long terms they can use LengthFilter for that. There is no need to have 
such complex stuff in the ngram filters themselves.

Furthermore, I still think we need to address the traps I mentioned about 
these filters already emitting too many tokens before we go and add an 
option to make them produce even more...
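For reference, a minimal sketch of the LengthFilter alternative mentioned 
above; the 2..5 bounds and the whitespace tokenizer are made up for the 
example:
{code:java}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.core.WhitespaceTokenizer;
import org.apache.lucene.analysis.miscellaneous.LengthFilter;
import org.apache.lucene.analysis.ngram.NGramTokenFilter;

// Drop out-of-range terms with LengthFilter before ngramming, instead of
// adding more options to the ngram filters themselves.
Analyzer analyzer = new Analyzer() {
  @Override
  protected TokenStreamComponents createComponents(String fieldName) {
    Tokenizer source = new WhitespaceTokenizer();
    TokenStream stream = new LengthFilter(source, 2, 5); // keep terms of length 2..5
    stream = new NGramTokenFilter(stream, 2, 5);         // then emit 2..5-grams
    return new TokenStreamComponents(source, stream);
  }
};
{code}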

> NGram filters -- add option to keep short terms
> ---
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1844 - Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1844/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

24 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration

Error Message:
did not finish processing in time

Stack Trace:
java.lang.AssertionError: did not finish processing in time
at 
__randomizedtesting.SeedInfo.seed([41D7144127A9AECC:126E56F1C5B83B36]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testMergeIntegration(IndexSizeTriggerTest.java:404)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.autoscaling.IndexSizeTriggerTest.testSplitIntegration

Error Message:
last state: 
DocCollection(testSplitIntegration_collection//clusterstate.json/54)={   
"replicationFactor":"2",   "pullReplicas":"0",   

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 21965 - Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21965/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseG1GC

6 tests failed.
FAILED:  org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger

Error Message:
expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<1> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([81CB27F5935C0905:E20011770A937A28]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.autoscaling.SearchRateTriggerTest.testTrigger(SearchRateTriggerTest.java:133)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.util.TestSystemIdResolver.testResolving

Error Message:
Expected exception IOException but no exception was thrown


[jira] [Commented] (SOLR-8998) JSON Facet API child roll-ups

2018-05-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465132#comment-16465132
 ] 

ASF subversion and git services commented on SOLR-8998:
---

Commit 709782ac9d50e20da5745aa6fa2351b6b6757b20 in lucene-solr's branch 
refs/heads/branch_7x from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=709782a ]

SOLR-8998: documentation fix.


> JSON Facet API child roll-ups
> -
>
> Key: SOLR-8998
> URL: https://issues.apache.org/jira/browse/SOLR-8998
> Project: Solr
>  Issue Type: New Feature
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-8998-api-doc.patch, SOLR-8998-doc.patch, 
> SOLR-8998.patch, SOLR-8998.patch, SOLR-8998.patch, SOLR_8998.patch, 
> SOLR_8998.patch, SOLR_8998.patch
>
>
> The JSON Facet API currently has the ability to map between parents and 
> children ( see http://yonik.com/solr-nested-objects/ )
> This issue is about adding a true rollup ability where parents would take on 
> derived values from their children.  The most important part (and the most 
> difficult part) will be the external API.
> [~mkhludnev] says
> {quote}
> The bottom line is to introduce the {{uniqueBlock(\_root_)}} aggregation, 
> which is supposed to be faster than {{unique(\_root_)}} and is intended for 
> block indexes only. For now it supports single-valued string fields; 
> docValues usually make sense.
> {quote}
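For illustration, a facet request using the new aggregation could look like 
the following; the collection's category_s field and the bucket names are made 
up for the example:
{code:java}
json.facet={
  categories: {
    type: terms,
    field: category_s,
    facet: {
      // distinct parent blocks (root documents) per bucket; intended to be
      // faster than unique(_root_) on block indexes
      productCount: "uniqueBlock(_root_)"
    }
  }
}
{code}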



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-8998) JSON Facet API child roll-ups

2018-05-06 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465131#comment-16465131
 ] 

ASF subversion and git services commented on SOLR-8998:
---

Commit beaf3a47ebe6ad79572bccaeafc2551dc86f19c6 in lucene-solr's branch 
refs/heads/master from [~mkhludnev]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=beaf3a4 ]

SOLR-8998: documentation fix.


> JSON Facet API child roll-ups
> -
>
> Key: SOLR-8998
> URL: https://issues.apache.org/jira/browse/SOLR-8998
> Project: Solr
>  Issue Type: New Feature
>  Components: Facet Module
>Reporter: Yonik Seeley
>Assignee: Mikhail Khludnev
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-8998-api-doc.patch, SOLR-8998-doc.patch, 
> SOLR-8998.patch, SOLR-8998.patch, SOLR-8998.patch, SOLR_8998.patch, 
> SOLR_8998.patch, SOLR_8998.patch
>
>
> The JSON Facet API currently has the ability to map between parents and 
> children ( see http://yonik.com/solr-nested-objects/ )
> This issue is about adding a true rollup ability where parents would take on 
> derived values from their children.  The most important part (and the most 
> difficult part) will be the external API.
> [~mkhludnev] says
> {quote}
> The bottom line is to introduce the {{uniqueBlock(\_root_)}} aggregation, 
> which is supposed to be faster than {{unique(\_root_)}} and is intended for 
> block indexes only. For now it supports single-valued string fields; 
> docValues usually make sense.
> {quote}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12303) Subquery Doc transform doesn't inherit path from original request

2018-05-06 Thread Mikhail Khludnev (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465121#comment-16465121
 ] 

Mikhail Khludnev commented on SOLR-12303:
-

Oh.. right, [~munendrasn]! Attached the reproducer; see 
{{TestSubQueryTransformerDistrib.testNoSelect()}}.

> Subquery Doc transform doesn't inherit path from original request
> -
>
> Key: SOLR-12303
> URL: https://issues.apache.org/jira/browse/SOLR-12303
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, 
> SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch
>
>
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc=AND=json={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})=false=uniqueId=score=_children_:[subquery]=uniqueId=false=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}=1
> {code}
> For this request, even though the path is */search*, the subquery request 
> would be fired on handler */select*.
> Subquery request should inherit the parent request handler and there should 
> be an option to override this behavior. (option to override is already 
> available by specifying *qt*)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12303) Subquery Doc transform doesn't inherit path from original request

2018-05-06 Thread Mikhail Khludnev (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12303:

Attachment: SOLR-12303.patch

> Subquery Doc transform doesn't inherit path from original request
> -
>
> Key: SOLR-12303
> URL: https://issues.apache.org/jira/browse/SOLR-12303
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, 
> SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch
>
>
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc=AND=json={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})=false=uniqueId=score=_children_:[subquery]=uniqueId=false=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}=1
> {code}
> For this request, even though the path is */search*, the subquery request 
> would be fired on handler */select*.
> Subquery request should inherit the parent request handler and there should 
> be an option to override this behavior. (option to override is already 
> available by specifying *qt*)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12317) Improve EmptyEntityResolver to throw exceptions instead of silently returning an empty input stream

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465116#comment-16465116
 ] 

Uwe Schindler commented on SOLR-12317:
--

We should maybe also rename this class, as it no longer returns an empty 
stream. :-)

> Improve EmptyEntityResolver to throw exceptions instead of silently returning 
> an empty input stream
> ---
>
> Key: SOLR-12317
> URL: https://issues.apache.org/jira/browse/SOLR-12317
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> In the past we always secured all XML parsers used by Solr that consume XML 
> from the network so that they silently return an empty input stream for all 
> external entities. This was done so as not to break any client applications 
> at the time.
> Now, 5 years later, we should really just throw an Exception instead, so the 
> user is informed that they cannot pass external entities or xincludes to 
> those endpoints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12317) Improve EmptyEntityResolver to throw exceptions instead of silently returning an empty input stream

2018-05-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-12317:
-
Fix Version/s: master (8.0)
   7.4

> Improve EmptyEntityResolver to throw exceptions instead of silently returning 
> an empty input stream
> ---
>
> Key: SOLR-12317
> URL: https://issues.apache.org/jira/browse/SOLR-12317
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> In the past we always secured all XML parsers used by Solr that consume XML 
> from the network so that they silently return an empty input stream for all 
> external entities. This was done so as not to break any client applications 
> at the time.
> Now, 5 years later, we should really just throw an Exception instead, so the 
> user is informed that they cannot pass external entities or xincludes to 
> those endpoints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12317) Improve EmptyEntityResolver to throw exceptions instead of silently returning an empty input stream

2018-05-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-12317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-12317:
-
Affects Version/s: 7.3

> Improve EmptyEntityResolver to throw exceptions instead of silently returning 
> an empty input stream
> ---
>
> Key: SOLR-12317
> URL: https://issues.apache.org/jira/browse/SOLR-12317
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.3
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> In the past we always secured all XML parsers used by Solr that consume XML 
> from the network so that they silently return an empty input stream for all 
> external entities. This was done so as not to break any client applications 
> at the time.
> Now, 5 years later, we should really just throw an Exception instead, so the 
> user is informed that they cannot pass external entities or xincludes to 
> those endpoints.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12317) Improve EmptyEntityResolver to throw exceptions instead of silently returning an empty input stream

2018-05-06 Thread Uwe Schindler (JIRA)
Uwe Schindler created SOLR-12317:


 Summary: Improve EmptyEntityResolver to throw exceptions instead 
of silently returning an empty input stream
 Key: SOLR-12317
 URL: https://issues.apache.org/jira/browse/SOLR-12317
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Uwe Schindler
Assignee: Uwe Schindler


In the past we always secured all XML parsers used by Solr that consume XML 
from the network so that they silently return an empty input stream for all 
external entities. This was done so as not to break any client applications at 
the time.

Now, 5 years later, we should really just throw an Exception instead, so the 
user is informed that they cannot pass external entities or xincludes to those 
endpoints.
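A minimal sketch of the intended behavior (class name and message are 
illustrative, not the actual patch):
{code:java}
import org.xml.sax.EntityResolver;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;

// Instead of resolving every external entity to an empty stream, fail
// loudly so the user learns that external entities are rejected.
public final class RejectingEntityResolver implements EntityResolver {
  @Override
  public InputSource resolveEntity(String publicId, String systemId) throws SAXException {
    throw new SAXException("External entities are not allowed here: " + systemId);
  }
}
{code}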



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12303) Subquery Doc transform doesn't inherit path from original request

2018-05-06 Thread Munendra S N (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465107#comment-16465107
 ] 

Munendra S N edited comment on SOLR-12303 at 5/6/18 12:49 PM:
--

[~mkhludnev]
The reason for using *request.getPath()* instead of *qt* is that, for HTTP 
requests, there are cases where *qt* won't be set.
For client libraries (SolrJ) and test cases, *qt* needs to be set to use a 
different handler; for HTTP requests it is not required.
So, for an HTTP request, the subquery won't inherit the path (as *qt* need not 
be specified).
Hence, *request.getPath()* needs to be used.
For the client libraries, *qt* and *getPath()* would be the same; for an HTTP 
request they could differ.
Also, whenever *qt* is specified it is used, but the behavior mentioned in the 
documentation for HTTP requests is different:
 *qt* is used for an HTTP request only if the path is */select* and there is no 
handler registered for */select*
 [https://wiki.apache.org/solr/CoreQueryParameters]
 [https://wiki.apache.org/solr/SolrRequestHandler#Old_handleSelect.3Dtrue_Resolution_.28qt_param.29]
https://lucene.apache.org/solr/guide/7_0/major-changes-in-solr-7.html#changes-to-default-behaviors
  - handleSelect change

The path from *request.getPath()* doesn't handle one case:
 * When */select* is not configured and handleSelect=true, it should use *qt* (I 
can work on adding this functionality)


was (Author: munendrasn):
[~mkhludnev]
Reason for using *request.getPath()* instead of *qt* is that in case of HTTP 
Requests, there would be cases where *qt* won't be set.
For client library(SolrJ) and test cases *qt* needs to be set to use a 
different handler. For HTTP Requests, it is not required. 
So, In case of HTTP request subquery won't inherit path(as qt need not be 
specified).
Hence, *request.getPath()* needs to be used.
In case of the client libraries, *qt* and *getPath()* would be same. In case of 
HTTP request, it could be different.
Also, whenever *qt* is specified it is used but behavior mentioned in the 
documentation for HTTP request is different.
 *qt* is used for HTTP request if */select* is path and there is no handler 
with */select* 
 [https://wiki.apache.org/solr/CoreQueryParameters]
 
[https://wiki.apache.org/solr/SolrRequestHandler#Old_handleSelect.3Dtrue_Resolution_.28qt_param.29]
https://lucene.apache.org/solr/guide/7_2/requesthandlers-and-searchcomponents-in-solrconfig.html
  - handleSelect change

The path using *request.getPath()*, doesn't handle a case,
 * When */select* is not available and handleSelect=true. it should use *qt* (I 
can work on adding this functionality)

> Subquery Doc transform doesn't inherit path from original request
> -
>
> Key: SOLR-12303
> URL: https://issues.apache.org/jira/browse/SOLR-12303
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, 
> SOLR-12303.patch, SOLR-12303.patch
>
>
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc=AND=json={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})=false=uniqueId=score=_children_:[subquery]=uniqueId=false=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}=1
> {code}
> For this request, even though the path is */search*, the subquery request 
> would be fired on handler */select*.
> Subquery request should inherit the parent request handler and there should 
> be an option to override this behavior. (option to override is already 
> available by specifying *qt*)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12298) Index Full nested document Hierarchy For Queries (umbrella issue)

2018-05-06 Thread mosh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465110#comment-16465110
 ] 

mosh edited comment on SOLR-12298 at 5/6/18 12:44 PM:
--

A URP could be used to add those fields, but that will not prevent the need 
for a new JSON loader, since the current one requires _childDocument_ to be 
added at each level. This could be prevented only by writing a new JSON loader, 
which will override the 
[parseExtendedFieldValue|https://github.com/apache/lucene-solr/blob/1b760114216fcdfae138a8b37f183a9293c49115/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java#L564]
 function. The overridden function should populate the needed fields 
(_nestLevel_, _nestPath_, _nestParent_), since the current JSON loader does not 
support regular nested JSON. Does this warrant the addition of a new URP?

The _nestPath_ field will contain the name of the key at each level, e.g. 
"post.comment" for the child {"reply": "a"} in {"a": "b", "post": {"comment": 
{"reply": "a"}}}, the child Solr document ending up as {"reply": "a", 
"_nestLevel_": 2, "_nestParent_": parentId, "_root_": rootDocId, "_nestPath_": 
"post.comment"}.

Adding a "nest" prefix to each special field sounds like a good way to 
differentiate them from other fields.


was (Author: moshebla):
A URP could be used to add those fields, but that will not prevent the need 
for a new JSON loader, since the current one requires _childDocument_ to be 
added at each level. This could be prevented only by writing a new JSON loader, 
which will override the 
[parseExtendedFieldValue|https://github.com/apache/lucene-solr/blob/1b760114216fcdfae138a8b37f183a9293c49115/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java#L564]
 function. The overridden function should populate the needed fields 
(_nestLevel_, _nestPath_, _nestParent_), since the current JSON loader does not 
support regular nested JSON.

The _nestPath_ field will contain the name of the key at each level, e.g. 
"post.comment" for the child {"reply": "a"} in {"a": "b", "post": {"comment": 
{"reply": "a"}}}, the child Solr document ending up as {"reply": "a", 
"_nestLevel_": 2, "_nestParent_": parentId, "_root_": rootDocId, "_nestPath_": 
"post.comment"}.

Adding a "nest" prefix to each special field sounds like a good way to 
differentiate them from other fields.

> Index Full nested document Hierarchy For Queries (umbrella issue)
> -
>
> Key: SOLR-12298
> URL: https://issues.apache.org/jira/browse/SOLR-12298
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
>
> Solr ought to have the ability to index deeply nested objects, while storing 
> the original document hierarchy.
>  Currently the client has to index the child document's full path and level 
> to manually reconstruct the original document structure, since the children 
> are flattened and returned in the reserved "__childDocuments__" key.
> Ideally you could index a nested document, having Solr transparently add the 
> required fields while providing a document transformer to rebuild the 
> original document's hierarchy.
>  
> This issue is an umbrella issue for the particular tasks that will make it 
> all happen – either subtasks or issue linking.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12298) Index Full nested document Hierarchy For Queries (umbrella issue)

2018-05-06 Thread mosh (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465110#comment-16465110
 ] 

mosh commented on SOLR-12298:
-

A URP could be used to add those fields, but that will not prevent the need 
for a new JSON loader, since the current one requires _childDocument_ to be 
added at each level. This could be prevented only by writing a new JSON loader, 
which will override the 
[parseExtendedFieldValue|https://github.com/apache/lucene-solr/blob/1b760114216fcdfae138a8b37f183a9293c49115/solr/core/src/java/org/apache/solr/handler/loader/JsonLoader.java#L564]
 function. The overridden function should populate the needed fields 
(_nestLevel_, _nestPath_, _nestParent_), since the current JSON loader does not 
support regular nested JSON.

The _nestPath_ field will contain the name of the key at each level, e.g. 
"post.comment" for the child {"reply": "a"} in {"a": "b", "post": {"comment": 
{"reply": "a"}}}, the child Solr document ending up as {"reply": "a", 
"_nestLevel_": 2, "_nestParent_": parentId, "_root_": rootDocId, "_nestPath_": 
"post.comment"}.

Adding a "nest" prefix to each special field sounds like a good way to 
differentiate them from other fields.
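For illustration, a sketch of the child document such a loader/URP would 
produce for the example above, built with SolrJ; the field values are made up:
{code:java}
import org.apache.solr.common.SolrInputDocument;

// The {"reply": "a"} child from {"a": "b", "post": {"comment": {"reply": "a"}}},
// with the proposed nest fields attached. Today the client would have to add
// these itself.
SolrInputDocument reply = new SolrInputDocument();
reply.addField("reply", "a");
reply.addField("_nestLevel_", 2);             // depth below the root document
reply.addField("_nestParent_", "parentId");   // id of the enclosing parent doc
reply.addField("_nestPath_", "post.comment"); // key path from the root

SolrInputDocument root = new SolrInputDocument();
root.addField("a", "b");
root.addChildDocument(reply); // indexed as one block with the root
{code}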

> Index Full nested document Hierarchy For Queries (umbrella issue)
> -
>
> Key: SOLR-12298
> URL: https://issues.apache.org/jira/browse/SOLR-12298
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
>
> Solr ought to have the ability to index deeply nested objects, while storing 
> the original document hierarchy.
>  Currently the client has to index the child document's full path and level 
> to manually reconstruct the original document structure, since the children 
> are flattened and returned in the reserved "__childDocuments__" key.
> Ideally you could index a nested document, having Solr transparently add the 
> required fields while providing a document transformer to rebuild the 
> original document's hierarchy.
>  
> This issue is an umbrella issue for the particular tasks that will make it 
> all happen – either subtasks or issue linking.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 585 - Unstable

2018-05-06 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/585/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/212/consoleText

[repro] Revision: dad48603aec715063fdcb71e11fe73599d63c3a2

[repro] Ant options: -DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9
[repro] Repro line:  ant test  -Dtestcase=SystemLogListenerTest 
-Dtests.method=test -Dtests.seed=DC428DD98AE32AE2 -Dtests.multiplier=2 
-Dtests.locale=lv-LV -Dtests.timezone=Asia/Jerusalem -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=DC428DD98AE32AE2 -Dtests.multiplier=2 
-Dtests.locale=es-CO -Dtests.timezone=Africa/Ndjamena -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] Repro line:  ant test  -Dtestcase=DistribJoinFromCollectionTest 
-Dtests.method=testNoScore -Dtests.seed=DC428DD98AE32AE2 -Dtests.multiplier=2 
-Dtests.locale=it-IT -Dtests.timezone=America/Juneau -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
1b760114216fcdfae138a8b37f183a9293c49115
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout dad48603aec715063fdcb71e11fe73599d63c3a2

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   DistribJoinFromCollectionTest
[repro]   SearchRateTriggerTest
[repro]   SystemLogListenerTest
[repro] ant compile-test

[...truncated 3316 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=15 
-Dtests.class="*.DistribJoinFromCollectionTest|*.SearchRateTriggerTest|*.SystemLogListenerTest"
 -Dtests.showOutput=onerror 
-DsmokeTestRelease.java9=/home/jenkins/tools/java/latest1.9 
-Dtests.seed=DC428DD98AE32AE2 -Dtests.multiplier=2 -Dtests.locale=it-IT 
-Dtests.timezone=America/Juneau -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 2102 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.DistribJoinFromCollectionTest
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
[repro]   2/5 failed: org.apache.solr.cloud.autoscaling.SystemLogListenerTest
[repro] git checkout 1b760114216fcdfae138a8b37f183a9293c49115

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Updated] (LUCENE-8291) Possible security issue when parsing XML documents containing external entity references

2018-05-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-8291:
--
Description: 
It appears that in QueryTemplateManager.java lines 149 and 198 and in 
DOMUtils.java line 204 XML is parsed without disabling external entity 
references (XXE). This is described in 
[http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
listed here: 
[https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]

All recent versions of lucene are affected.

  was:
It appears that in QueryTemplateManager.java lines 149 and 198 and in 
DOMUtils.java line 204 XML is parsed without disabling external entity 
references (XXE). This is described in 
[http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
listed here: 
[https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]

[https://www.cvedetails.com/cve/CVE-2014-6517/] is also related.

All recent versions of lucene are affected.


> Possible security issue when parsing XML documents containing external entity 
> references
> 
>
> Key: LUCENE-8291
> URL: https://issues.apache.org/jira/browse/LUCENE-8291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 7.2.1
>Reporter: Hendrik Saly
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8291.patch
>
>
> It appears that in QueryTemplateManager.java lines 149 and 198 and in 
> DOMUtils.java line 204 XML is parsed without disabling external entity 
> references (XXE). This is described in 
> [http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
> listed here: 
> [https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]
> All recent versions of lucene are affected.
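For reference, one common mitigation from the OWASP cheat sheet linked above, 
hardening a JAXP DocumentBuilderFactory; a general sketch, not the exact patch:
{code:java}
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.ParserConfigurationException;

// Forbid DOCTYPE declarations entirely, which blocks external entities
// (XXE) as well as entity-expansion attacks.
DocumentBuilder newHardenedBuilder() throws ParserConfigurationException {
  DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
  dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
  dbf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
  dbf.setXIncludeAware(false);
  dbf.setExpandEntityReferences(false);
  return dbf.newDocumentBuilder();
}
{code}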



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8291) Possible security issue when parsing XML documents containing external entity references

2018-05-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-8291:
--
Priority: Major  (was: Critical)

> Possible security issue when parsing XML documents containing external entity 
> references
> 
>
> Key: LUCENE-8291
> URL: https://issues.apache.org/jira/browse/LUCENE-8291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 7.2.1
>Reporter: Hendrik Saly
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8291.patch
>
>
> It appears that in QueryTemplateManager.java lines 149 and 198 and in 
> DOMUtils.java line 204 XML is parsed without disabling external entity 
> references (XXE). This is described in 
> [http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
> listed here: 
> [https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]
> [https://www.cvedetails.com/cve/CVE-2014-6517/] is also related.
> All recent versions of lucene are affected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8291) Possible security issue when parsing XML documents containing external entity references

2018-05-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-8291:
--
Labels:   (was: security)

> Possible security issue when parsing XML documents containing external entity 
> references
> 
>
> Key: LUCENE-8291
> URL: https://issues.apache.org/jira/browse/LUCENE-8291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 7.2.1
>Reporter: Hendrik Saly
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8291.patch
>
>
> It appears that in QueryTemplateManager.java lines 149 and 198 and in 
> DOMUtils.java line 204 XML is parsed without disabling external entity 
> references (XXE). This is described in 
> [http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
> listed here: 
> [https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]
> [https://www.cvedetails.com/cve/CVE-2014-6517/] is also related.
> All recent versions of lucene are affected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8291) Possible security issue when parsing XML documents containing external entity references

2018-05-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-8291:
--
Fix Version/s: master (8.0)
   7.4

> Possible security issue when parsing XML documents containing external entity 
> references
> 
>
> Key: LUCENE-8291
> URL: https://issues.apache.org/jira/browse/LUCENE-8291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 7.2.1
>Reporter: Hendrik Saly
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: security
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8291.patch
>
>
> It appears that in QueryTemplateManager.java lines 149 and 198 and in 
> DOMUtils.java line 204 XML is parsed without disabling external entity 
> references (XXE). This is described in 
> [http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
> listed here: 
> [https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]
> [https://www.cvedetails.com/cve/CVE-2014-6517/] is also related.
> All recent versions of lucene are affected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12303) Subquery Doc transform doesn't inherit path from original request

2018-05-06 Thread Munendra S N (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465107#comment-16465107
 ] 

Munendra S N edited comment on SOLR-12303 at 5/6/18 12:32 PM:
--

[~mkhludnev]
The reason for using *request.getPath()* instead of *qt* is that, for HTTP 
requests, there are cases where *qt* won't be set.
For client libraries (SolrJ) and test cases, *qt* needs to be set to use a 
different handler; for HTTP requests it is not required.
So, for an HTTP request, the subquery won't inherit the path (as *qt* need not 
be specified).
Hence, *request.getPath()* needs to be used.
For the client libraries, *qt* and *getPath()* would be the same; for an HTTP 
request they could differ.
Also, whenever *qt* is specified it is used, but the behavior mentioned in the 
documentation for HTTP requests is different:
 *qt* is used for an HTTP request only if the path is */select* and there is no 
handler registered for */select*
 [https://wiki.apache.org/solr/CoreQueryParameters]
 [https://wiki.apache.org/solr/SolrRequestHandler#Old_handleSelect.3Dtrue_Resolution_.28qt_param.29]
https://lucene.apache.org/solr/guide/7_2/requesthandlers-and-searchcomponents-in-solrconfig.html
  - handleSelect change

The path from *request.getPath()* doesn't handle one case:
 * When */select* is not configured and handleSelect=true, it should use *qt* (I 
can work on adding this functionality)


was (Author: munendrasn):
[~mkhludnev]
The reason for using *request.getPath()* instead of *qt* is that, for HTTP 
requests, there are cases where *qt* won't be set.
For client libraries (SolrJ) and test cases, *qt* needs to be set to use a 
different handler; for HTTP requests it is not required.
So, for an HTTP request, the subquery won't inherit the path (as *qt* need not 
be specified).
Hence, *request.getPath()* needs to be used.
For the client libraries, *qt* and *getPath()* would be the same; for an HTTP 
request they could differ.
Also, whenever *qt* is specified it is used, but the behavior mentioned in the 
documentation for HTTP requests is different:
 *qt* is used for an HTTP request only if the path is */select* and there is no 
handler registered for */select*
 [https://wiki.apache.org/solr/CoreQueryParameters]
 [https://wiki.apache.org/solr/SolrRequestHandler#Old_handleSelect.3Dtrue_Resolution_.28qt_param.29]

The path from *request.getPath()* doesn't handle one case:
 * When */select* is not configured and handleSelect=true, it should use *qt* (I 
can work on adding this functionality)

> Subquery Doc transform doesn't inherit path from original request
> -
>
> Key: SOLR-12303
> URL: https://issues.apache.org/jira/browse/SOLR-12303
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, 
> SOLR-12303.patch, SOLR-12303.patch
>
>
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc=AND=json={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})=false=uniqueId=score=_children_:[subquery]=uniqueId=false=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}=1
> {code}
> For this request, even though the path is */search*, the subquery request 
> would be fired on handler */select*.
> Subquery request should inherit the parent request handler and there should 
> be an option to override this behavior. (option to override is already 
> available by specifying *qt*)
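
As a workaround until inheritance is implemented, the handler can already be pinned explicitly from SolrJ via *qt*; a minimal sketch (the collection name, fields, and the *_children_.qt* override are illustrative assumptions, not confirmed API):
{code:java}
// Illustrative sketch: pinning handlers explicitly with qt.
// solrClient is an assumed, already-built SolrClient instance.
SolrQuery query = new SolrQuery("*:*");
query.set(CommonParams.QT, "/search");                // main request handler
query.setFields("uniqueId", "_children_:[subquery]");
query.set("_children_.q", "{!edismax qf=parentId v=$row.uniqueId}");
query.set("_children_.qt", "/search");                // hypothetical subquery override
QueryResponse rsp = solrClient.query("k_test", query);
{code}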



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12303) Subquery Doc transform doesn't inherit path from original request

2018-05-06 Thread Munendra S N (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465107#comment-16465107
 ] 

Munendra S N commented on SOLR-12303:
-

[~mkhludnev]
The reason for using *request.getPath()* instead of *qt* is that, for HTTP requests, there are cases where *qt* won't be set.
For client libraries (SolrJ) and test cases, *qt* needs to be set to use a different handler; for HTTP requests, it is not required.
So, for an HTTP request, the subquery won't inherit the path (as *qt* need not be specified).
Hence, *request.getPath()* needs to be used.
For the client libraries, *qt* and *getPath()* would be the same; for an HTTP request, they could differ.
Also, whenever *qt* is specified it is used, but the behavior described in the documentation for HTTP requests is different:
 *qt* is used for an HTTP request only when the path is */select* and no handler is registered for */select*
 [https://wiki.apache.org/solr/CoreQueryParameters]
 [https://wiki.apache.org/solr/SolrRequestHandler#Old_handleSelect.3Dtrue_Resolution_.28qt_param.29]

The path from *request.getPath()* doesn't handle one case:
 * When */select* is not available and handleSelect=true, *qt* should be used instead (I can work on adding this functionality)

> Subquery Doc transform doesn't inherit path from original request
> -
>
> Key: SOLR-12303
> URL: https://issues.apache.org/jira/browse/SOLR-12303
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Munendra S N
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12303.patch, SOLR-12303.patch, SOLR-12303.patch, 
> SOLR-12303.patch, SOLR-12303.patch
>
>
> {code:java}
> localhost:8983/solr/k_test/search?sort=score desc,uniqueId 
> desc=AND=json={!parent which=parent_field:true score=max}({!edismax 
> v=$origQuery})=false=uniqueId=score=_children_:[subquery]=uniqueId=false=parent_field&_children_.fl=uniqueId&_children_.fl=score&_children_.rows=3=false&_children_.q={!edismax
>  qf=parentId v=$row.uniqueId}=1
> {code}
> For this request, even though the path is */search*, the subquery request 
> would be fired on handler */select*.
> Subquery request should inherit the parent request handler and there should 
> be an option to override this behavior. (option to override is already 
> available by specifying *qt*)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-8291) Possible security issue when parsing XML documents containing external entity references

2018-05-06 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated LUCENE-8291:
--
Attachment: LUCENE-8291.patch

> Possible security issue when parsing XML documents containing external entity 
> references
> 
>
> Key: LUCENE-8291
> URL: https://issues.apache.org/jira/browse/LUCENE-8291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 7.2.1
>Reporter: Hendrik Saly
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: security
> Attachments: LUCENE-8291.patch
>
>
> It appears that in QueryTemplateManager.java lines 149 and 198 and in 
> DOMUtils.java line 204 XML is parsed without disabling external entity 
> references (XXE). This is described in 
> [http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
> listed here: 
> [https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]
> [https://www.cvedetails.com/cve/CVE-2014-6517/] is also related.
> All recent versions of lucene are affected.
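
For reference, the standard XXE hardening for a JAXP parser per the OWASP cheat sheet cited above looks roughly like this (a general sketch; the attached patch instead takes the route of removing the affected class):
{code:java}
// General hardening sketch, per the OWASP XXE prevention cheat sheet;
// not taken from LUCENE-8291.patch.
DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
// Disallowing DOCTYPE declarations blocks most XXE attacks outright.
dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
// Additionally disable external general and parameter entities.
dbf.setFeature("http://xml.org/sax/features/external-general-entities", false);
dbf.setFeature("http://xml.org/sax/features/external-parameter-entities", false);
dbf.setXIncludeAware(false);
dbf.setExpandEntityReferences(false);
DocumentBuilder builder = dbf.newDocumentBuilder();
{code}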



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8291) Possible security issue when parsing XML documents containing external entity references

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465106#comment-16465106
 ] 

Uwe Schindler commented on LUCENE-8291:
---

Here is a patch removing this class and its examples: [^LUCENE-8291.patch]

> Possible security issue when parsing XML documents containing external entity 
> references
> 
>
> Key: LUCENE-8291
> URL: https://issues.apache.org/jira/browse/LUCENE-8291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 7.2.1
>Reporter: Hendrik Saly
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: security
> Attachments: LUCENE-8291.patch
>
>
> It appears that in QueryTemplateManager.java lines 149 and 198 and in 
> DOMUtils.java line 204 XML is parsed without disabling external entity 
> references (XXE). This is described in 
> [http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
> listed here: 
> [https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]
> [https://www.cvedetails.com/cve/CVE-2014-6517/] is also related.
> All recent versions of lucene are affected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-7.x-Linux (32bit/jdk1.8.0_162) - Build # 32 - Still Unstable!

2018-05-06 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-7.x-Linux/32/
Java: 32bit/jdk1.8.0_162 -client -XX:+UseSerialGC

10 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([737D6CDC2EECE0F4]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([737D6CDC2EECE0F4]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([737D6CDC2EECE0F4]:0)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest

Error Message:
Suite timeout exceeded (>= 7200000 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 7200000 msec).
at __randomizedtesting.SeedInfo.seed([737D6CDC2EECE0F4]:0)


FAILED:  
org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons 
{seed=[737D6CDC2EECE0F4:B7B0B1611CFEA074]}

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([737D6CDC2EECE0F4]:0)


FAILED:  
org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons 
{seed=[737D6CDC2EECE0F4:B7B0B1611CFEA074]}

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([737D6CDC2EECE0F4]:0)


FAILED:  
org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons 
{seed=[737D6CDC2EECE0F4:B7B0B1611CFEA074]}

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([737D6CDC2EECE0F4]:0)


FAILED:  
org.apache.lucene.spatial3d.geom.RandomGeoPolygonTest.testCompareBigPolygons 
{seed=[737D6CDC2EECE0F4:B7B0B1611CFEA074]}

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([737D6CDC2EECE0F4]:0)


FAILED:  org.apache.solr.search.TestRealTimeGet.testStressGetRealtime

Error Message:
Captured an uncaught exception in thread: Thread[id=927, name=WRITER3, 
state=RUNNABLE, group=TGRP-TestRealTimeGet]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=927, name=WRITER3, state=RUNNABLE, 
group=TGRP-TestRealTimeGet]
at 
__randomizedtesting.SeedInfo.seed([5B5A67886CF5B028:C1546DEA4A86D38A]:0)
Caused by: java.lang.RuntimeException: org.apache.solr.common.SolrException: 
Exception writing document id 5 to the index; possible analysis error.
at __randomizedtesting.SeedInfo.seed([5B5A67886CF5B028]:0)
at 
org.apache.solr.search.TestRealTimeGet$1.run(TestRealTimeGet.java:706)
Caused by: org.apache.solr.common.SolrException: Exception writing document id 
5 to the index; possible analysis error.
at 
org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:246)
at 
org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:67)
at 
org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.doLocalAdd(DistributedUpdateProcessor.java:950)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1163)
at 
org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:633)
at 
org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.handleAdds(JsonLoader.java:501)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:145)
at 
org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:121)
at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:84)
at 
org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:97)
at 

[jira] [Comment Edited] (LUCENE-8291) Possible security issue when parsing XML documents containing external entity references

2018-05-06 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16465102#comment-16465102
 ] 

Uwe Schindler edited comment on LUCENE-8291 at 5/6/18 12:17 PM:


We will remove this class, as it is not really used in Lucene and Solr; it's just a convenience class.

In fact, it's not really a security issue, because the class is just a way for an application to use template XML files for the XML query parser, with property placeholders that get replaced. The XML file is not intended to be loaded from untrusted sources. Anybody doing that has misunderstood the whole class anyway and will fail to use it. So this looks like an issue reported by some automated code-safety testing tool.

For the template manager, the use case is: you have an XML/XSL file as a query template in your local JAR resources folder, and you use properties to replace the placeholders in the XML before passing it to the XML query parser. If used correctly, there is never any external possibility to inject XML, so there is no need to fix this. If there is a possibility to pass in an untrusted XML file, it's the application's fault, not Lucene's.

Nevertheless, as the above functionality can easily be done outside of Lucene, let's remove this class. It's mostly untested and not used in the wild (GitHub search).
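
Doing that replacement outside of Lucene is a few lines; a minimal sketch, assuming a classpath template with ${key} placeholders and a props map (all names here are illustrative):
{code:java}
// Illustrative app-side replacement for the removed convenience class:
// read a trusted template from local resources and fill in placeholders.
InputStream is = MyApp.class.getResourceAsStream("/query-template.xml");
String template;
try (Scanner scanner = new Scanner(is, "UTF-8")) {
  template = scanner.useDelimiter("\\A").next(); // read the whole stream
}
for (Map.Entry<String, String> e : props.entrySet()) {
  template = template.replace("${" + e.getKey() + "}", e.getValue());
}
// template now holds the concrete XML query, built entirely from trusted input
{code}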


was (Author: thetaphi):
We will remove this class, as it is not really used in Lucene and Solr; it's just a convenience class.

In fact, it's not really a security issue, because the class is just a way for an application to use template XML files for the XML query parser, with property placeholders that get replaced. The XML file is not intended to be loaded from untrusted sources. Anybody doing that has misunderstood the whole class anyway and will fail to use it. So this looks like an issue reported by some automated code-safety testing tool.

For the template manager, the use case is: you have an XML/XSL file as a query template in your resources folder, and you use properties to replace the placeholders in the XML before passing it to the XML query parser. If used correctly, there is never any external possibility to inject XML, so there is no need to fix this.

Nevertheless, as the above functionality can easily be done outside of Lucene, let's remove this class. It's mostly untested and not used in the wild (GitHub search).

> Possible security issue when parsing XML documents containing external entity 
> references
> 
>
> Key: LUCENE-8291
> URL: https://issues.apache.org/jira/browse/LUCENE-8291
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/queryparser
>Affects Versions: 7.2.1
>Reporter: Hendrik Saly
>Assignee: Uwe Schindler
>Priority: Critical
>  Labels: security
>
> It appears that in QueryTemplateManager.java lines 149 and 198 and in 
> DOMUtils.java line 204 XML is parsed without disabling external entity 
> references (XXE). This is described in 
> [http://cwe.mitre.org/data/definitions/611.html] and possible mitigations are 
> listed here: 
> [https://www.owasp.org/index.php/XML_External_Entity_(XXE)_Prevention_Cheat_Sheet]
> [https://www.cvedetails.com/cve/CVE-2014-6517/] is also related.
> All recent versions of lucene are affected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


