[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-11-ea+28) - Build # 2647 - Still Unstable!

2018-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2647/
Java: 64bit/jdk-11-ea+28 -XX:-UseCompressedOops -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.rule.RulesTest.testPortRuleInPresenceOfClusterPolicy

Error Message:
Could not find collection : portRuleColl2

Stack Trace:
org.apache.solr.common.SolrException: Could not find collection : portRuleColl2
at __randomizedtesting.SeedInfo.seed([93DC4D8C01701B80:2AF4C10CB6A951C8]:0)
at org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:118)
at org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at org.apache.solr.cloud.rule.RulesTest.testPortRuleInPresenceOfClusterPolicy(RulesTest.java:119)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:834)


FAILED:  

Re: [VOTE] Release PyLucene 7.4.0 (rc1)

2018-08-28 Thread Shalin Shekhar Mangar
+1

Built and ran tests on Ubuntu Linux 17.10 with Python 2.7.14 and JDK
1.8.0_181.

On Tue, Aug 28, 2018 at 11:35 PM Andi Vajda  wrote:

>
> The PyLucene 7.4.0 (rc1) release tracking the recent release of
> Apache Lucene 7.4.0 is ready.
>
> A release candidate is available from:
>https://dist.apache.org/repos/dist/dev/lucene/pylucene/7.4.0-rc1/
>
> PyLucene 7.4.0 is built with JCC 3.2 included in these release artifacts.
>
> JCC 3.2 supports Python 3.3+ (in addition to Python 2.3+).
> PyLucene may be built with Python 2 or Python 3.
>
> Please vote to release these artifacts as PyLucene 7.4.0.
> Anyone interested in this release can and should vote !
>
> Thanks !
>
> Andi..
>
> ps: the KEYS file for PyLucene release signing is at:
> https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
> https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS
>
> pps: here is my +1
>


-- 
Regards,
Shalin Shekhar Mangar.


[jira] [Updated] (SOLR-10697) Improve defaults for maxConnectionsPerHost

2018-08-28 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-10697:
-
Attachment: SOLR-10697.patch

> Improve defaults for maxConnectionsPerHost
> --
>
> Key: SOLR-10697
> URL: https://issues.apache.org/jira/browse/SOLR-10697
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-10697.patch, SOLR-10697.patch, SOLR-10697.patch
>
>
> Twice recently I've increased 
> {{HttpShardHandlerFactory#maxConnectionsPerHost}} at a client and it helped 
> improve query latencies a lot.
> Should we increase the default to, say, 100?
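For readers wanting to experiment with the suggestion above, a hypothetical solr.xml fragment raising the limit might look like the following (the element names follow the documented HttpShardHandlerFactory configuration; the values are illustrative only, not a recommendation from this thread):

```xml
<!-- Illustrative only: configure the shard handler factory with a
     higher per-host connection limit. -->
<shardHandlerFactory name="shardHandlerFactory"
                     class="HttpShardHandlerFactory">
  <int name="maxConnectionsPerHost">100</int>
</shardHandlerFactory>
```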



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10697) Improve defaults for maxConnectionsPerHost

2018-08-28 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595946#comment-16595946
 ] 

Varun Thacker commented on SOLR-10697:
--

Updated patch with CHANGES entry. I'll let Yetus validate this and then commit it tomorrow.




[jira] [Updated] (SOLR-10697) Improve defaults for maxConnectionsPerHost

2018-08-28 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-10697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-10697:
-
Attachment: SOLR-10697.patch




[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-08-28 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r213543737
  
--- Diff: 
solr/core/src/test/org/apache/solr/response/transform/TestChildDocTransformerHierarchy.java
 ---
@@ -124,10 +124,11 @@ public void testParentFilterLimitJSON() throws 
Exception {
 
 assertJQ(req("q", "type_s:donut",
 "sort", "id asc",
-"fl", "id, type_s, toppings, _nest_path_, [child limit=1]",
+"fl", "id, type_s, lonely, lonelyGrandChild, test_s, test2_s, _nest_path_, [child limit=1]",
--- End diff --

Leaving this as a TODO for another day sounds like a decent option.


---




[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-08-28 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r213541579
  
--- Diff: 
solr/core/src/test/org/apache/solr/response/transform/TestChildDocTransformerHierarchy.java
 ---
@@ -124,10 +124,11 @@ public void testParentFilterLimitJSON() throws 
Exception {
 
 assertJQ(req("q", "type_s:donut",
 "sort", "id asc",
-"fl", "id, type_s, toppings, _nest_path_, [child limit=1]",
+"fl", "id, type_s, lonely, lonelyGrandChild, test_s, test2_s, _nest_path_, [child limit=1]",
--- End diff --

To the point I wrote in JIRA: it's sad that when I see this I have no idea 
whether it's right or wrong without going to look at indexSampleData and thinking 
about it. No? (This isn't a critique of you in particular; lots of tests, 
including some I've written, look like the current tests here.) Imagine one 
doc with some nested docs, all of which only have their ID. Since they only 
have their ID, it's not a lot of literal text in JSON. The BeforeClass 
unmatched docs could use negative IDs to easily know who's who. Anyway, if you 
would rather leave this as a "TODO" for another day then I understand.


---




[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-08-28 Thread dsmiley
Github user dsmiley commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r213540255
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/ChildDocTransformer.java 
---
@@ -123,6 +124,16 @@ public void transform(SolrDocument rootDoc, int 
rootDocId) {
 
 // Do we need to do anything with this doc (either ancestor or 
matched the child query)
 if (isAncestor || childDocSet == null || 
childDocSet.exists(docId)) {
+
+  if(limit != -1) {
+if(!isAncestor) {
+  if(matches == limit) {
+continue;
+  }
+  ++matches;
--- End diff --

I think matches should be incremented if it's in childDocSet (which includes 
childDocSet being null). Whether it's an ancestor or not doesn't matter, I 
think. You could pull out a new variable isInChildDocSet. Or I suppose simply 
consider all of them a match, which I see you just did as I write this; that's 
fine too.
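The counting rule described in the comment above can be sketched as a standalone snippet (class and method names here are hypothetical illustrations, not code from the patch):

```java
import java.util.Set;

// Hypothetical, simplified sketch of the limit bookkeeping discussed above.
public class LimitSketch {

    /**
     * Counts how many candidate docs would be kept: a doc counts toward the
     * limit when it is in the child query's doc set (a null set means "all
     * docs match"), regardless of whether it is also an ancestor.
     */
    static int countKept(int[] docIds, Set<Integer> childDocSet, int limit) {
        int matches = 0;
        for (int docId : docIds) {
            boolean isInChildDocSet = (childDocSet == null) || childDocSet.contains(docId);
            if (isInChildDocSet) {
                if (limit != -1 && matches == limit) {
                    continue; // limit reached: skip further matching docs
                }
                ++matches;
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        // Four matching docs, limit 2: only two are kept.
        System.out.println(countKept(new int[]{1, 2, 3, 4}, null, 2)); // prints 2
        // limit == -1 means "no limit".
        System.out.println(countKept(new int[]{1, 2, 3}, null, -1)); // prints 3
    }
}
```

Under this reading, a doc that is both an ancestor and a child-query match consumes part of the limit, which matches the "consider all a match" interpretation.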


---




[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-11-ea+28) - Build # 22761 - Unstable!

2018-08-28 Thread Policeman Jenkins Server
Error processing tokens: Error while parsing action 
'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input 
position (line 79, pos 4):
)"}
   ^

java.lang.OutOfMemoryError: Java heap space


[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-28 Thread mosh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595903#comment-16595903
 ] 

mosh commented on SOLR-12519:
-

{quote}The actual number of docs returned could be more than the limit but it 
shouldn't be more than the number of intermediate parents. In the example above 
with limit 2, we'd get docB and docC with child docC.1 WDYT mosh?{quote}
Sure thing,
just pushed new commits with this new logic.

> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 24h 40m
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I also propose the transformer will also have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries. e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" which contain the key "e" 
> in them, the query will be broken in to two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}
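As a concrete illustration of the proposal, a request along these lines could express the example above (the parameter names follow Solr's [child] doc transformer; the exact syntax for this feature was still under discussion in the linked PR, so treat this as a sketch):

```
q={!parent which="a:b"}e:*
fl=*,[child childFilter="e:*" limit=10]
```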






[JENKINS] Lucene-Solr-7.x-Windows (64bit/jdk-10.0.1) - Build # 759 - Unstable!

2018-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/759/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.servlet.DirectSolrConnectionTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.servlet.DirectSolrConnectionTest_42E92417129B9DF1-001\init-core-data-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.servlet.DirectSolrConnectionTest_42E92417129B9DF1-001\init-core-data-001

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of attempts):
   C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.servlet.DirectSolrConnectionTest_42E92417129B9DF1-001\init-core-data-001: java.nio.file.AccessDeniedException: C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.servlet.DirectSolrConnectionTest_42E92417129B9DF1-001\init-core-data-001

at __randomizedtesting.SeedInfo.seed([42E92417129B9DF1]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:318)
at org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 15236 lines...]
   [junit4] Suite: org.apache.solr.servlet.DirectSolrConnectionTest
   [junit4]   2> Creating dataDir: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\solr-core\test\J1\temp\solr.servlet.DirectSolrConnectionTest_42E92417129B9DF1-001\init-core-data-001
   [junit4]   2> 4032285 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 4032286 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason="", value=0.0/0.0, ssl=0.0/0.0, 
clientAuth=0.0/0.0)
   [junit4]   2> 4032286 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 4032286 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.SolrTestCaseJ4 initCore
   [junit4]   2> 4032287 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.c.SolrResourceLoader [null] Added 2 libs to classloader, from paths: 
[/C:/Users/jenkins/workspace/Lucene-Solr-7.x-Windows/solr/core/src/test-files/solr/collection1/lib,
 
/C:/Users/jenkins/workspace/Lucene-Solr-7.x-Windows/solr/core/src/test-files/solr/collection1/lib/classes]
   [junit4]   2> 4032304 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.c.SolrConfig Using Lucene MatchVersion: 7.5.0
   [junit4]   2> 4032310 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.s.IndexSchema [null] Schema name=test
   [junit4]   2> 4032312 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.s.IndexSchema Loaded schema test/1.0 with uniqueid field id
   [junit4]   2> 4032440 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.c.TransientSolrCoreCacheDefault Allocating transient cache for 2147483647 
transient cores
   [junit4]   2> 4032440 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history 
in memory.
   [junit4]   2> 4032464 INFO  
(SUITE-DirectSolrConnectionTest-seed#[42E92417129B9DF1]-worker) [] 
o.a.s.m.r.SolrJmxReporter JMX monitoring for 

[jira] [Commented] (SOLR-12055) Enable async logging by default

2018-08-28 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595854#comment-16595854
 ] 

Erick Erickson commented on SOLR-12055:
---

In a word, "no". I tried that and the test doesn't pass. I've discovered other 
problems as well: basically I reformulated the three tests into one test that 
fires up a variable number (> 2) of watchers and tries to ensure that each one 
gets the expected message, and... that fails. Watcher 1 gets messages intended 
for watcher 2, and the like.

So I have to figure out some way to positively tie the watchers to the messages, 
I guess.

> Enable async logging by default
> ---
>
> Key: SOLR-12055
> URL: https://issues.apache.org/jira/browse/SOLR-12055
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12055-slh-interim1.patch, 
> SOLR-12055-slh-interim1.patch
>
>
> When SOLR-7887 is done, switching to async logging will be a simple change to 
> the config files for log4j2. This will reduce contention and increase 
> throughput generally and logging in particular.
> There's a discussion of the pros/cons here: 
> https://logging.apache.org/log4j/2.0/manual/async.html
> An alternative is to put a note in the Ref Guide about how to enable async 
> logging.
> I guess even if we enable async by default the ref guide still needs a note 
> about how to _disable_ it.
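For reference, the log4j2 manual linked above describes enabling fully asynchronous loggers via a JVM system property along these lines (shown as a sketch; verify the property name against the manual for the log4j2 version in use):

```
-DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
```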






[jira] [Commented] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595722#comment-16595722
 ] 

Steve Rowe commented on SOLR-12704:
---

+1, LGTM.

> NPE in AddSchemaFieldsUpdateProcessorFactory 
> -
>
> Key: SOLR-12704
> URL: https://issues.apache.org/jira/browse/SOLR-12704
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12704.patch, SOLR-12704.patch, SOLR-12704.patch, 
> SOLR-12704.patch
>
>
> Here's a stack trace from a Solr 7.2.1 instance where we hit an NPE 
> {code:java}
> ERROR - date; org.apache.solr.common.SolrException; 
> java.lang.NullPointerException
> at org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.mapValueClassesToFieldType(AddSchemaFieldsUpdateProcessorFactory.java:509)
> at org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:396)
> at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
> at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:92)
> at org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
> {code}
> I don't have the document that was causing this issue unfortunately. I'll 
> spend some time writing a test case to reproduce this






[jira] [Commented] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595717#comment-16595717
 ] 

Varun Thacker commented on SOLR-12704:
--

After speaking to Steve offline, I updated the patch. The code comments explain 
why we keep both an assert and a null check.




[jira] [Updated] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12704:
-
Attachment: SOLR-12704.patch




[jira] [Resolved] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-08-28 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12526.

Resolution: Not A Bug

Closing, as it is due to a bug in a custom auth plugin.

> Metrics History doesn't work with AuthenticationPlugin
> --
>
> Key: SOLR-12526
> URL: https://issues.apache.org/jira/browse/SOLR-12526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Affects Versions: 7.4
>Reporter: Michal Hlavac
>Priority: Critical
> Attachments: ProxyAuthPlugin.java
>
>
> Since Solr 7.4.0 there is a Metrics History feature that uses the SolrJ 
> client to make HTTP requests to Solr, but it doesn't work with 
> AuthenticationPlugin. Since it's enabled by default, errors appear in the 
> log every time {{MetricsHistoryHandler}} tries to collect data.
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://172.20.0.5:8983/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 401 require authentication
> 
> HTTP ERROR 401
> Problem accessing /solr/admin/metrics. Reason:
>     require authentication
> 
> 
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) 
> ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:292)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:1
> 4]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchMetrics(SolrClientNodeStateProvider.java:150)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:199)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18
> 16:55:14]
>    at 
> org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:111)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:495)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:368)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:230)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514) [?:?]
>    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
> [?:?]
>    at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  [?:?]
>    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
>  [?:?]
>    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
>    at java.lang.Thread.run(Thread.java:844) [?:?]
> {code}






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1110 - Failure

2018-08-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1110/

No tests ran.

Build Log:
[...truncated 23243 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2308 links (1861 relative) to 3132 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 

[jira] [Commented] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-08-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595706#comment-16595706
 ] 

Jan Høydahl commented on SOLR-12526:


Ok, the reason seems to be that your auth plugin implements 
{{HttpClientBuilderPlugin}}, which effectively disables 
{{PKIAuthenticationPlugin}}, see [GitHub 
link|https://github.com/apache/lucene-solr/blob/1d85cd783863f75cea133fb9c452302214165a4d/solr/core/src/java/org/apache/solr/security/PKIAuthenticationPlugin.java#L287].
 So in your HttpHeaderClientInterceptor#process you can delegate to the PKI 
plugin if it is an internal request, using e.g.:
{code:java}
coreContainer.getPkiAuthenticationPlugin().setHeader(request);
{code}
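To make the delegation concrete, here is a minimal stand-alone sketch of the decision flow. The types and header names are simplified stand-ins, not Solr's actual interceptor or plugin classes; the real code would call `coreContainer.getPkiAuthenticationPlugin().setHeader(request)` inside the custom interceptor's `process()`:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: models "apply external credentials if present,
// otherwise delegate internal (node-to-node) requests to the PKI plugin".
public class PkiDelegationSketch {

    static Map<String, String> process(boolean hasExternalCreds, boolean internalRequest) {
        Map<String, String> headers = new HashMap<>();
        if (hasExternalCreds) {
            // external caller: the custom auth plugin applies its own credentials
            headers.put("Authorization", "Bearer external-token");
        } else if (internalRequest) {
            // node-to-node request (e.g. MetricsHistoryHandler): delegate to PKI,
            // which adds its own signed header via setHeader(request)
            headers.put("SolrAuth", "pki-signed-token");
        }
        return headers;
    }
}
```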
I'll close this as not a bug.

> Metrics History doesn't work with AuthenticationPlugin
> --
>
> Key: SOLR-12526
> URL: https://issues.apache.org/jira/browse/SOLR-12526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Affects Versions: 7.4
>Reporter: Michal Hlavac
>Priority: Critical
> Attachments: ProxyAuthPlugin.java
>
>
> Since Solr 7.4.0 there is a Metrics History feature that uses the SolrJ 
> client to make HTTP requests to Solr, but it doesn't work with 
> AuthenticationPlugin. Since it's enabled by default, errors appear in the 
> log every time {{MetricsHistoryHandler}} tries to collect data.
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://172.20.0.5:8983/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 401 require authentication
> 
> HTTP ERROR 401
> Problem accessing /solr/admin/metrics. Reason:
>     require authentication
> 
> 
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) 
> ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:292)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:1
> 4]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchMetrics(SolrClientNodeStateProvider.java:150)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:199)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18
> 16:55:14]
>    at 
> org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:111)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:495)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:368)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:230)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514) [?:?]
>    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
> [?:?]
>    at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  [?:?]
>    at 
> 

[jira] [Commented] (SOLR-12055) Enable async logging by default

2018-08-28 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595687#comment-16595687
 ] 

Shawn Heisey commented on SOLR-12055:
-

If we're going to implement a hack to make a test pass, can we just wait for 
five seconds before checking for the log event?
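Rather than a fixed sleep, the test could poll with a deadline, so it returns as soon as the event arrives and only pays the full wait on failure. A generic sketch (names are illustrative, not the actual test code):

```java
import java.util.function.BooleanSupplier;

// Hedged sketch of an alternative to a fixed five-second sleep: poll for the
// log event with a deadline instead of always sleeping the full duration.
public class AwaitSketch {
    static boolean await(BooleanSupplier eventSeen, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!eventSeen.getAsBoolean()) {
            if (System.currentTimeMillis() >= deadline) {
                return false; // timed out without observing the event
            }
            try {
                Thread.sleep(50); // short poll interval keeps the success path fast
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return true;
    }
}
```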

> Enable async logging by default
> ---
>
> Key: SOLR-12055
> URL: https://issues.apache.org/jira/browse/SOLR-12055
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12055-slh-interim1.patch, 
> SOLR-12055-slh-interim1.patch
>
>
> When SOLR-7887 is done, switching to async logging will be a simple change to 
> the config files for log4j2. This will reduce contention and increase 
> throughput generally and logging in particular.
> There's a discussion of the pros/cons here: 
> https://logging.apache.org/log4j/2.0/manual/async.html
> An alternative is to put a note in the Ref Guide about how to enable async 
> logging.
> I guess even if we enable async by default the ref guide still needs a note 
> about how to _disable_ it.






[jira] [Commented] (SOLR-12692) Add hints/warnings for the ZK Status Admin UI

2018-08-28 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595686#comment-16595686
 ] 

Varun Thacker commented on SOLR-12692:
--

Also, I've often seen users forget to set the max snapshot count and then run 
out of disk space.

> Add hints/warnings for the ZK Status Admin UI
> -
>
> Key: SOLR-12692
> URL: https://issues.apache.org/jira/browse/SOLR-12692
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12692.patch, wrong_zk_warning.png, zk_ensemble.png
>
>
> Firstly, I love the new UI pages (ZK Status and Nodes). Thanks [~janhoy] 
> for all the great work!
> I set up a 3-node ZK ensemble to play around with the UI and am attaching 
> the screenshots for reference.
>  
> Here are a few suggestions I had:
>  # Let’s show Approximate Size in human-readable form. We can use 
> RamUsageEstimator#humanReadableUnits to calculate it.
>  # Show a warning symbol when the ensemble is standalone.
>  # If maxSessionTimeout < Solr's ZK_CLIENT_TIMEOUT, then ZK will only honor 
> up to the maxSessionTimeout value for the Solr->ZK connection. We could mark 
> that as a warning.
>  # If maxClientCnxns < live_nodes, should we show this in red? Each Solr 
> node connects to all ZK nodes, so if the number of nodes in the cluster is 
> high, maxClientCnxns should also be increased.
>  
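Suggestions 2–4 above boil down to simple threshold checks. A minimal sketch (method names, thresholds, and message wording are illustrative, not the actual Admin UI code):

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch of the ZK Status warning rules suggested in the issue.
public class ZkStatusWarnings {
    static List<String> warnings(int ensembleSize, long maxSessionTimeoutMs,
                                 long zkClientTimeoutMs, int maxClientCnxns, int liveNodes) {
        List<String> w = new ArrayList<>();
        if (ensembleSize == 1) {
            // suggestion #2: a standalone ensemble has no redundancy
            w.add("Ensemble is standalone");
        }
        if (maxSessionTimeoutMs < zkClientTimeoutMs) {
            // suggestion #3: ZK caps the effective Solr->ZK session timeout
            w.add("maxSessionTimeout (" + maxSessionTimeoutMs
                    + " ms) is below ZK_CLIENT_TIMEOUT (" + zkClientTimeoutMs + " ms)");
        }
        if (maxClientCnxns < liveNodes) {
            // suggestion #4: every Solr node connects to every ZK node
            w.add("maxClientCnxns (" + maxClientCnxns
                    + ") is below live_nodes (" + liveNodes + ")");
        }
        return w;
    }
}
```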






[jira] [Commented] (SOLR-12692) Add hints/warnings for the ZK Status Admin UI

2018-08-28 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595684#comment-16595684
 ] 

Varun Thacker commented on SOLR-12692:
--

Here's another tip I remembered:

If more than 5 ZooKeeper nodes exist, it makes sense to mark a couple of them 
as leaderServes=false for performance reasons.
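For reference, ZooKeeper exposes this as the leaderServes option (values yes/no, set via a Java system property). A hedged, illustrative fragment of how a dedicated-leader setup might be configured:

```
# Illustrative only: stop the leader from serving client connections,
# leaving it dedicated to coordination. Applied via JVM flags on the
# ZooKeeper servers; the actual values accepted are "yes"/"no".
-Dzookeeper.leaderServes=no
```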

> Add hints/warnings for the ZK Status Admin UI
> -
>
> Key: SOLR-12692
> URL: https://issues.apache.org/jira/browse/SOLR-12692
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12692.patch, wrong_zk_warning.png, zk_ensemble.png
>
>
> Firstly, I love the new UI pages (ZK Status and Nodes). Thanks [~janhoy] 
> for all the great work!
> I set up a 3-node ZK ensemble to play around with the UI and am attaching 
> the screenshots for reference.
>  
> Here are a few suggestions I had:
>  # Let’s show Approximate Size in human-readable form. We can use 
> RamUsageEstimator#humanReadableUnits to calculate it.
>  # Show a warning symbol when the ensemble is standalone.
>  # If maxSessionTimeout < Solr's ZK_CLIENT_TIMEOUT, then ZK will only honor 
> up to the maxSessionTimeout value for the Solr->ZK connection. We could mark 
> that as a warning.
>  # If maxClientCnxns < live_nodes, should we show this in red? Each Solr 
> node connects to all ZK nodes, so if the number of nodes in the cluster is 
> high, maxClientCnxns should also be increased.
>  






[jira] [Created] (SOLR-12712) use JSON reach DSL format for aggregations in JSON.Facet

2018-08-28 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-12712:
---

 Summary: use JSON reach DSL format for aggregations in JSON.Facet
 Key: SOLR-12712
 URL: https://issues.apache.org/jira/browse/SOLR-12712
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Mikhail Khludnev


h2. Context 

[Aggregations|https://lucene.apache.org/solr/guide/7_4/json-facet-api.html#aggregation-functions]
 are nested into facets to handle enclosing buckets. They are supplied as a 
string expression, which is handled by ValueSourceParser or so.
h2. Problem 

Passing a complex expression as a comma-separated list of arguments is 
problematic: it leads to verbose naming schemes or puzzling name-overload 
conventions with optional arguments; see SOLR-12711, SOLR-12325. For example, 
[StreamingExpressions|https://lucene.apache.org/solr/guide/6_6/streaming-expressions.html#StreamingExpressions-StreamingRequestsandResponses]
 use a name-value syntax that's more powerful. 
h2. Suggestion

Either introduce a JSON syntax for subfacet aggregations, or, if nested 
facets are able to aggregate enclosing buckets, introduce an expandable 
parser (plugin point) for JSON.Facet.
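As a purely hypothetical illustration of what such a JSON syntax could look like (this form does not exist in Solr; the {{func}} key and field names are invented), a name-value aggregation nested under a terms facet might read:

```json
{
  "facet": {
    "categories": {
      "type": "terms",
      "field": "cat",
      "facet": {
        "avg_price": {
          "func": "avg",
          "field": "price"
        }
      }
    }
  }
}
```

Compared to the flat string form {{"avg_price": "avg(price)"}}, optional arguments would get explicit names instead of positional overloads.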






[jira] [Updated] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12704:
-
Attachment: SOLR-12704.patch

> NPE in AddSchemaFieldsUpdateProcessorFactory 
> -
>
> Key: SOLR-12704
> URL: https://issues.apache.org/jira/browse/SOLR-12704
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12704.patch, SOLR-12704.patch, SOLR-12704.patch
>
>
> Here's a stack trace from a Solr 7.2.1 instance where we hit an NPE 
> {code:java}
> ERROR - date; org.apache.solr.common.SolrException; 
> java.lang.NullPointerException
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.mapValueClassesToFieldType(AddSchemaFieldsUpdateProcessorFactory.java:509)
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:396)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:92)
> at 
> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
> {code}
> Unfortunately, I don't have the document that was causing this issue. I'll 
> spend some time writing a test case to reproduce it.






[jira] [Commented] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595667#comment-16595667
 ] 

Varun Thacker commented on SOLR-12704:
--

With CHANGES entry 

> NPE in AddSchemaFieldsUpdateProcessorFactory 
> -
>
> Key: SOLR-12704
> URL: https://issues.apache.org/jira/browse/SOLR-12704
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12704.patch, SOLR-12704.patch, SOLR-12704.patch
>
>
> Here's a stack trace from a Solr 7.2.1 instance where we hit an NPE 
> {code:java}
> ERROR - date; org.apache.solr.common.SolrException; 
> java.lang.NullPointerException
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.mapValueClassesToFieldType(AddSchemaFieldsUpdateProcessorFactory.java:509)
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:396)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:92)
> at 
> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
> {code}
> Unfortunately, I don't have the document that was causing this issue. I'll 
> spend some time writing a test case to reproduce it.






[jira] [Created] (SOLR-12711) Count dominating child field values

2018-08-28 Thread Mikhail Khludnev (JIRA)
Mikhail Khludnev created SOLR-12711:
---

 Summary: Count dominating child field values
 Key: SOLR-12711
 URL: https://issues.apache.org/jira/browse/SOLR-12711
 Project: Solr
  Issue Type: New Feature
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Reporter: Mikhail Khludnev


h2. Context

{{uniqueBlock(_root_)}}, which was introduced in SOLR-8998, allows counting 
child-field facet values while grouping hits by parent, i.e. hitting every 
parent only once.
h2. Problem

How to count only the dominating child field value, i.e. if a product has 5 
Red SKUs and 2 Blue, it contributes {{Red(1)}}, {{Blue(0)}}.
h2. Suggestion

Introduce {{dominatingBlock(_root_)}}, which aggregates hits per parent, 
chooses the dominating value, and increments only it.
h2. Further Work

Judge the dominating value not by the number of child hits but by a given 
function value, e.g. pick the most popular, best-selling, or a random child 
field value as the dominating one.
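The proposed semantics can be modeled outside Solr as follows (plain-Java sketch, not the actual facet module; ties between child values are broken arbitrarily here):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Model of the proposed dominatingBlock counting: each parent contributes
// exactly one count, credited to its most frequent child value.
public class DominatingCount {
    static Map<String, Integer> dominatingCounts(List<List<String>> childValuesPerParent) {
        Map<String, Integer> result = new HashMap<>();
        for (List<String> children : childValuesPerParent) {
            // count child hits within this parent only
            Map<String, Integer> perParent = new HashMap<>();
            for (String v : children) {
                perParent.merge(v, 1, Integer::sum);
            }
            // pick the dominating value for this parent and increment only it
            String dominating = null;
            int best = 0;
            for (Map.Entry<String, Integer> e : perParent.entrySet()) {
                if (e.getValue() > best) {
                    best = e.getValue();
                    dominating = e.getKey();
                }
            }
            if (dominating != null) {
                result.merge(dominating, 1, Integer::sum);
            }
        }
        return result;
    }
}
```

A parent with 5 Red SKUs and 2 Blue contributes Red(1) and nothing to Blue, matching the example above.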






[jira] [Commented] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595655#comment-16595655
 ] 

Varun Thacker commented on SOLR-12704:
--

{quote}followup Jira to better validate all our ContentStreamLoader and 
SolrInputDocument/SolrInputField for not allowing null key's or values
{quote}
Filed SOLR-12710

> NPE in AddSchemaFieldsUpdateProcessorFactory 
> -
>
> Key: SOLR-12704
> URL: https://issues.apache.org/jira/browse/SOLR-12704
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12704.patch, SOLR-12704.patch
>
>
> Here's a stack trace from a Solr 7.2.1 instance where we hit an NPE 
> {code:java}
> ERROR - date; org.apache.solr.common.SolrException; 
> java.lang.NullPointerException
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.mapValueClassesToFieldType(AddSchemaFieldsUpdateProcessorFactory.java:509)
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:396)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:92)
> at 
> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
> {code}
> Unfortunately, I don't have the document that was causing this issue. I'll 
> spend some time writing a test case to reproduce it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-12710) ContentStreamLoader and SolrInputDocument/SolrInputField should not allow null keys or values

2018-08-28 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12710:


 Summary: ContentStreamLoader and SolrInputDocument/SolrInputField 
should not allow null keys or values
 Key: SOLR-12710
 URL: https://issues.apache.org/jira/browse/SOLR-12710
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


Today we can create a SolrInputDocument with null keys or values. We should 
validate them and throw IllegalArgumentException.

We should also validate ContentStreamLoader so that no null values creep in 
there.

 

Today this test would fail because the SolrInputDocument is not null 
(construction with nulls succeeds) and it also allows adding null keys and 
values:
{code:java}
SolrInputDocument doc = new SolrInputDocument(null, null);
assertNull(doc);{code}
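For illustration, the kind of guard this issue proposes could look like the 
following. This is a minimal sketch around a plain map, not the real 
SolrInputDocument API; the class and method names here are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the null guards proposed in SOLR-12710.
public class NullGuardedDocument {
    private final Map<String, Object> fields = new LinkedHashMap<>();

    public void setField(String name, Object value) {
        // Reject nulls eagerly, instead of letting them NPE deep in an
        // update processor chain later.
        if (name == null) {
            throw new IllegalArgumentException("field name must not be null");
        }
        if (value == null) {
            throw new IllegalArgumentException("field value must not be null");
        }
        fields.put(name, value);
    }

    public Object getFieldValue(String name) {
        return fields.get(name);
    }

    public static void main(String[] args) {
        NullGuardedDocument doc = new NullGuardedDocument();
        doc.setField("id", "1");
        try {
            doc.setField(null, "x");
            throw new AssertionError("null key was accepted");
        } catch (IllegalArgumentException expected) {
            System.out.println("null key rejected");
        }
    }
}
```

The point of failing at document-construction time is that the caller gets a 
clear message instead of the downstream NPE of SOLR-12704.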






[jira] [Updated] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12704:
-
Attachment: SOLR-12704.patch

> NPE in AddSchemaFieldsUpdateProcessorFactory 
> -
>
> Key: SOLR-12704
> URL: https://issues.apache.org/jira/browse/SOLR-12704
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12704.patch, SOLR-12704.patch
>
>
> Here's a stack trace from a Solr 7.2.1 instance where we hit an NPE 
> {code:java}
> ERROR - date; org.apache.solr.common.SolrException; 
> java.lang.NullPointerException
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.mapValueClassesToFieldType(AddSchemaFieldsUpdateProcessorFactory.java:509)
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:396)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:92)
> at 
> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
> {code}
> I don't have the document that was causing this issue unfortunately. I'll 
> spend some time writing a test case to reproduce this






[jira] [Commented] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595654#comment-16595654
 ] 

Varun Thacker commented on SOLR-12704:
--

Patch with better tests and a fix. I think we can commit this and open a 
follow-up Jira to validate ContentStreamLoader and 
SolrInputDocument/SolrInputField so they don't allow null keys or values.
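For context, the NPE at AddSchemaFieldsUpdateProcessorFactory.java:509 is 
consistent with a null field value reaching a getClass() call while mapping 
value classes to field types. The following is a minimal illustration of that 
failure mode and a defensive variant; it is a hypothetical sketch, not the 
attached patch.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Objects;
import java.util.stream.Collectors;

public class ValueClassMapping {

    // Mirrors the failure mode: getClass() on a null value throws an NPE.
    static List<Class<?>> valueClassesUnsafe(List<Object> values) {
        return values.stream()
                .map(Object::getClass)
                .collect(Collectors.toList());
    }

    // Defensive variant: skip null values before mapping them to classes.
    static List<Class<?>> valueClassesSafe(List<Object> values) {
        return values.stream()
                .filter(Objects::nonNull)
                .map(Object::getClass)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Object> values = Arrays.asList("text", null, 42L);
        try {
            valueClassesUnsafe(values);
        } catch (NullPointerException npe) {
            System.out.println("NPE, matching the reported stack trace");
        }
        System.out.println(valueClassesSafe(values));
    }
}
```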

 

> NPE in AddSchemaFieldsUpdateProcessorFactory 
> -
>
> Key: SOLR-12704
> URL: https://issues.apache.org/jira/browse/SOLR-12704
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12704.patch, SOLR-12704.patch
>
>






[jira] [Commented] (SOLR-12055) Enable async logging by default

2018-08-28 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595628#comment-16595628
 ] 

Erick Erickson commented on SOLR-12055:
---

What I _think_ I'm seeing is that there is a lag between when a log message is 
sent and when it gets written, which is only to be expected when we go async. So 
the test failure goes away with the following hack (and no, I'm not suggesting 
this as a "fix").

 
{code:java}
@Test
public void eoe() {
  LogWatcher watcher = LogWatcher.newRegisteredLogWatcher(config, null);

  assertEquals(watcher.getLastEvent(), -1);

  log.warn("This is a test message");
  assertTrue(watcher.getLastEvent() > -1);

  watcher = LogWatcher.newRegisteredLogWatcher(config, null);

  assertEquals(watcher.getLastEvent(), -1);

  log.warn("This is a test message");
  long last = -1;
  for (int idx = 0; last == -1 && idx < 10; ++idx) {
    last = watcher.getLastEvent();
    System.out.println("lastEvent: " + last);
  }
  assertTrue(watcher.getLastEvent() > -1);
}
{code}
After a few rounds, the printed value of last changes from -1 to something > -1 
and the test succeeds.

I've poked around briefly and don't see anything to ensure that all sent 
messages have been recorded, nor anything to flush the queue. I'll keep digging, 
but if anyone knows off the top of their head, please chime in.

Interestingly, sleeping fails (not that I like that solution either) and the 
first assert succeeds (although that may well be coincidence).

Digging...
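One common shape for such a wait, instead of the bare polling loop above, is a 
bounded poll with a short sleep. Below is a sketch of a test utility, under the 
assumption that the watcher's getLastEvent() eventually becomes > -1 once the 
async queue drains; the supplier stands in for the real LogWatcher.

```java
import java.util.function.LongSupplier;

// Hypothetical test helper: poll a supplier until it reports an event
// (a value > -1) or the timeout expires. Not part of any SOLR-12055 patch.
public class AwaitEvent {

    static long awaitLastEvent(LongSupplier lastEvent, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        long last = lastEvent.getAsLong();
        while (last == -1 && System.currentTimeMillis() < deadline) {
            Thread.sleep(10); // give the async appender a chance to flush
            last = lastEvent.getAsLong();
        }
        return last;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Simulated watcher: the "event" becomes visible only after ~50 ms.
        long last = awaitLastEvent(
            () -> System.currentTimeMillis() - start > 50 ? start : -1, 1000);
        System.out.println(last > -1 ? "event observed" : "timed out");
    }
}
```

A bounded poll like this at least fails with a timeout rather than hanging, but 
it is still a workaround for the missing flush/queue-drain hook.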

 

> Enable async logging by default
> ---
>
> Key: SOLR-12055
> URL: https://issues.apache.org/jira/browse/SOLR-12055
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Attachments: SOLR-12055-slh-interim1.patch, 
> SOLR-12055-slh-interim1.patch
>
>
> When SOLR-7887 is done, switching to async logging will be a simple change to 
> the config files for log4j2. This will reduce contention and increase 
> throughput generally and logging in particular.
> There's a discussion of the pros/cons here: 
> https://logging.apache.org/log4j/2.0/manual/async.html
> An alternative is to put a note in the Ref Guide about how to enable async 
> logging.
> I guess even if we enable async by default the ref guide still needs a note 
> about how to _disable_ it.
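For reference, the log4j2 manual linked above describes two ways to go async. A 
sketch of the mixed sync/async setup (element names are from the log4j2 manual, 
not Solr's actual shipped config):

```xml
<!-- Mixed mode: route the root logger through the async path. -->
<Configuration>
  <Appenders>
    <File name="file" fileName="logs/solr.log">
      <PatternLayout pattern="%d %-5p %c{1} %m%n"/>
    </File>
  </Appenders>
  <Loggers>
    <AsyncRoot level="info">
      <AppenderRef ref="file"/>
    </AsyncRoot>
  </Loggers>
</Configuration>
```

The all-async alternative is enabled via the Log4jContextSelector system 
property (set to org.apache.logging.log4j.core.async.AsyncLoggerContextSelector) 
and requires the LMAX Disruptor jar on the classpath.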






[jira] [Commented] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-08-28 Thread Michal Hlavac (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595601#comment-16595601
 ] 

Michal Hlavac commented on SOLR-12526:
--

Truth is that I didn't realize that it might work with the standard 
authentication plugins. I am OK with metrics history being disabled, so in the 
end it doesn't look like a bug.

> Metrics History doesn't work with AuthenticationPlugin
> --
>
> Key: SOLR-12526
> URL: https://issues.apache.org/jira/browse/SOLR-12526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Affects Versions: 7.4
>Reporter: Michal Hlavac
>Priority: Critical
> Attachments: ProxyAuthPlugin.java
>
>
> Since Solr 7.4.0 there is Metrics History, which uses the SolrJ client to make 
> HTTP requests to Solr. But it doesn't work with AuthenticationPlugin. Since 
> it's enabled by default, there are errors in the log every time 
> {{MetricsHistoryHandler}} tries to collect data.
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://172.20.0.5:8983/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 401 require authentication
> 
> HTTP ERROR 401
> Problem accessing /solr/admin/metrics. Reason:
>     require authentication
> 
> 
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) 
> ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:292)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:1
> 4]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchMetrics(SolrClientNodeStateProvider.java:150)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:199)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18
> 16:55:14]
>    at 
> org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:111)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:495)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:368)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:230)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514) [?:?]
>    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
> [?:?]
>    at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  [?:?]
>    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
>  [?:?]
>    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
>    at java.lang.Thread.run(Thread.java:844) [?:?]
> {code}




[jira] [Commented] (SOLR-12634) Add gaussfit Stream Evaluator

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595599#comment-16595599
 ] 

ASF subversion and git services commented on SOLR-12634:


Commit 751519909ce7c00bcd85f7767b1f93e1bc9b3b94 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7515199 ]

SOLR-12634: Add gaussfit to the Math Expressions user guide
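For readers of the guide, usage is presumably along these lines; this is a 
sketch only (the x/y data are made up), and the Math Expressions guide is the 
authoritative reference for the syntax:

```
let(x=array(1, 2, 3, 4, 5, 6, 7),
    y=array(1, 3, 8, 12, 8, 3, 1),
    fit=gaussfit(x, y))
```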


> Add gaussfit Stream Evaluator
> -
>
> Key: SOLR-12634
> URL: https://issues.apache.org/jira/browse/SOLR-12634
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12634.patch
>
>
> The gaussFit Stream Evaluator fits a Gaussian curve to a data set.






[jira] [Commented] (SOLR-12634) Add gaussfit Stream Evaluator

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595598#comment-16595598
 ] 

ASF subversion and git services commented on SOLR-12634:


Commit 1cfc735fff05bd2287adb85a6c8ad28ed96926b7 in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1cfc735 ]

SOLR-12634: Add gaussfit to the Math Expressions user guide


> Add gaussfit Stream Evaluator
> -
>
> Key: SOLR-12634
> URL: https://issues.apache.org/jira/browse/SOLR-12634
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Fix For: 7.5
>
> Attachments: SOLR-12634.patch
>
>
> The gaussFit Stream Evaluator fits a Gaussian curve to a data set.






[jira] [Resolved] (SOLR-232) let Solr set request headers (for logging)

2018-08-28 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-232.
--
Resolution: Won't Fix

Closing ancient issue - we already have all of this in the logs.

> let Solr set request headers (for logging)
> --
>
> Key: SOLR-232
> URL: https://issues.apache.org/jira/browse/SOLR-232
> Project: Solr
>  Issue Type: New Feature
> Environment: tomcat?
>Reporter: Ian Holsman
>Priority: Minor
> Attachments: SOLR-232.patch, meta.patch
>
>
> I need the ability to log certain information about a request so that I can 
> feed it into performance and capacity monitoring systems.
> I would like to know things like
> - how long the request took 
> - how many rows were fetched and returned
> - what handler was called.
> per request.
> the following patch is 1 way to implement this, I'm sure there are better 
> ways.






[jira] [Commented] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-08-28 Thread Michal Hlavac (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1659#comment-1659
 ] 

Michal Hlavac commented on SOLR-12526:
--

[~janhoy], sorry, but I didn't have much time for that. Anyway, I can attach the 
source code. Basically it's a very simple authentication plugin: 
[^ProxyAuthPlugin.java] only checks for a specific HTTP header set by another 
HTTP server (e.g. Apache httpd) and trusts it.

What is specific about this plugin is that I need to know the username on every 
node of the cluster as well, including when asking for a response from a 
specific shard, because I use document-level authorization.

It's possible that I'm doing something wrong. Thanks
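The trust decision described above can be sketched as follows. This is a 
framework-free illustration of the idea with hypothetical names, not the 
attached ProxyAuthPlugin.java, which implements it against Solr's authentication 
plugin API:

```java
import java.util.Map;
import java.util.Set;

public class HeaderTrust {
    private final Set<String> trustedProxies; // addresses of fronting HTTP servers
    private final String userHeader;          // header carrying the username

    public HeaderTrust(Set<String> trustedProxies, String userHeader) {
        this.trustedProxies = trustedProxies;
        this.userHeader = userHeader;
    }

    // Returns the username, or null when the request must not be trusted.
    public String authenticate(String remoteAddr, Map<String, String> headers) {
        if (!trustedProxies.contains(remoteAddr)) {
            return null; // an arbitrary client could forge the header
        }
        return headers.get(userHeader);
    }

    public static void main(String[] args) {
        HeaderTrust trust = new HeaderTrust(Set.of("10.0.0.1"), "X-Remote-User");
        // Trusted proxy: the header is believed.
        System.out.println(trust.authenticate("10.0.0.1",
                Map.of("X-Remote-User", "alice")));
        // Unknown origin: the same header is ignored.
        System.out.println(trust.authenticate("203.0.113.9",
                Map.of("X-Remote-User", "mallory")));
    }
}
```

In a SolrCloud setup, the same check would also have to cover inter-node 
requests, which is exactly the username-propagation problem described in the 
comment above.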

> Metrics History doesn't work with AuthenticationPlugin
> --
>
> Key: SOLR-12526
> URL: https://issues.apache.org/jira/browse/SOLR-12526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Affects Versions: 7.4
>Reporter: Michal Hlavac
>Priority: Critical
> Attachments: ProxyAuthPlugin.java
>
>

[jira] [Updated] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-08-28 Thread Michal Hlavac (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michal Hlavac updated SOLR-12526:
-
Attachment: ProxyAuthPlugin.java

> Metrics History doesn't work with AuthenticationPlugin
> --
>
> Key: SOLR-12526
> URL: https://issues.apache.org/jira/browse/SOLR-12526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Affects Versions: 7.4
>Reporter: Michal Hlavac
>Priority: Critical
> Attachments: ProxyAuthPlugin.java
>
>






[jira] [Issue Comment Deleted] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-08-28 Thread Michal Hlavac (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michal Hlavac updated SOLR-12526:
-
Comment: was deleted

(was: [~janhoy], sorry, but I didn't have too much time for that. Anyway I can 
attach source code. Basicaly it's very simple authentication plugin. 
{{AuthProxyPlugin}} only checks for)

> Metrics History doesn't work with AuthenticationPlugin
> --
>
> Key: SOLR-12526
> URL: https://issues.apache.org/jira/browse/SOLR-12526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Affects Versions: 7.4
>Reporter: Michal Hlavac
>Priority: Critical
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, 

[jira] [Commented] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-08-28 Thread Michal Hlavac (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595548#comment-16595548
 ] 

Michal Hlavac commented on SOLR-12526:
--

[~janhoy], sorry, but I didn't have much time for that. Anyway, I can attach the 
source code. Basically it's a very simple authentication plugin. 
{{AuthProxyPlugin}} only checks for

> Metrics History doesn't work with AuthenticationPlugin
> --
>
> Key: SOLR-12526
> URL: https://issues.apache.org/jira/browse/SOLR-12526
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics
>Affects Versions: 7.4
>Reporter: Michal Hlavac
>Priority: Critical
>
> Since Solr 7.4.0 there is Metrics History, which uses the SolrJ client to make 
> HTTP requests to Solr. But it doesn't work with AuthenticationPlugin. Since 
> it's enabled by default, there are errors in the log every time 
> {{MetricsHistoryHandler}} tries to collect data.
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://172.20.0.5:8983/solr: Expected mime type 
> application/octet-stream but got text/html. 
> 
> 
> Error 401 require authentication
> 
> HTTP ERROR 401
> Problem accessing /solr/admin/metrics. Reason:
>     require authentication
> 
> 
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219) 
> ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$ClientSnitchCtx.invoke(SolrClientNodeStateProvider.java:292)
>  ~[solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:1
> 4]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.fetchMetrics(SolrClientNodeStateProvider.java:150)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider$AutoScalingSnitch.getRemoteInfo(SolrClientNodeStateProvider.java:199)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18
> 16:55:14]
>    at 
> org.apache.solr.common.cloud.rule.ImplicitSnitch.getTags(ImplicitSnitch.java:76)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:111)
>  [solr-solrj-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:14]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectGlobalMetrics(MetricsHistoryHandler.java:495)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.collectMetrics(MetricsHistoryHandler.java:368)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> org.apache.solr.handler.admin.MetricsHistoryHandler.lambda$new$0(MetricsHistoryHandler.java:230)
>  [solr-core-7.4.0.jar:7.4.0 9060ac689c270b02143f375de0348b7f626adebc - 
> jpountz - 2018-06-18 16:55:13]
>    at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:514) [?:?]
>    at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
> [?:?]
>    at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
>  [?:?]
>    at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1135)
>  [?:?]
>    at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
>  [?:?]
>    at java.lang.Thread.run(Thread.java:844) [?:?]
> {code}
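The mismatch in the trace above — the client expects {{application/octet-stream}} (javabin) but the auth layer answers with Jetty's HTML 401 page — can be illustrated in miniature. The class and method names here are hypothetical stand-ins, not SolrJ code:

```java
public class MimeMismatchSketch {

    // Mimics HttpSolrClient's behavior of rejecting a response whose
    // Content-Type does not match the expected javabin MIME type.
    static String parseResponse(String contentType, String body, String expectedMime) {
        if (!contentType.startsWith(expectedMime)) {
            throw new IllegalStateException(
                "Expected mime type " + expectedMime + " but got " + contentType);
        }
        return body;
    }

    public static void main(String[] args) {
        try {
            // An unauthenticated internal request gets the HTML error page back.
            parseResponse("text/html", "<h2>HTTP ERROR 401</h2>", "application/octet-stream");
        } catch (IllegalStateException e) {
            // prints: Expected mime type application/octet-stream but got text/html
            System.out.println(e.getMessage());
        }
    }
}
```

This is why the log shows a {{RemoteSolrException}} about MIME types rather than a plain 401: the content-type check fires before any status-code handling.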



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12526) Metrics History doesn't work with AuthenticationPlugin

2018-08-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12526?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595538#comment-16595538
 ] 

Jan Høydahl commented on SOLR-12526:


[~hlavki] Any luck reproducing the issue with the BasicAuth plugin? If not, can 
you tell us more about how your custom auth plugin works?







[jira] [Comment Edited] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-08-28 Thread Edward Ribeiro (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595370#comment-16595370
 ] 

Edward Ribeiro edited comment on SOLR-5163 at 8/28/18 8:03 PM:
---

Hi [~Charles Sanders], a couple of questions about your patch (congrats for 
contributing, btw!):
{code:java}
validateQueryFields(req);{code}
You pass req, but req is only used to get the Schema, so why not pass the 
schema, i.e., validateQueryFields(req.getSchema())?
{code:java}
protected void validateQueryFields(SolrQueryRequest req) throws SyntaxError {
  if (queryFields == null || queryFields.isEmpty()) {
    throw new SyntaxError("No query fields given.");
  }{code}
If qf is not specified then the parser will fall back to df (or throw an 
exception if neither is specified). Therefore, even though this if clause is a 
nice defensive guard, I don't think it is really worth it now, because if 
queryFields is empty the error will be thrown before reaching this method. And 
even if it is empty, the result is just that the for-loop is not traversed.

Finally, 
{code:java}
req.getSchema().getFields().keySet(){code}
could be extracted to a variable before entering the loop, instead of being 
called for each field.

 

Best regards!
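A minimal standalone sketch of the two suggestions above (pass only what the method needs, and look up the schema's field-name set once rather than per iteration). {{SolrQueryRequest}} and {{SyntaxError}} are replaced with plain JDK types, so this is illustrative only, not the patch's actual code:

```java
import java.util.List;
import java.util.Set;

public class ValidateQueryFieldsSketch {

    // Takes the schema's field names directly instead of the whole request;
    // the caller computes the set once, so it is not rebuilt per iteration.
    // IllegalArgumentException stands in for Solr's SyntaxError here.
    static void validateQueryFields(Set<String> schemaFieldNames, List<String> queryFields) {
        for (String field : queryFields) {
            if (!schemaFieldNames.contains(field)) {
                throw new IllegalArgumentException(
                    "qf refers to nonexistent field: " + field);
            }
        }
    }

    public static void main(String[] args) {
        Set<String> schema = Set.of("title", "body");
        validateQueryFields(schema, List.of("title", "body")); // passes silently
        try {
            validateQueryFields(schema, List.of("title", "nope"));
        } catch (IllegalArgumentException e) {
            // prints: qf refers to nonexistent field: nope
            System.out.println(e.getMessage());
        }
    }
}
```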


was (Author: eribeiro):
Hi [~Charles Sanders], a couple of questions about your patch (congrats for 
contributing, btw!):
{code:java}
validateQueryFields(req);{code}
You pass req, but req is only used to get the Schema, so why not pass the 
schema, i.e., validateQueryFields(req.getSchema())?
{code:java}
protected void validateQueryFields(SolrQueryRequest req) throws SyntaxError {
  if (queryFields == null || queryFields.isEmpty()) {
    throw new SyntaxError("No query fields given.");
  }{code}
If qf is not specified then the parser will fall back to df (or throw an 
exception if neither is specified). Therefore, even though this if clause is a 
nice defensive guard, I don't think it is really worth it now, because if 
queryFields is empty the error will be thrown before reaching this method. And 
even if it is empty, the result is just that the for-loop is not traversed.

Finally, 
{code:java}
req.getSchema().getFields().keySet(){code}
could be extracted to a variable before entering the loop, instead of being 
called for each field.

 

Probably the test to be written should go here: 
https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/search/TestExtendedDismaxParser.java

Best regards!

> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Assignee: David Smiley
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't, edismax 
> will treat the AND as a term rather than an operator






[jira] [Comment Edited] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-08-28 Thread Edward Ribeiro (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595370#comment-16595370
 ] 

Edward Ribeiro edited comment on SOLR-5163 at 8/28/18 8:01 PM:
---

Hi [~Charles Sanders], a couple of questions about your patch (congrats for 
contributing, btw!):
{code:java}
validateQueryFields(req);{code}
You pass req, but req is only used to get the Schema, so why not pass the 
schema, i.e., validateQueryFields(req.getSchema())?
{code:java}
protected void validateQueryFields(SolrQueryRequest req) throws SyntaxError {
  if (queryFields == null || queryFields.isEmpty()) {
    throw new SyntaxError("No query fields given.");
  }{code}
If qf is not specified then the parser will fall back to df (or throw an 
exception if neither is specified). Therefore, even though this if clause is a 
nice defensive guard, I don't think it is really worth it now, because if 
queryFields is empty the error will be thrown before reaching this method. And 
even if it is empty, the result is just that the for-loop is not traversed.

Finally, 
{code:java}
req.getSchema().getFields().keySet(){code}
could be extracted to a variable before entering the loop, instead of being 
called for each field.

 

Probably the test to be written should go here: 
https://github.com/apache/lucene-solr/blob/master/solr/core/src/test/org/apache/solr/search/TestExtendedDismaxParser.java

Best regards!


was (Author: eribeiro):
Hi [~Charles Sanders], a couple of questions about your patch (congrats for 
contributing, btw!):


{code:java}
validateQueryFields(req);{code}
You pass req, but req is only used to get the Schema, so why not pass the 
schema, i.e., validateQueryFields(req.getSchema())?


{code:java}
protected void validateQueryFields(SolrQueryRequest req) throws SyntaxError {
  if (queryFields == null || queryFields.isEmpty()) {
    throw new SyntaxError("No query fields given.");
  }{code}
If qf is not specified then the parser will fall back to df (or throw an 
exception if neither is specified). Therefore, even though this if clause is a 
nice defensive guard, I don't think it is really worth it now, because if 
queryFields is empty the error will be thrown before reaching this method. And 
even if it is empty, the result is just that the for-loop is not traversed.

Finally, 
{code:java}
req.getSchema().getFields().keySet(){code}
could be extracted to a variable before entering the loop, instead of being 
called for each field.

Best regards!







[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2646 - Still Unstable!

2018-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2646/
Java: 64bit/jdk1.8.0_172 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster.testSearchRate

Error Message:
Captured an uncaught exception in thread: Thread[id=297, name=Simulated 
OverseerAutoScalingTriggerThread, state=RUNNABLE, group=Simulated Overseer 
autoscaling triggers]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=297, name=Simulated 
OverseerAutoScalingTriggerThread, state=RUNNABLE, group=Simulated Overseer 
autoscaling triggers]
Caused by: java.lang.NullPointerException
at __randomizedtesting.SeedInfo.seed([2101E7D53505ABE5]:0)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy.lambda$readPerReplicaAttrs$4(Policy.java:153)
at 
java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
at 
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1382)
at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
at 
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
at 
java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
at 
java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy.readPerReplicaAttrs(Policy.java:155)
at 
org.apache.solr.client.solrj.cloud.autoscaling.Policy.<init>(Policy.java:148)
at 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig.getPolicy(AutoScalingConfig.java:353)
at 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig.withTriggerConfigs(AutoScalingConfig.java:467)
at 
org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig.withTriggerConfig(AutoScalingConfig.java:478)
at 
org.apache.solr.cloud.autoscaling.OverseerTriggerThread.withDefaultTrigger(OverseerTriggerThread.java:379)
at 
org.apache.solr.cloud.autoscaling.OverseerTriggerThread.withAutoAddReplicasTrigger(OverseerTriggerThread.java:359)
at 
org.apache.solr.cloud.autoscaling.OverseerTriggerThread.run(OverseerTriggerThread.java:132)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12385 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.autoscaling.sim.TestSimLargeCluster_2101E7D53505ABE5-001/init-core-data-001
   [junit4]   2> 22468 WARN  
(SUITE-TestSimLargeCluster-seed#[2101E7D53505ABE5]-worker) [] 
o.a.s.SolrTestCaseJ4 startTrackingSearchers: numOpens=1 numCloses=1
   [junit4]   2> 22469 INFO  
(SUITE-TestSimLargeCluster-seed#[2101E7D53505ABE5]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=true
   [junit4]   2> 22470 INFO  
(SUITE-TestSimLargeCluster-seed#[2101E7D53505ABE5]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (true) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 22470 INFO  
(SUITE-TestSimLargeCluster-seed#[2101E7D53505ABE5]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 22555 DEBUG 
(SUITE-TestSimLargeCluster-seed#[2101E7D53505ABE5]-worker) [] 
o.a.s.c.a.s.SimClusterStateProvider --- new Overseer leader: 
127.0.0.1:1_solr
   [junit4]   2> 22594 INFO  
(SUITE-TestSimLargeCluster-seed#[2101E7D53505ABE5]-worker) [] 
o.a.s.h.a.MetricsHistoryHandler No .system collection, keeping metrics history 
in memory.
   [junit4] IGNOR/A 0.01s J0 | TestSimLargeCluster.testBasic
   [junit4]> Assumption #1: 'badapple' test group is disabled 
(@BadApple(bugUrl=https://issues.apache.org/jira/browse/SOLR-12028))
   [junit4]   2> 22634 INFO  
(TEST-TestSimLargeCluster.testSearchRate-seed#[2101E7D53505ABE5]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testSearchRate
   [junit4]   2> 22635 INFO  
(TEST-TestSimLargeCluster.testSearchRate-seed#[2101E7D53505ABE5]) [] 
o.a.s.c.a.s.SimCloudManager === Restarting OverseerTriggerThread and clearing 
object cache...
   [junit4]   2> 22635 DEBUG 
(TEST-TestSimLargeCluster.testSearchRate-seed#[2101E7D53505ABE5]) [] 
o.a.s.c.a.ScheduledTriggers Shutting down scheduled thread pool executor now
   [junit4]   2> 22635 DEBUG 
(TEST-TestSimLargeCluster.testSearchRate-seed#[2101E7D53505ABE5]) [] 
o.a.s.c.a.ScheduledTriggers Shutting down action executor now
   [junit4]   2> 22635 DEBUG 
(TEST-TestSimLargeCluster.testSearchRate-seed#[2101E7D53505ABE5]) [] 
o.a.s.c.a.ScheduledTriggers 

[jira] [Commented] (SOLR-12121) JWT Authentication plugin

2018-08-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595489#comment-16595489
 ] 

Jan Høydahl commented on SOLR-12121:


Pushed new commits to PR
 * Transfer of the Principal object as explained above
 * Also transfer Principal for update requests (SolrCmdDistributor)
 * Integration tests (currently using plain HTTP requests, not SolrJ)

 

> JWT Authentication plugin
> -
>
> Key: SOLR-12121
> URL: https://issues.apache.org/jira/browse/SOLR-12121
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: master (8.0), 7.5
>
> Attachments: image-2018-08-27-13-04-04-183.png
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> A new Authentication plugin that will accept a [Json Web 
> Token|https://en.wikipedia.org/wiki/JSON_Web_Token] (JWT) in the 
> Authorization header and validate it by checking the cryptographic signature. 
> The plugin will not perform the authentication itself but assert that the 
> user was authenticated by the service that issued the JWT token.
> JWT defines a number of standard claims; the user principal can be fetched 
> from the {{sub}} (subject) claim and passed on to Solr. The plugin will 
> always check the {{exp}} (expiry) claim and optionally enforce checks on the 
> {{iss}} (issuer) and {{aud}} (audience) claims.
> The first version of the plugin will only support RSA signing keys and will 
> support fetching the public key of the issuer through a [Json Web 
> Key|https://tools.ietf.org/html/rfc7517] (JWK) file, either from an https URL 
> or from a local file.
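As a rough illustration of the claim-handling order the issue describes (mandatory {{exp}}, optional {{iss}} and {{aud}}, principal taken from {{sub}}), here is a sketch over an already-parsed and signature-verified claims map. The method and types are hypothetical, not the plugin's API, and {{aud}} is assumed to be a plain string (RFC 7519 also allows an array):

```java
import java.time.Instant;
import java.util.Map;
import java.util.Optional;

public class JwtClaimCheckSketch {

    // Validates already-parsed JWT claims. Signature verification (RSA via a
    // JWK fetched over https or from a local file, per the issue) would have
    // happened before this step and is out of scope for the sketch.
    static boolean claimsValid(Map<String, Object> claims,
                               Optional<String> expectedIssuer,
                               Optional<String> expectedAudience,
                               Instant now) {
        Object exp = claims.get("exp");          // expiry is always enforced
        if (!(exp instanceof Number)
                || Instant.ofEpochSecond(((Number) exp).longValue()).isBefore(now)) {
            return false;
        }
        if (expectedIssuer.isPresent()
                && !expectedIssuer.get().equals(claims.get("iss"))) {
            return false;                        // optional issuer check
        }
        if (expectedAudience.isPresent()
                && !expectedAudience.get().equals(claims.get("aud"))) {
            return false;                        // optional audience check
        }
        return true;                             // principal comes from "sub"
    }

    public static void main(String[] args) {
        Map<String, Object> claims = Map.of(
            "sub", "alice",
            "exp", Instant.now().getEpochSecond() + 3600,
            "iss", "https://idp.example.com");
        // prints: true
        System.out.println(claimsValid(claims,
            Optional.of("https://idp.example.com"), Optional.empty(), Instant.now()));
    }
}
```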






[jira] [Commented] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595479#comment-16595479
 ] 

Varun Thacker commented on SOLR-12704:
--

Just a POC patch to demonstrate the problem. This test case will fail. 

While addressing the NPE, should we move this test to 
AddSchemaFieldsUpdateProcessorFactoryTest?

> NPE in AddSchemaFieldsUpdateProcessorFactory 
> -
>
> Key: SOLR-12704
> URL: https://issues.apache.org/jira/browse/SOLR-12704
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2.1
>Reporter: Varun Thacker
>Priority: Major
> Attachments: SOLR-12704.patch
>
>
> Here's a stack trace from a Solr 7.2.1 instance where we hit an NPE 
> {code:java}
> ERROR - date; org.apache.solr.common.SolrException; 
> java.lang.NullPointerException
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.mapValueClassesToFieldType(AddSchemaFieldsUpdateProcessorFactory.java:509)
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:396)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:92)
> at 
> org.apache.solr.handler.loader.JavabinLoader$1.update(JavabinLoader.java:98)
> {code}
> Unfortunately I don't have the document that was causing this issue. I'll 
> spend some time writing a test case to reproduce it.






[jira] [Updated] (SOLR-12704) NPE in AddSchemaFieldsUpdateProcessorFactory

2018-08-28 Thread Varun Thacker (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12704?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-12704:
-
Attachment: SOLR-12704.patch







[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr

2018-08-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595463#comment-16595463
 ] 

Jan Høydahl commented on SOLR-9272:
---

There's not much left for this to go in. I have not checked your proposal, Steve, 
about looking at {{SOLR_URL_SCHEME}}, but it sounds like a very good idea. Would 
you like to explore it as a last improvement, [~sarkaramr...@gmail.com]?

> Auto resolve zkHost for bin/solr zk for running Solr
> 
>
> Key: SOLR-9272
> URL: https://issues.apache.org/jira/browse/SOLR-9272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch
>
>
> Spinoff from SOLR-9194:
> We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already 
> running. We can optionally accept the {{-p}} parameter instead, and with that 
> use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's 
> easier to remember the Solr port than the ZK connection string.
> Example:
> {noformat}
> bin/solr start -c -p 9090
> bin/solr zk ls / -p 9090
> {noformat}






[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595456#comment-16595456
 ] 

David Smiley commented on SOLR-12519:
-

As I start to write out the notes on the change in semantics of "limit", and 
look back at the test, I think the limit interpretation is actually worse now.  
My bad (palm to face!). 

A document's children come first and are ordered left-to-right (low docID to 
high).  It's the intermediate parents that get placed after, so it's not quite 
as simple as strictly left-to-right or right-to-left when wanting an ideal 
"limit".  I don't 
think the semantics of "limit" should be changed for existing users; there is 
no path metadata and we might as well start at the lowest.  For a simple flat 
list of child docs, it's the right thing to do.

 (made-up syntax for a nested docA with some nested children)
{noformat}
docA:{ docB, docC:{ docC.1, docC.2}, docD}
{noformat}
Will get serialized/flattened like so:
{noformat}
docB, docC.1, docC.2, docC, docD, docA
{noformat}

Let's say we match all child docs (not filtered).
Consider a limit of 1.  Arguably, docB ought to be the sole child added.  
That's what happens currently, but soon it will be docD.  :-/
Consider a limit of 2.  Arguably, docB then docC ought to be added. That's 
_not_ what happens currently (docB & docC.1), and soon won't do that either 
(docC & docD).  But since we have the metadata, we are in a position to do it 
right.

Disclaimer: I didn't test out the above; it's all from intuition.

It's kinda embarrassing we didn't see this after discussing it a bit and 
"correcting" tests.  Maybe the testing methodology doesn't make this 
in-your-face enough?  I've advocated before for the virtues of testing an 
entire document structure as a string because everything is laid bare to see -- 
it's very _direct_; less to think about.  This goes hand-in-hand with indexing a 
simple document literally in the same test method as the test, instead of 
algorithmically generating documents (perhaps complex ones) in some other 
method.  There are certainly pros/cons both ways.

What might the fix be?  I think we should loop from the lowest docID underneath 
the parent (as it was before).  And as we go, we can accumulate a counter of 
how many docs have been added.  Once that counter reaches the limit, from that 
point forward we only add intermediate docs that are ancestors of 
already-accumulated docs (i.e. only collect ancestors).  The actual number of 
docs returned could exceed the limit, but not by more than the number of 
intermediate parents.  In the example above with limit 2, we'd get docB and 
docC with child docC.1.  WDYT 
[~moshebla]?
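To make the orders above concrete, here is a small standalone sketch. The `Doc` class and the descendant-map approach are made up for illustration (this is not Solr's actual transformer code): `flatten` reproduces the post-order serialization from the example, and `collectWithLimit` implements the proposed "count children up to the limit, then only collect ancestors" rule.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class NestedFlattenSketch {
    static final class Doc {
        final String name;
        final List<Doc> children;
        Doc(String name, Doc... children) {
            this.name = name;
            this.children = List.of(children);
        }
    }

    // Post-order walk: each doc's children are emitted (low docID first)
    // before the doc itself, matching the serialized order in the example.
    static void flatten(Doc doc, List<String> out) {
        for (Doc child : doc.children) {
            flatten(child, out);
        }
        out.add(doc.name);
    }

    // Proposed "limit" semantics: add the lowest-docID docs first; once the
    // limit is reached, only keep docs that are ancestors of something
    // already collected.
    static List<String> collectWithLimit(Doc root, int limit) {
        Map<String, Set<String>> descendants = new HashMap<>();
        fillDescendants(root, descendants);
        List<String> flat = new ArrayList<>();
        for (Doc child : root.children) {
            flatten(child, flat); // the matched parent itself is excluded
        }
        List<String> collected = new ArrayList<>();
        int counted = 0;
        for (String name : flat) {
            Set<String> desc = descendants.getOrDefault(name, Set.of());
            boolean ancestorOfCollected = collected.stream().anyMatch(desc::contains);
            if (ancestorOfCollected) {
                collected.add(name);       // always keep needed intermediate parents
            } else if (counted < limit) {
                collected.add(name);
                counted++;
            }
        }
        return collected;
    }

    private static Set<String> fillDescendants(Doc doc, Map<String, Set<String>> out) {
        Set<String> desc = new HashSet<>();
        for (Doc child : doc.children) {
            desc.add(child.name);
            desc.addAll(fillDescendants(child, out));
        }
        out.put(doc.name, desc);
        return desc;
    }

    public static void main(String[] args) {
        Doc docA = new Doc("docA",
            new Doc("docB"),
            new Doc("docC", new Doc("docC.1"), new Doc("docC.2")),
            new Doc("docD"));
        List<String> flat = new ArrayList<>();
        for (Doc child : docA.children) flatten(child, flat);
        flat.add(docA.name);
        System.out.println(flat);                       // [docB, docC.1, docC.2, docC, docD, docA]
        System.out.println(collectWithLimit(docA, 1));  // [docB]
        System.out.println(collectWithLimit(docA, 2));  // [docB, docC.1, docC]
    }
}
```

With limit 1 this yields just docB, and with limit 2 it yields docB plus docC with its child docC.1, i.e. exactly the behavior argued for above.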

> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 24h 40m
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I also propose the transformer will also have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries. e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b", which contain the key "e" 
> in them, the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}






[jira] [Commented] (LUCENE-5143) rm or formalize dealing with "general" KEYS files in our dist dir

2018-08-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LUCENE-5143?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595452#comment-16595452
 ] 

Jan Høydahl commented on LUCENE-5143:
-

Ping

> rm or formalize dealing with "general" KEYS files in our dist dir
> -
>
> Key: LUCENE-5143
> URL: https://issues.apache.org/jira/browse/LUCENE-5143
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Hoss Man
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 7.5, master (8.0)
>
> Attachments: KEYS, KEYS, KEYS, KEYS, LUCENE-5143.patch, 
> LUCENE-5143.patch, LUCENE-5143.patch, LUCENE-5143.patch, 
> LUCENE-5143_READMEs.patch, LUCENE-5143_READMEs.patch, 
> LUCENE-5143_READMEs.patch, LUCENE_5143_KEYS.patch, verify.log, verify.sh, 
> verify.sh, verify.sh
>
>
> At some point in the past, we started creating a snapshots of KEYS (taken 
> from the auto-generated data from id.apache.org) in the release dir of each 
> release...
> http://www.apache.org/dist/lucene/solr/4.4.0/KEYS
> http://www.apache.org/dist/lucene/java/4.4.0/KEYS
> http://archive.apache.org/dist/lucene/java/4.3.0/KEYS
> http://archive.apache.org/dist/lucene/solr/4.3.0/KEYS
> etc...
> But we also still have some "general" KEYS files...
> https://www.apache.org/dist/lucene/KEYS
> https://www.apache.org/dist/lucene/java/KEYS
> https://www.apache.org/dist/lucene/solr/KEYS
> ...which (as I discovered when I went to add my key to them today) are stale 
> and don't seem to be getting updated.
> I vaguely remember someone (rmuir?) explaining to me at one point the reason 
> we started creating a fresh copy of KEYS in each release dir, but I no longer 
> remember what they said, and I can't find any mention of a reason in any of 
> the release docs, or in any sort of comment in buildAndPushRelease.py.
> We should probably do one of the following:
>  * remove these "general" KEYS files
>  * add a disclaimer to the top of these files that they are legacy files for 
> verifying old releases and are no longer used for new releases
>  * ensure these files are up to date and stop generating per-release KEYS file 
> copies
>  * update our release process to ensure that the general files get updated on 
> each release as well






[jira] [Updated] (SOLR-3243) eDismax and non-fielded range query

2018-08-28 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-3243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-3243:
--
Fix Version/s: (was: 6.0)
   (was: 4.9)

> eDismax and non-fielded range query
> ---
>
> Key: SOLR-3243
> URL: https://issues.apache.org/jira/browse/SOLR-3243
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 3.1, 3.2, 3.3, 3.4, 3.5
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Attachments: SOLR-3243.patch
>
>
> Reported by Bill Bell in SOLR-3085:
> If you enter a non-fielded open-ended range in the search box, like [* TO *], 
> eDismax will expand it to all fields:
> {noformat}
> +DisjunctionMaxQuery((content:[* TO *]^2.0 | id:[* TO *]^50.0 | author:[* TO 
> *]^15.0 | meta:[* TO *]^10.0 | name:[* TO *]^20.0))
> {noformat}
> This does not make sense, and a side effect is that range queries on string 
> fields are very expensive, open-ended ones even more so, and you can totally 
> crash the search server by hammering something like ([* TO *] OR [* TO *] OR 
> [* TO *]) a few times...






[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 144 - Unstable

2018-08-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/144/

1 tests failed.
FAILED:  
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testTriangularDistribution

Error Message:
expected:<29.48574542532363> but was:<30.0>

Stack Trace:
java.lang.AssertionError: expected:<29.48574542532363> but was:<30.0>
at 
__randomizedtesting.SeedInfo.seed([F5123C13FD6569C2:61E90323C5A396D3]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:443)
at org.junit.Assert.assertEquals(Assert.java:512)
at 
org.apache.solr.client.solrj.io.stream.MathExpressionTest.testTriangularDistribution(MathExpressionTest.java:3991)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 15660 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.io.stream.MathExpressionTest
   [junit4]   2> Creating dataDir: 

[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r213400327
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -405,4 +435,54 @@ protected void doClose() {
 collection, slice.getName(), 
DistributedUpdateProcessor.MAX_RETRIES_ON_FORWARD_DEAULT);
   }
 
+
+  /**
+   * Create as many collections as required. This method loops to allow 
for the possibility that the routeTimestamp
+   * requires more than one collection to be created. Since multiple 
threads may be invoking maintain on separate
+   * requests to the same alias, we must pass in the name of the 
collection that this thread believes to be the most
+   * recent collection. This assumption is checked when the command is 
executed in the overseer. When this method
+   * finds that all collections required have been created it returns the 
(possibly new) most recent collection.
+   * The return value is ignored by the calling code in the async 
preemptive case.
+   *
+   * @param targetCollection the initial notion of the latest collection 
available.
+   * @param docTimestamp the timestamp from the document that determines 
routing
+   * @param printableId an identifier for the add command used in error 
messages
+   * @return The latest collection, including collections created during 
maintenance
+   */
+  public String maintain(String targetCollection, Instant docTimestamp, 
String printableId, boolean asyncSinglePassOnly) {
+do { // typically we don't loop; it's only when we need to create a 
collection
+
+  // Note: This code no longer short circuits immediately when it sees 
that the expected latest
--- End diff --

That's fine; the comment was mostly aimed at making the review process clearer. 
You're right, it probably doesn't need to be carried forward.
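A minimal sketch of the loop the javadoc above describes, under simplifying assumptions (all names and types here are illustrative, not Solr's actual API): each collection is reduced to its start instant, and the alias's date math is approximated by a fixed `Duration`. The real method instead issues overseer commands and re-reads the alias state, and may bail out in the async preemptive case.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class MaintainSketch {
    // Each "collection" is represented only by its start instant, appended in
    // ascending order; `slice` stands in for the alias's +1DAY-style interval.
    static Instant maintain(List<Instant> collections, Instant docTimestamp, Duration slice) {
        while (true) { // typically we don't loop; only when collections must be created
            Instant mostRecent = collections.get(collections.size() - 1);
            Instant nextBoundary = mostRecent.plus(slice);
            if (docTimestamp.isBefore(nextBoundary)) {
                return mostRecent; // all required collections now exist
            }
            collections.add(nextBoundary); // "create" the next collection, then re-check
        }
    }

    public static void main(String[] args) {
        List<Instant> colls = new ArrayList<>(List.of(Instant.parse("2017-10-23T00:00:00Z")));
        Instant latest = maintain(colls, Instant.parse("2017-10-25T23:01:00Z"), Duration.ofDays(1));
        // Two new daily collections get created; the doc lands in the 10-25 slice.
        System.out.println(colls.size() + " collections; latest starts at " + latest);
    }
}
```

The loop makes the multi-collection case explicit: a document far past the latest slice forces several creations in one call.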


---




[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r212806120
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -230,6 +188,95 @@ public void processAdd(AddUpdateCommand cmd) throws 
IOException {
 }
   }
 
+
+  private String createCollectionsIfRequired(Instant docTimestamp, String 
targetCollection, String printableId) {
+// Even though it is possible that multiple requests hit this code in 
the 1-2 sec that
+// it takes to create a collection, it's an established anti-pattern 
to feed data with a very large number
+// of client connections. This in mind, we only guard against spamming 
the overseer within a batch of
+// updates. We are intentionally tolerating a low level of redundant 
requests in favor of simpler code. Most
+// super-sized installations with many update clients will likely be 
multi-tenant and multiple tenants
+// probably don't write to the same alias. As such, we have deferred 
any solution to the "many clients causing
+// collection creation simultaneously" problem until such time as 
someone actually has that problem in a
+// real world use case that isn't just an anti-pattern.
+try {
+  CreationType creationType = requiresCreateCollection(docTimestamp, 
timeRoutedAlias.getPreemptiveCreateWindow());
+  switch (creationType) {
+case SYNCHRONOUS:
+  // This next line blocks until all collections required by the 
current document have been created
+  return maintain(targetCollection, docTimestamp, printableId, 
false);
+case ASYNC_PREEMPTIVE:
+  // Note: creating an executor and throwing it away is slightly 
expensive, but this is only likely to happen
+  // once per hour/day/week (depending on time slice size for the 
TRA). If the executor were retained, it
+  // would need to be shut down in a close hook to avoid test 
failures due to thread leaks, which is slightly
+  // more complicated from a code maintenance and readability 
standpoint. An executor must be used instead of a
+  // thread to ensure we pick up the proper MDC logging stuff from 
ExecutorUtil.
+  if (preemptiveCreationExecutor == null) {
+DefaultSolrThreadFactory threadFactory = new 
DefaultSolrThreadFactory("TRA-preemptive-creation");
+preemptiveCreationExecutor = 
newMDCAwareSingleThreadExecutor(threadFactory);
+preemptiveCreationExecutor.execute(() -> {
--- End diff --

I've not wanted to create 2 places in the code where we do the same thing, 
but I think I figured out how to factor it so it's both clearer and 
non-duplicative...
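The throwaway-executor pattern being discussed can be sketched roughly like this, using plain `java.util.concurrent` rather than Solr's MDC-aware `ExecutorUtil` helpers (the class and method names here are made up for illustration): a single-use executor is created per triggering, shuts itself down when the task completes, and a guard prevents a second task from starting while one is in flight.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class OneShotExecutorSketch {
    // Holds the executor while a task runs; null means "idle".
    private final AtomicReference<ExecutorService> inFlight = new AtomicReference<>();

    // Run the task on a fresh single-thread executor, at most one at a time.
    // The executor shuts itself down on completion, so no close hook needs
    // to track it (the leak concern mentioned in the diff comment).
    public boolean submitIfIdle(Runnable task) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        if (!inFlight.compareAndSet(null, executor)) {
            executor.shutdown();
            return false; // a preemptive creation is already running
        }
        executor.execute(() -> {
            try {
                task.run();
            } finally {
                executor.shutdown();   // discard the one-shot executor
                inFlight.set(null);    // become idle again
            }
        });
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        OneShotExecutorSketch sketch = new OneShotExecutorSketch();
        CountDownLatch done = new CountDownLatch(1);
        sketch.submitIfIdle(done::countDown);
        done.await();
        System.out.println("preemptive task finished; executor discarded");
    }
}
```

The cost of creating and discarding an executor is acceptable here because, as the comment notes, this fires only once per time slice.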


---




[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r213419315
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessorTest.java
 ---
@@ -392,29 +393,73 @@ public void testPreemptiveCreation() throws Exception 
{
 CollectionAdminRequest.setAliasProperty(alias)
 .addProperty(TimeRoutedAlias.ROUTER_PREEMPTIVE_CREATE_MATH, 
"3DAY").process(solrClient);
 
-Thread.sleep(1000); // a moment to be sure the alias change has taken 
effect
-
 assertUpdateResponse(add(alias, Collections.singletonList(
 sdoc("id", "7", "timestamp_dt", "2017-10-25T23:01:00Z")), // 
should cause preemptive creation now
 params));
 assertUpdateResponse(solrClient.commit(alias));
 waitCol("2017-10-27", numShards);
-waitCol("2017-10-28", numShards);
 
 cols = new 
CollectionAdminRequest.ListAliases().process(solrClient).getAliasesAsLists().get(alias);
-assertEquals(6,cols.size());
+assertEquals(5,cols.size()); // only one created in async case
 assertNumDocs("2017-10-23", 1);
 assertNumDocs("2017-10-24", 1);
 assertNumDocs("2017-10-25", 5);
 assertNumDocs("2017-10-26", 0);
 assertNumDocs("2017-10-27", 0);
+
+assertUpdateResponse(add(alias, Collections.singletonList(
+sdoc("id", "8", "timestamp_dt", "2017-10-25T23:01:00Z")), // 
should cause preemptive creation now
+params));
+assertUpdateResponse(solrClient.commit(alias));
+waitCol("2017-10-27", numShards);
+waitCol("2017-10-28", numShards);
+
+cols = new 
CollectionAdminRequest.ListAliases().process(solrClient).getAliasesAsLists().get(alias);
+assertEquals(6,cols.size()); // Subsequent documents continue to 
create up to limit
+assertNumDocs("2017-10-23", 1);
+assertNumDocs("2017-10-24", 1);
+assertNumDocs("2017-10-25", 6);
+assertNumDocs("2017-10-26", 0);
+assertNumDocs("2017-10-27", 0);
 assertNumDocs("2017-10-28", 0);
 
 QueryResponse resp;
 resp = solrClient.query(alias, params(
 "q", "*:*",
 "rows", "10"));
-assertEquals(7, resp.getResults().getNumFound());
+assertEquals(8, resp.getResults().getNumFound());
+
+assertUpdateResponse(add(alias, Arrays.asList(
--- End diff --

addDocsAndCommit contains a lot of logic I don't really want, especially 
the shuffling of the input documents! 


---




[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r212805945
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -167,59 +167,17 @@ private String getAliasName() {
   public void processAdd(AddUpdateCommand cmd) throws IOException {
 SolrInputDocument solrInputDocument = cmd.getSolrInputDocument();
 final Object routeValue = 
solrInputDocument.getFieldValue(timeRoutedAlias.getRouteField());
-final Instant routeTimestamp = parseRouteKey(routeValue);
-
+final Instant docTimestampToRoute = parseRouteKey(routeValue);
 updateParsedCollectionAliases();
-String targetCollection;
-do { // typically we don't loop; it's only when we need to create a 
collection
-  targetCollection = 
findTargetCollectionGivenTimestamp(routeTimestamp);
-
-  if (targetCollection == null) {
-throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
-"Doc " + cmd.getPrintableId() + " couldn't be routed with " + 
timeRoutedAlias.getRouteField() + "=" + routeTimestamp);
-  }
-
-  // Note: the following rule is tempting but not necessary and is not 
compatible with
-  // only using this URP when the alias distrib phase is NONE; 
otherwise a doc may be routed to from a non-recent
-  // collection to the most recent only to then go there directly 
instead of realizing a new collection is needed.
-  //  // If it's going to some other collection (not "this") then 
break to just send it there
-  //  if (!thisCollection.equals(targetCollection)) {
-  //break;
-  //  }
-  // Also tempting but not compatible:  check that we're the leader, 
if not then break
-
-  // If the doc goes to the most recent collection then do some checks 
below, otherwise break the loop.
-  final Instant mostRecentCollTimestamp = 
parsedCollectionsDesc.get(0).getKey();
-  final String mostRecentCollName = 
parsedCollectionsDesc.get(0).getValue();
-  if (!mostRecentCollName.equals(targetCollection)) {
-break;
-  }
-
-  // Check the doc isn't too far in the future
-  final Instant maxFutureTime = 
Instant.now().plusMillis(timeRoutedAlias.getMaxFutureMs());
-  if (routeTimestamp.isAfter(maxFutureTime)) {
-throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
-"The document's time routed key of " + routeValue + " is too 
far in the future given " +
-TimeRoutedAlias.ROUTER_MAX_FUTURE + "=" + 
timeRoutedAlias.getMaxFutureMs());
-  }
-
-  // Create a new collection?
-  final Instant nextCollTimestamp = 
timeRoutedAlias.computeNextCollTimestamp(mostRecentCollTimestamp);
-  if (routeTimestamp.isBefore(nextCollTimestamp)) {
-break; // thus we don't need another collection
-  }
-
-  createCollectionAfter(mostRecentCollName); // *should* throw if 
fails for some reason but...
-  final boolean updated = updateParsedCollectionAliases();
-  if (!updated) { // thus we didn't make progress...
-// this is not expected, even in known failure cases, but we check 
just in case
-throw new SolrException(SolrException.ErrorCode.SERVER_ERROR,
-"We need to create a new time routed collection but for 
unknown reasons were unable to do so.");
-  }
-  // then retry the loop ...
-} while(true);
-assert targetCollection != null;
-
+String candidateCollection = 
findCandidateCollectionGivenTimestamp(docTimestampToRoute, 
cmd.getPrintableId());
--- End diff --

+1


---




[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r213405495
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessorTest.java
 ---
@@ -322,6 +325,104 @@ public void testSliceRouting() throws Exception {
 }
   }
 
+  @Test
+  public void testPreemptiveCreation() throws Exception {
+String configName = TimeRoutedAliasUpdateProcessorTest.configName + 
getTestName();
+createConfigSet(configName);
+
+final int numShards = 1 ;
+final int numReplicas = 1 ;
+CollectionAdminRequest.createTimeRoutedAlias(alias, 
"2017-10-23T00:00:00Z", "+1DAY", timeField,
+CollectionAdminRequest.createCollection("_unused_", configName, 
numShards, numReplicas)
+
.setMaxShardsPerNode(numReplicas)).setPreemptiveCreateWindow("3HOUR")
+.process(solrClient);
+
+// cause some collections to be created
+assertUpdateResponse(solrClient.add(alias,
+sdoc("id","1","timestamp_dt", "2017-10-25T00:00:00Z")
--- End diff --

Comments (now below) address the purpose of each collection.


---




[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r212805841
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -167,59 +167,17 @@ private String getAliasName() {
   public void processAdd(AddUpdateCommand cmd) throws IOException {
 SolrInputDocument solrInputDocument = cmd.getSolrInputDocument();
 final Object routeValue = 
solrInputDocument.getFieldValue(timeRoutedAlias.getRouteField());
-final Instant routeTimestamp = parseRouteKey(routeValue);
-
+final Instant docTimestampToRoute = parseRouteKey(routeValue);
--- End diff --

+1


---




[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r213397248
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -230,6 +188,95 @@ public void processAdd(AddUpdateCommand cmd) throws 
IOException {
 }
   }
 
+
+  private String createCollectionsIfRequired(Instant docTimestamp, String 
targetCollection, String printableId) {
+// Even though it is possible that multiple requests hit this code in 
the 1-2 sec that
+// it takes to create a collection, it's an established anti-pattern 
to feed data with a very large number
+// of client connections. This in mind, we only guard against spamming 
the overseer within a batch of
+// updates. We are intentionally tolerating a low level of redundant 
requests in favor of simpler code. Most
+// super-sized installations with many update clients will likely be 
multi-tenant and multiple tenants
+// probably don't write to the same alias. As such, we have deferred 
any solution to the "many clients causing
+// collection creation simultaneously" problem until such time as 
someone actually has that problem in a
+// real world use case that isn't just an anti-pattern.
+try {
+  CreationType creationType = requiresCreateCollection(docTimestamp, 
timeRoutedAlias.getPreemptiveCreateWindow());
+  switch (creationType) {
+case SYNCHRONOUS:
+  // This next line blocks until all collections required by the 
current document have been created
+  return maintain(targetCollection, docTimestamp, printableId, 
false);
+case ASYNC_PREEMPTIVE:
+  // Note: creating an executor and throwing it away is slightly 
expensive, but this is only likely to happen
+  // once per hour/day/week (depending on time slice size for the 
TRA). If the executor were retained, it
+  // would need to be shut down in a close hook to avoid test 
failures due to thread leaks, which is slightly
+  // more complicated from a code maintenance and readability 
standpoint. An executor must be used instead of a
+  // thread to ensure we pick up the proper MDC logging stuff from 
ExecutorUtil.
+  if (preemptiveCreationExecutor == null) {
+DefaultSolrThreadFactory threadFactory = new 
DefaultSolrThreadFactory("TRA-preemptive-creation");
+preemptiveCreationExecutor = 
newMDCAwareSingleThreadExecutor(threadFactory);
+preemptiveCreationExecutor.execute(() -> {
+  maintain(targetCollection, docTimestamp, printableId, true);
+  preemptiveCreationExecutor.shutdown();
+  preemptiveCreationExecutor = null;
+});
+  }
+  return targetCollection;
+case NONE:
+  return targetCollection; // just for clarity...
+default:
+  return targetCollection; // could use fall through, but fall 
through is fiddly for later editors.
+  }
+  // do nothing if creationType == NONE
+} catch (SolrException e) {
+  throw e;
+} catch (Exception e) {
+  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
+}
+  }
+
+  /**
+   * Determine if a new collection will be required based on the 
document timestamp. Passing null for
+   * preemptiveCreateMath tells you if the document is beyond all 
existing collections with a response of
+   * {@link CreationType#NONE} or {@link CreationType#SYNCHRONOUS}, and 
passing a valid date math for
+   * preemptiveCreateMath additionally distinguishes the case where the 
document is close enough to the end of
+   * the TRA to trigger preemptive creation but not beyond all existing 
collections with a value of
+   * {@link CreationType#ASYNC_PREEMPTIVE}.
+   *
+   * @param routeTimestamp The timestamp from the document
+   * @param preemptiveCreateMath The date math indicating the {@link 
TimeRoutedAlias#preemptiveCreateMath}
+   * @return a {@code CreationType} indicating if and how to create a 
collection
+   */
+  private CreationType requiresCreateCollection(Instant routeTimestamp,  
String preemptiveCreateMath) {
--- End diff --

+1
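The three-way decision the javadoc describes can be sketched as follows, with deliberately simplified assumptions (hypothetical names; the real method resolves `preemptiveCreateMath` via Solr date math against the alias's collection list, not a `Duration` against a precomputed boundary): beyond the end of the latest slice means synchronous creation, within the preemptive window of that end means async creation, and anything else needs no creation.

```java
import java.time.Duration;
import java.time.Instant;

public class CreationTypeSketch {
    enum CreationType { NONE, ASYNC_PREEMPTIVE, SYNCHRONOUS }

    static CreationType requiresCreateCollection(
            Instant docTimestamp, Instant endOfLatestSlice, Duration preemptiveWindow) {
        if (!docTimestamp.isBefore(endOfLatestSlice)) {
            // Doc is beyond all existing collections: must create before indexing.
            return CreationType.SYNCHRONOUS;
        }
        if (preemptiveWindow != null
                && !docTimestamp.isBefore(endOfLatestSlice.minus(preemptiveWindow))) {
            // Close enough to the end of the TRA to create the next slice ahead of time.
            return CreationType.ASYNC_PREEMPTIVE;
        }
        return CreationType.NONE; // an existing collection covers this doc
    }

    public static void main(String[] args) {
        Instant end = Instant.parse("2017-10-26T00:00:00Z");
        Duration window = Duration.ofHours(3); // the "3HOUR" window from the test
        System.out.println(requiresCreateCollection(Instant.parse("2017-10-25T12:00:00Z"), end, window));
        System.out.println(requiresCreateCollection(Instant.parse("2017-10-25T23:01:00Z"), end, window));
        System.out.println(requiresCreateCollection(Instant.parse("2017-10-26T05:00:00Z"), end, window));
    }
}
```

With the test's 3HOUR window, a 23:01 document falls inside the window of a slice ending at midnight, which is why it triggers preemptive creation in `testPreemptiveCreation`.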


---




[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r213398526
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -230,6 +188,95 @@ public void processAdd(AddUpdateCommand cmd) throws 
IOException {
 }
   }
 
+
+  private String createCollectionsIfRequired(Instant docTimestamp, String 
targetCollection, String printableId) {
+// Even though it is possible that multiple requests hit this code in 
the 1-2 sec that
+// it takes to create a collection, it's an established anti-pattern 
to feed data with a very large number
+// of client connections. This in mind, we only guard against spamming 
the overseer within a batch of
+// updates. We are intentionally tolerating a low level of redundant 
requests in favor of simpler code. Most
+// super-sized installations with many update clients will likely be 
multi-tenant and multiple tenants
+// probably don't write to the same alias. As such, we have deferred 
any solution to the "many clients causing
+// collection creation simultaneously" problem until such time as 
someone actually has that problem in a
+// real world use case that isn't just an anti-pattern.
+try {
+  CreationType creationType = requiresCreateCollection(docTimestamp, 
timeRoutedAlias.getPreemptiveCreateWindow());
+  switch (creationType) {
+case SYNCHRONOUS:
+  // This next line blocks until all collections required by the 
current document have been created
+  return maintain(targetCollection, docTimestamp, printableId, 
false);
+case ASYNC_PREEMPTIVE:
+  // Note: creating an executor and throwing it away is slightly 
expensive, but this is only likely to happen
+  // once per hour/day/week (depending on time slice size for the 
TRA). If the executor were retained, it
+  // would need to be shut down in a close hook to avoid test 
failures due to thread leaks which is slightly
+  // more complicated from a code maintenance and readability 
standpoint. An executor must be used instead of a
+  // thread to ensure we pick up the proper MDC logging stuff from 
ExecutorUtil.
+  if (preemptiveCreationExecutor == null) {
+DefaultSolrThreadFactory threadFactory = new 
DefaultSolrThreadFactory("TRA-preemptive-creation");
+preemptiveCreationExecutor = 
newMDCAwareSingleThreadExecutor(threadFactory);
+preemptiveCreationExecutor.execute(() -> {
+  maintain(targetCollection, docTimestamp, printableId, true);
+  preemptiveCreationExecutor.shutdown();
+  preemptiveCreationExecutor = null;
+});
+  }
+  return targetCollection;
+case NONE:
+  return targetCollection; // just for clarity...
+default:
+  return targetCollection; // could use fall through, but fall 
through is fiddly for later editors.
+  }
+  // do nothing if creationType == NONE
+} catch (SolrException e) {
+  throw e;
+} catch (Exception e) {
+  throw new SolrException(SolrException.ErrorCode.SERVER_ERROR, e);
+}
+  }
+
+  /**
+   * Determine if a new collection will be required based on the 
document timestamp. Passing null for
+   * preemptiveCreateMath tells you if the document is beyond all 
existing collections with a response of
+   * {@link CreationType#NONE} or {@link CreationType#SYNCHRONOUS}, and 
passing a valid date math for
+   * preemptiveCreateMath additionally distinguishes the case where the 
document is close enough to the end of
+   * the TRA to trigger preemptive creation but not beyond all existing 
collections with a value of
+   * {@link CreationType#ASYNC_PREEMPTIVE}.
+   *
+   * @param routeTimestamp The timestamp from the document
+   * @param preemptiveCreateMath The date math indicating the {@link 
TimeRoutedAlias#preemptiveCreateMath}
+   * @return a {@code CreationType} indicating if and how to create a 
collection
+   */
+  private CreationType requiresCreateCollection(Instant routeTimestamp,  
String preemptiveCreateMath) {
+final Instant mostRecentCollTimestamp = 
parsedCollectionsDesc.get(0).getKey();
+final Instant nextCollTimestamp = 
timeRoutedAlias.computeNextCollTimestamp(mostRecentCollTimestamp);
+if (!routeTimestamp.isBefore(nextCollTimestamp)) {
+  // current document is destined for a collection that doesn't exist, 
must create the destination
+  // to proceed with this add command
+  return SYNCHRONOUS;
+}
+
+if (isBlank(preemptiveCreateMath)) {
--- End diff --
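The three-way decision described in the javadoc above can be sketched standalone. This is an illustrative sketch with hypothetical names (decide, nextCollStart, preemptiveThreshold), not the actual Solr code:

```java
import java.time.Instant;

public class CreationTypeSketch {
    enum CreationType { NONE, ASYNC_PREEMPTIVE, SYNCHRONOUS }

    // doc: the document's route timestamp
    // nextCollStart: start timestamp of the collection after the newest existing one
    // preemptiveThreshold: nextCollStart minus the preemptive-create window, or null if disabled
    static CreationType decide(Instant doc, Instant nextCollStart, Instant preemptiveThreshold) {
        if (!doc.isBefore(nextCollStart)) {
            return CreationType.SYNCHRONOUS;      // destination collection doesn't exist yet
        }
        if (preemptiveThreshold != null && !doc.isBefore(preemptiveThreshold)) {
            return CreationType.ASYNC_PREEMPTIVE; // near the end of the newest collection
        }
        return CreationType.NONE;
    }

    public static void main(String[] args) {
        Instant next = Instant.parse("2017-10-26T00:00:00Z");
        Instant threshold = Instant.parse("2017-10-25T21:00:00Z"); // next minus "3HOUR"
        System.out.println(decide(Instant.parse("2017-10-26T05:00:00Z"), next, threshold)); // SYNCHRONOUS
        System.out.println(decide(Instant.parse("2017-10-25T23:00:00Z"), next, threshold)); // ASYNC_PREEMPTIVE
        System.out.println(decide(Instant.parse("2017-10-25T12:00:00Z"), next, threshold)); // NONE
    }
}
```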
   

[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r213415931
  
--- Diff: 
solr/core/src/test/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessorTest.java
 ---
@@ -322,6 +325,104 @@ public void testSliceRouting() throws Exception {
 }
   }
 
+  @Test
+  public void testPreemptiveCreation() throws Exception {
+String configName = TimeRoutedAliasUpdateProcessorTest.configName + 
getTestName();
+createConfigSet(configName);
+
+final int numShards = 1;
+final int numReplicas = 1;
+CollectionAdminRequest.createTimeRoutedAlias(alias, 
"2017-10-23T00:00:00Z", "+1DAY", timeField,
+CollectionAdminRequest.createCollection("_unused_", configName, 
numShards, numReplicas)
+
.setMaxShardsPerNode(numReplicas)).setPreemptiveCreateWindow("3HOUR")
+.process(solrClient);
+
+// cause some collections to be created
+assertUpdateResponse(solrClient.add(alias,
+sdoc("id","1","timestamp_dt", "2017-10-25T00:00:00Z")
+));
+assertUpdateResponse(solrClient.commit(alias));
+
+// wait for all the collections to exist...
+waitCol("2017-10-23", numShards);
+waitCol("2017-10-24", numShards);
+waitCol("2017-10-25", numShards);
+
+// normal update, nothing special, no collection creation required.
+List<String> cols = new 
CollectionAdminRequest.ListAliases().process(solrClient).getAliasesAsLists().get(alias);
+assertEquals(3,cols.size());
+
+assertNumDocs("2017-10-23", 0);
+assertNumDocs("2017-10-24", 0);
+assertNumDocs("2017-10-25", 1);
+
+// cause some collections to be created
+
+ModifiableSolrParams params = params();
+assertUpdateResponse(add(alias, Arrays.asList(
+sdoc("id", "2", "timestamp_dt", "2017-10-24T00:00:00Z"),
+sdoc("id", "3", "timestamp_dt", "2017-10-25T00:00:00Z"),
+sdoc("id", "4", "timestamp_dt", "2017-10-23T00:00:00Z"),
+sdoc("id", "5", "timestamp_dt", "2017-10-25T23:00:00Z")), // 
should cause preemptive creation
--- End diff --

I think this comment is obsolete, carried over from earlier versions?


---




[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r213403086
  
--- Diff: 
solr/core/src/java/org/apache/solr/cloud/api/collections/TimeRoutedAlias.java 
---
@@ -141,6 +145,9 @@ public TimeRoutedAlias(String aliasName, Map<String, String> aliasMetadata) {
 
 //optional:
 maxFutureMs = params.getLong(ROUTER_MAX_FUTURE, 
TimeUnit.MINUTES.toMillis(10));
+// the date math configured is an interval to be subtracted from the 
most recent collection's time stamp
+preemptiveCreateMath = params.get(ROUTER_PREEMPTIVE_CREATE_MATH) != 
null ?
--- End diff --

ok


---




[GitHub] lucene-solr pull request #433: SOLR-12357 Premptive creation of collections ...

2018-08-28 Thread nsoft
Github user nsoft commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/433#discussion_r212805827
  
--- Diff: 
solr/core/src/java/org/apache/solr/update/processor/TimeRoutedAliasUpdateProcessor.java
 ---
@@ -94,13 +92,15 @@
   private final SolrCmdDistributor cmdDistrib;
   private final CollectionsHandler collHandler;
   private final SolrParams outParamsToLeader;
+  @SuppressWarnings("FieldCanBeLocal")
   private final CloudDescriptor cloudDesc;
 
  private List<Map.Entry<Instant,String>> parsedCollectionsDesc; // 
k=timestamp (start), v=collection.  Sorted descending
   private Aliases parsedCollectionsAliases; // a cached reference to the 
source of what we parse into parsedCollectionsDesc
   private SolrQueryRequest req;
+  private ExecutorService preemptiveCreationExecutor;
--- End diff --

No that's a good point


---




[VOTE] Release PyLucene 7.4.0 (rc1)

2018-08-28 Thread Andi Vajda



The PyLucene 7.4.0 (rc1) release tracking the recent release of
Apache Lucene 7.4.0 is ready.

A release candidate is available from:
  https://dist.apache.org/repos/dist/dev/lucene/pylucene/7.4.0-rc1/

PyLucene 7.4.0 is built with JCC 3.2 included in these release artifacts.

JCC 3.2 supports Python 3.3+ (in addition to Python 2.3+).
PyLucene may be built with Python 2 or Python 3.

Please vote to release these artifacts as PyLucene 7.4.0.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS

pps: here is my +1


RE: [nag][VOTE] Release PyLucene 7.2.0 (rc1)

2018-08-28 Thread Andi Vajda


On Mon, 8 Jan 2018, Andi Vajda wrote:



On Mon, 8 Jan 2018, Milo H. Fields III wrote:


Please excuse my ignorance of Apache process -- 'who/what' are PCM's?


http://www.apache.org/foundation/governance/pmcs.html

Three PMC votes are necessary to approve a release of Apache software.
So far, we've got one PMC vote on this release (mine).


This vote has now failed (after almost 9 months) for lack of PMC interest.
A Release candidate for pylucene 7.4.0 is ready and a release vote is about 
to start.


Andi..




I've built it and have been using it on Win10 against Py3.6.4 & Py2.7.14 (jdk
1.8.0_152) without issue


Thank you for your input.

Andi..



v/r


-Original Message-
From: Petrus Hyvönen [mailto:petrus.hyvo...@gmail.com]
Sent: Monday, January 8, 2018 08:24
To: pylucene-dev@lucene.apache.org
Cc: gene...@lucene.apache.org
Subject: Re: [nag][VOTE] Release PyLucene 7.2.0 (rc1)

Just to encourage, Please PMC's vote so we can have a fresh release of JCC
also!

Many Thanks for you efforts,
/Petrus



On 4 Jan 2018, at 11:29 , Andi Vajda  wrote:


Two more PMC votes are needed to make this release !
Thanks !

-- Forwarded message --
Date: Thu, 21 Dec 2017 03:50:08 -0800 (PST)
From: Andi Vajda 
To: pylucene-dev@lucene.apache.org
Cc: gene...@lucene.apache.org
Subject: [VOTE] Release PyLucene 7.2.0 (rc1)


The PyLucene 7.2.0 (rc1) release tracking the upcoming release of
Apache Lucene 7.2.0 is ready.

A release candidate is available from:
 https://dist.apache.org/repos/dist/dev/lucene/pylucene/7.2.0-rc1/

PyLucene 7.2.0 is built with JCC 3.1 included in these release

artifacts.


JCC 3.1 supports Python 3.3+ (in addition to Python 2.3+).
PyLucene may be built with Python 2 or Python 3.

Please vote to release these artifacts as PyLucene 7.2.0.
Anyone interested in this release can and should vote !

Thanks !

Andi..

ps: the KEYS file for PyLucene release signing is at:
https://dist.apache.org/repos/dist/release/lucene/pylucene/KEYS
https://dist.apache.org/repos/dist/dev/lucene/pylucene/KEYS

pps: here is my +1




[jira] [Created] (SOLR-12709) Simulate a 1 bln docs scaling-up scenario

2018-08-28 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-12709:


 Summary: Simulate a 1 bln docs scaling-up scenario
 Key: SOLR-12709
 URL: https://issues.apache.org/jira/browse/SOLR-12709
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Andrzej Bialecki 
Assignee: Andrzej Bialecki 






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)




[jira] [Commented] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-08-28 Thread Edward Ribeiro (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595370#comment-16595370
 ] 

Edward Ribeiro commented on SOLR-5163:
--

Hi [~Charles Sanders], a couple of questions about your patch (congrats for 
contributing, btw!):


{code:java}
validateQueryFields(req);{code}
You pass req, but req is only used to get the Schema, so why not pass the 
schema, i.e., validateQueryFields(req.getSchema())?


{code:java}
protected void validateQueryFields(SolrQueryRequest req) throws SyntaxError {
 if (queryFields == null || queryFields.isEmpty()) {
throw new SyntaxError("No query fields given.");
 }{code}
If qf is not specified then the parser will resort to using df (or throw an 
exception if neither is specified). Therefore, even though this if clause is a 
nice defensive guard I don't think it is really worth it now, because if 
queryFields is empty the error will be thrown before reaching this method. And 
even if it is empty, the result is just that the for-loop is not traversed.

Finally, 
{code:java}
req.getSchema().getFields().keySet(){code}
could be extracted to a variable before entering the loop, instead of being 
called for each field.

Best regards!
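The two suggestions (pass the schema's field names rather than the request, and hoist the field-name set out of the loop) could look roughly like this. An illustrative sketch with simplified types, not the actual patch:

```java
import java.util.Map;
import java.util.Set;

public class QfValidationSketch {
    // queryFields: qf field name -> boost (as a plain map, standing in for the parser's queryFields)
    static void validateQueryFields(Set<String> schemaFieldNames, Map<String, Float> queryFields) {
        // the field-name set is computed once by the caller, not re-fetched per field
        for (String field : queryFields.keySet()) {
            if (!schemaFieldNames.contains(field)) {
                throw new IllegalArgumentException("qf refers to nonexistent field: " + field);
            }
        }
    }

    public static void main(String[] args) {
        Set<String> schema = Set.of("field1", "id");
        validateQueryFields(schema, Map.of("field1", 1.0f)); // passes silently
        try {
            validateQueryFields(schema, Map.of("field2", 1.0f));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // qf refers to nonexistent field: field2
        }
    }
}
```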

> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Assignee: David Smiley
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't..
> will treat the AND as a term vs and operator






[jira] [Commented] (SOLR-12594) MetricsHistoryHandler.getOverseerLeader fails when hostname contains hyphen

2018-08-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595327#comment-16595327
 ] 

Jan Høydahl commented on SOLR-12594:


This can be resolved again, not?

> MetricsHistoryHandler.getOverseerLeader fails when hostname contains hyphen
> ---
>
> Key: SOLR-12594
> URL: https://issues.apache.org/jira/browse/SOLR-12594
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.4
>Reporter: Hoss Man
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.5, 7.4.1
>
>
> as reported on the user list...
> {quote}
> We encounter a lot of log warning entries from the MetricsHistoryHandler 
> saying
> o.a.s.h.a.MetricsHistoryHandler Unknown format of leader id, skipping:
> 244550997187166214-server1-b.myhost:8983_solr-n_94
> I don't even know what this _MetricsHistoryHandler_ does, but at least 
> there's a warning.
> Looking at the code you can see that it has to fail if the hostname of the 
> node contains a hyphen:
> {quote}
> {code}
> String[] ids = oid.split("-");
> if (ids.length != 3) { // unknown format
>   log.warn("Unknown format of leader id, skipping: " + oid);
>   return null;
> }
> {code}
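One tolerant alternative to splitting on every '-' (an illustrative sketch with a hypothetical parseLeaderId helper, not the actual fix): peel off the leading session id and the trailing "-n_<seq>" suffix so that hyphens inside the node name survive:

```java
public class LeaderIdParse {
    // Parse "<sessionId>-<nodeName>-n_<seq>", tolerating '-' inside nodeName.
    // Returns null for unknown formats, mirroring the existing guard.
    static String[] parseLeaderId(String oid) {
        int first = oid.indexOf('-');
        int last = oid.lastIndexOf("-n_");
        if (first < 0 || last <= first) {
            return null; // unknown format
        }
        return new String[] {
            oid.substring(0, first),        // session id
            oid.substring(first + 1, last), // node name (may contain hyphens)
            oid.substring(last + 1)         // "n_<seq>"
        };
    }

    public static void main(String[] args) {
        String[] ids = parseLeaderId("244550997187166214-server1-b.myhost:8983_solr-n_94");
        System.out.println(ids[0]); // 244550997187166214
        System.out.println(ids[1]); // server1-b.myhost:8983_solr
        System.out.println(ids[2]); // n_94
    }
}
```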






[jira] [Created] (SOLR-12708) Async collection actions should not hide failures

2018-08-28 Thread Mano Kovacs (JIRA)
Mano Kovacs created SOLR-12708:
--

 Summary: Async collection actions should not hide failures
 Key: SOLR-12708
 URL: https://issues.apache.org/jira/browse/SOLR-12708
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Admin UI, Backup/Restore
Affects Versions: 7.4
Reporter: Mano Kovacs


Async collection API may hide failures compared to the sync version. 
[OverseerCollectionMessageHandler::processResponses|https://github.com/apache/lucene-solr/blob/master/solr/core/src/java/org/apache/solr/cloud/api/collections/OverseerCollectionMessageHandler.java#L744]
 structures errors differently in the response, which hides failures from most 
evaluators. RestoreCmd did not receive, nor handle, async addReplica failures.

Sample create collection sync and async result with invalid solrconfig.xml:
{noformat}
{
"responseHeader":{
"status":0,
"QTime":32104},
"failure":{
"localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
 from server at http://localhost:8983/solr: Error CREATEing SolrCore 
'name4_shard1_replica_n1': Unable to create core [name4_shard1_replica_n1] 
Caused by: The content of elements must consist of well-formed character data 
or markup.",
"localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
 from server at http://localhost:8983/solr: Error CREATEing SolrCore 
'name4_shard2_replica_n2': Unable to create core [name4_shard2_replica_n2] 
Caused by: The content of elements must consist of well-formed character data 
or markup.",
"localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
 from server at http://localhost:8983/solr: Error CREATEing SolrCore 
'name4_shard1_replica_n2': Unable to create core [name4_shard1_replica_n2] 
Caused by: The content of elements must consist of well-formed character data 
or markup.",
"localhost:8983_solr":"org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error
 from server at http://localhost:8983/solr: Error CREATEing SolrCore 
'name4_shard2_replica_n1': Unable to create core [name4_shard2_replica_n1] 
Caused by: The content of elements must consist of well-formed character data 
or markup."}
}
{noformat}
vs async:
{noformat}
{
"responseHeader":{
"status":0,
"QTime":3},
"success":{
"localhost:8983_solr":{
"responseHeader":{
"status":0,
"QTime":12}},
"localhost:8983_solr":{
"responseHeader":{
"status":0,
"QTime":3}},
"localhost:8983_solr":{
"responseHeader":{
"status":0,
"QTime":11}},
"localhost:8983_solr":{
"responseHeader":{
"status":0,
"QTime":12}}},
"myTaskId2709146382836":{
"responseHeader":{
"status":0,
"QTime":1},
"STATUS":"failed",
"Response":"Error CREATEing SolrCore 'name_shard2_replica_n2': Unable to create 
core [name_shard2_replica_n2] Caused by: The content of elements must consist 
of well-formed character data or markup."},
"status":{
"state":"completed",
"msg":"found [myTaskId] in completed tasks"}}
{noformat}
Proposing to add a failure node to the results, keeping the result backward 
compatible but correct.
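Until the response carries a proper failure node, a client can defensively scan the nested REQUESTSTATUS response for the per-task failure marker, since the top-level status still reports "completed". An illustrative, hypothetical helper over a generic map, not SolrJ API:

```java
import java.util.Map;

public class AsyncStatusCheck {
    // Returns true if any nested section of the async status response
    // reports STATUS=failed, even when the top-level state is "completed".
    static boolean anyTaskFailed(Map<?, ?> response) {
        for (Object value : response.values()) {
            if (value instanceof Map) {
                Map<?, ?> section = (Map<?, ?>) value;
                if ("failed".equals(section.get("STATUS"))) {
                    return true;
                }
                // sections can nest (e.g. "success" -> per-node responses)
                if (anyTaskFailed(section)) {
                    return true;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Map<String, Object> resp = Map.of(
            "status", Map.of("state", "completed"),
            "myTaskId2709146382836", Map.of("STATUS", "failed"));
        System.out.println(anyTaskFailed(resp)); // true
    }
}
```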






[jira] [Commented] (SOLR-12692) Add hints/warnings for the ZK Status Admin UI

2018-08-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595318#comment-16595318
 ] 

Jan Høydahl commented on SOLR-12692:


I can see that ‘srst’ could be useful to trigger for all hosts before you start 
some test or reproduction of a problem. Greg, feel free to open an issue for it 
and another for ‘cons’, especially if you also want to attempt a patch :) 

> Add hints/warnings for the ZK Status Admin UI
> -
>
> Key: SOLR-12692
> URL: https://issues.apache.org/jira/browse/SOLR-12692
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12692.patch, wrong_zk_warning.png, zk_ensemble.png
>
>
> Firstly I love the new UI pages ( ZK Status and Nodes ) . Thanks [~janhoy] 
> for all the great work!
> I setup a 3 node ZK ensemble to play around with the UI and attaching the 
> screenshot for reference.
>  
> Here are a few suggestions I had
>  # Let’s show Approximate Size in human readable form.  We can use 
> RamUsageEstimator#humanReadableUnits to calculate it
>  # Show warning symbol when Ensemble is standalone
>  # If maxSessionTimeout < Solr's ZK_CLIENT_TIMEOUT then ZK will only honor 
> up-to the maxSessionTimeout value for the Solr->ZK connection. We could mark 
> that as a warning.
>  # If maxClientCnxns < live_nodes show this as a red? Each solr node connects 
> to all zk nodes so if the number of nodes in the cluster is high one should 
> also be increasing maxClientCnxns
>  






[jira] [Created] (SOLR-12707) Add hints/warnings for the Solr Nodes Admin UI

2018-08-28 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12707:


 Summary: Add hints/warnings for the Solr Nodes Admin UI
 Key: SOLR-12707
 URL: https://issues.apache.org/jira/browse/SOLR-12707
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Varun Thacker


Similar to SOLR-12692 , we should add hints and warnings for anomalies that we 
can detect on the solr nodes

[~gregharris73] and I are discussing this offline so this is a placeholder 
Jira. We'll put in some ideas here

 






[jira] [Commented] (SOLR-12692) Add hints/warnings for the ZK Status Admin UI

2018-08-28 Thread Greg Harris (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595291#comment-16595291
 ] 

Greg Harris commented on SOLR-12692:


Additional feature request would be if you could say do a click for a 'cons' 
command which will show latencies and packets rcvd/sent on all connections. 
This can be useful when determining if that max latency is an outlier or a 
significant problem or packet communication on a connection. You could also do 
ones for 'crst' (Connection reset of stats), 'srst' (Server reset of stats). 
Possibly might add 'dump' for connection ids and attached ephemeral nodes, but 
perhaps getting farther out there. I think the most important one here might 
just be 'cons'

> Add hints/warnings for the ZK Status Admin UI
> -
>
> Key: SOLR-12692
> URL: https://issues.apache.org/jira/browse/SOLR-12692
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Varun Thacker
>Priority: Minor
> Attachments: SOLR-12692.patch, wrong_zk_warning.png, zk_ensemble.png
>
>
> Firstly I love the new UI pages ( ZK Status and Nodes ) . Thanks [~janhoy] 
> for all the great work!
> I setup a 3 node ZK ensemble to play around with the UI and attaching the 
> screenshot for reference.
>  
> Here are a few suggestions I had
>  # Let’s show Approximate Size in human readable form.  We can use 
> RamUsageEstimator#humanReadableUnits to calculate it
>  # Show warning symbol when Ensemble is standalone
>  # If maxSessionTimeout < Solr's ZK_CLIENT_TIMEOUT then ZK will only honor 
> up-to the maxSessionTimeout value for the Solr->ZK connection. We could mark 
> that as a warning.
>  # If maxClientCnxns < live_nodes show this as a red? Each solr node connects 
> to all zk nodes so if the number of nodes in the cluster is high one should 
> also be increasing maxClientCnxns
>  






[JENKINS] Lucene-Solr-Tests-7.x - Build # 829 - Unstable

2018-08-28 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/829/

3 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitMixedReplicaTypesLink

Error Message:
unexpected shard state expected: but was:

Stack Trace:
java.lang.AssertionError: unexpected shard state expected: but 
was:
at 
__randomizedtesting.SeedInfo.seed([C2A843A389678319:FEC68AFA2CBF2880]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.verifyShard(ShardSplitTest.java:372)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.doSplitMixedReplicaTypes(ShardSplitTest.java:364)
at 
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitMixedReplicaTypesLink(ShardSplitTest.java:336)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1008)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:983)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Resolved] (SOLR-11911) TestLargeCluster.testSearchRate() failure

2018-08-28 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-11911.
--
Resolution: Fixed

This should be fixed now, please reopen if it appears again.

> TestLargeCluster.testSearchRate() failure
> -
>
> Key: SOLR-11911
> URL: https://issues.apache.org/jira/browse/SOLR-11911
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> My Jenkins found a branch_7x seed that reproduced 4/5 times for me:
> {noformat}
> Checking out Revision af9706cb89335a5aa04f9bcae0c2558a61803b50 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLargeCluster 
> -Dtests.method=testSearchRate -Dtests.seed=2D7724685882A83D -Dtests.slow=true 
> -Dtests.locale=be-BY -Dtests.timezone=Africa/Ouagadougou -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 1.24s J0  | TestLargeCluster.testSearchRate <<<
>[junit4]> Throwable #1: java.lang.AssertionError: The trigger did not 
> fire at all
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([2D7724685882A83D:703F3AE197440E72]:0)
>[junit4]>  at 
> org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testSearchRate(TestLargeCluster.java:547)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=true): {}, locale=be-BY, 
> timezone=Africa/Ouagadougou
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_151 (64-bit)/cpus=16,threads=1,free=388243840,total=502267904
> {noformat}






[jira] [Resolved] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-08-28 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  resolved SOLR-12392.
--
Resolution: Fixed

> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0), 7.5
>
>







[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595271#comment-16595271
 ] 

ASF subversion and git services commented on SOLR-12392:


Commit 9c79275d867805491fe83bd4ec84411c9f617c71 in lucene-solr's branch 
refs/heads/branch_7x from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9c79275 ]

SOLR-12392: Fix several bugs in tests and in trigger event serialization.
Add better support for converting MapWriter instances to JSON.









[jira] [Commented] (SOLR-12662) Reproducing TestPolicy failures: NPE and NoClassDefFoundError

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595270#comment-16595270
 ] 

ASF subversion and git services commented on SOLR-12662:


Commit 6430749d46cda00bb591268ef3ade3386b927c73 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6430749 ]

SOLR-12662: Reproducing TestPolicy failures: NPE and NoClassDefFoundError


> Reproducing TestPolicy failures: NPE and NoClassDefFoundError
> -
>
> Key: SOLR-12662
> URL: https://issues.apache.org/jira/browse/SOLR-12662
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, Tests
>Reporter: Steve Rowe
>Assignee: Steve Rowe
>Priority: Major
> Attachments: SOLR-12662.patch
>
>
> From [https://builds.apache.org/job/Lucene-Solr-Tests-7.x/773/]:
> {noformat}
>[junit4] Suite: org.apache.solr.client.solrj.cloud.autoscaling.TestPolicy
>[junit4]   2> Creating dataDir: 
> /x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-7.x/solr/build/solr-solrj/test/J0/temp/solr.client.solrj.cloud.autoscaling.TestPolicy_D876F0AD4FD0DF80-001/init-core-data-001
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPolicy 
> -Dtests.method=testWithCollection -Dtests.seed=D876F0AD4FD0DF80 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
> -Dtests.timezone=Europe/Busingen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.09s J0 | TestPolicy.testWithCollection <<<
>[junit4]> Throwable #1: java.lang.ExceptionInInitializerError
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([D876F0AD4FD0DF80:575A9671946EA761]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.(Variable.java:242)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.(Variable.java:85)
>[junit4]>  at 
> org.apache.solr.client.solrj.impl.SolrClientNodeStateProvider.getNodeValues(SolrClientNodeStateProvider.java:130)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.TestPolicy.testWithCollection(TestPolicy.java:244)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]> Caused by: java.lang.NullPointerException
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Variable$Type.values(Variable.java:84)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.VariableBase.(VariableBase.java:203)
>[junit4]>  ... 43 more
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPolicy 
> -Dtests.method=testEmptyClusterState -Dtests.seed=D876F0AD4FD0DF80 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
> -Dtests.timezone=Europe/Busingen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.07s J0 | TestPolicy.testEmptyClusterState <<<
>[junit4]> Throwable #1: java.lang.NoClassDefFoundError: Could not 
> initialize class org.apache.solr.client.solrj.cloud.autoscaling.VariableBase
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([D876F0AD4FD0DF80:39224A25B6C6C014]:0)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.Policy.(Policy.java:127)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.AutoScalingConfig.getPolicy(AutoScalingConfig.java:353)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper$SessionRef.createSession(PolicyHelper.java:356)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper$SessionRef.get(PolicyHelper.java:321)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getSession(PolicyHelper.java:377)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.PolicyHelper.getReplicaLocations(PolicyHelper.java:113)
>[junit4]>  at 
> org.apache.solr.client.solrj.cloud.autoscaling.TestPolicy.testEmptyClusterState(TestPolicy.java:2185)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestPolicy 
> -Dtests.method=testUtilizeNodeFailure -Dtests.seed=D876F0AD4FD0DF80 
> -Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr 
> -Dtests.timezone=Europe/Busingen -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.02s J0 | TestPolicy.testUtilizeNodeFailure <<<
>[junit4]> Throwable #1: java.lang.NoClassDefFoundError: Could not 
> initialize class org.apache.solr.client.solrj.cloud.autoscaling.VariableBase
>[junit4]

[jira] [Commented] (SOLR-12662) Reproducing TestPolicy failures: NPE and NoClassDefFoundError

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595268#comment-16595268
 ] 

ASF subversion and git services commented on SOLR-12662:


Commit f6c06ae5e58d2c461c7bb7333b82aead753fa295 in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f6c06ae ]

SOLR-12662: Reproducing TestPolicy failures: NPE and NoClassDefFoundError



[jira] [Assigned] (SOLR-12662) Reproducing TestPolicy failures: NPE and NoClassDefFoundError

2018-08-28 Thread Steve Rowe (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe reassigned SOLR-12662:
-

  Assignee: Steve Rowe
Attachment: SOLR-12662.patch

The issue is dueling static initializers: a static initialization block in 
{{VariableBase}} attempts to get static enum {{Variable.Type}}'s entries via 
{{Type.values()}} before {{Type}} has had a chance to initialize itself.

The attached patch switches to lazy initialization: calling {{Type.values()}} 
is delayed until the first time the values are needed.

Committing shortly.
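The initialization-order hazard and the lazy-initialization fix can be sketched with a toy example. This is a hedged illustration only: the names echo the issue ({{Kind}} standing in for {{Variable.Type}}, {{Helper}} for {{VariableBase}}), but it is not the actual Solr code.

```java
// Toy reproduction of the "dueling static initializers" hazard described above.
// If Helper eagerly captured Kind.values() in its own static initializer while
// Kind's static initialization was still in progress (because a Kind constant
// itself touched Helper), values() could observe a partially initialized enum,
// producing the NPE / NoClassDefFoundError seen in the test logs. Deferring the
// values() call until first use sidesteps the ordering problem entirely.
enum Kind {
    FREEDISK, CORES;
}

class Helper {
    // Lazy cache: populated on first call, not during class initialization.
    private static volatile Kind[] cached;

    static Kind[] kinds() {
        Kind[] k = cached;
        if (k == null) {
            k = Kind.values();  // safe here: Kind is fully initialized by now
            cached = k;
        }
        return k;
    }
}
```

By the time any caller reaches {{kinds()}}, both classes have finished initializing, so the ordering between the two static initializers no longer matters.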


[jira] [Comment Edited] (SOLR-12662) Reproducing TestPolicy failures: NPE and NoClassDefFoundError

2018-08-28 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12662?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595263#comment-16595263
 ] 

Steve Rowe edited comment on SOLR-12662 at 8/28/18 4:41 PM:


The issue is dueling static initializers: a static initialization block in 
{{VariableBase}} attempts to get static enum {{Variable.Type}}'s entries via 
{{Type.values()}} before {{Type}} has had a chance to initialize itself.

The attached patch switches to lazy initialization: calling {{Type.values()}} 
is delayed until the first time they are needed.

Committing shortly.


was (Author: steve_rowe):
The issue is dueling static initializers: a static initialization block in 
{{VariableBase}} attempts to get static enum {{Variable.Type}}'s entries via 
{{Type.values()}} before {{Type}} has had a chance to initialize itself.

The attached patch switches to lazy initialization: calling {{Type.values()}} 
is delayed until the first time they are need.

Committing shortly.


[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-10) - Build # 2645 - Unstable!

2018-08-28 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2645/
Java: 64bit/jdk-10 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestWithCollection.testMoveReplicaMainCollection

Error Message:
IOException occured when talking to server at: https://127.0.0.1:35301/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:35301/solr
at 
__randomizedtesting.SeedInfo.seed([6BB9CDDC86EF91D0:60D2F164F48C94BE]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1109)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:819)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.TestWithCollection.testMoveReplicaMainCollection(TestWithCollection.java:316)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Updated] (LUCENE-8460) Better argument validation in StoredField

2018-08-28 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim updated LUCENE-8460:
---
Attachment: LUCENE-8460.patch

> Better argument validation in StoredField
> -
>
> Key: LUCENE-8460
> URL: https://issues.apache.org/jira/browse/LUCENE-8460
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Reporter: Namgyu Kim
>Priority: Major
>  Labels: arguments/parameters, javadocs
> Attachments: LUCENE-8460.patch, LUCENE-8460.patch, LUCENE-8460.patch
>
>
> I have found some invalid Javadocs in StoredField Class.
>  (and I think Field Class has some problems too :D)
>  
> 1) Line 45 method
> {code:java}
> /**
>  * Expert: allows you to customize the {@link
>  * ...
>  * @throws IllegalArgumentException if the field name is null.
>  */
> protected StoredField(String name, FieldType type) {
>   super(name, type);
> }
> {code}
> It is misleading because there is no explanation for *type*.
>  If you follow that super class, you can see the following code(Field class).
> {code:java}
> /**
>  * Expert: creates a field with no initial value.
>  * ...
>  * @throws IllegalArgumentException if either the name or type
>  * is null.
>  */
> protected Field(String name, IndexableFieldType type) {
>   if (name == null) {
> throw new IllegalArgumentException("name must not be null");
>   }
>   this.name = name;
>   if (type == null) {
> throw new IllegalArgumentException("type must not be null");
>   }
>   this.type = type;
> }{code}
> Field class has the exception handling(IllegalArgumentException) for *null 
> IndexableFieldType object*.
>  For that reason, I changed the Javadoc to:
> {code:java}
> /**
>  * Expert: allows you to customize the {@link
>  * ...
>  * @throws IllegalArgumentException if the field name or type
>  * is null.
>  */
> protected StoredField(String name, FieldType type) {
>   super(name, type);
> }
> {code}
>  
> 2) Line 59 method
> {code:java}
> /**
>  * Expert: allows you to customize the {@link
>  * ...
>  * @throws IllegalArgumentException if the field name
>  */
> public StoredField(String name, BytesRef bytes, FieldType type) {
>   super(name, bytes, type);
> }
> {code}
> It is misleading because there is no explanation for *bytes*.
>  If you follow that super class, you can see the following code(Field class).
> {code:java}
> /**
>  * Create field with binary value.
>  *
>  * ...
>  * @throws IllegalArgumentException if the field name is null,
>  * or the field's type is indexed()
>  * @throws NullPointerException if the type is null
>  */
> public Field(String name, BytesRef bytes, IndexableFieldType type) {
>   if (name == null) {
> throw new IllegalArgumentException("name must not be null");
>   }
>   if (bytes == null) {
> throw new IllegalArgumentException("bytes must not be null");
>   }
>   this.fieldsData = bytes;
>   this.type = type;
>   this.name = name;
> }
> {code}
> Field class has the exception handling(IllegalArgumentException) for *null 
> BytesRef object*.
>  For that reason, I changed the Javadoc to:
> {code:java}
> /**
>  * Expert: allows you to customize the {@link
>  * ...
>  * @throws IllegalArgumentException if the field name or value
>  * is null.
>  */
> public StoredField(String name, BytesRef bytes, FieldType type) {
>   super(name, bytes, type);
> }
> {code}
>  
> 3) Line 71 method
> {code:java}
> /**
>  * Create a stored-only field with the given binary value.
>  * ...
>  * @throws IllegalArgumentException if the field name is null.
>  */
> public StoredField(String name, byte[] value) {
>   super(name, value, TYPE);
> }
> {code}
> It is misleading because there is no explanation for *byte array*.
>  If you follow that super class, you can see the following code(Field class).
> {code:java}
> public Field(String name, byte[] value, IndexableFieldType type) {
>   this(name, value, 0, value.length, type);
> }
> // To
> public Field(String name, byte[] value, int offset, int length, 
> IndexableFieldType type) {
>   this(name, new BytesRef(value, offset, length), type);
> }{code}
> When constructing a new BytesRef, an IllegalStateException will be thrown 
> (when assertions are enabled) if value is null.
> {code:java}
> public BytesRef(byte[] bytes, int offset, int length) {
>   this.bytes = bytes;
>   this.offset = offset;
>   this.length = length;
>   assert isValid();
> }
> public boolean isValid() {
>   if (bytes == null) {
> throw new IllegalStateException("bytes is null");
>   }
>   ...
> }{code}
> For that reason, I changed the Javadoc to:
> {code:java}
> /**
>  * Create a stored-only field with the given binary value.
>  * NOTE: the provided byte[] is not copied so be sure
>  * not to change it until you're done with this field.
>  * @param name field name
>  * @param 
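The constructor-validation pattern the quoted comment walks through (eager null checks that back up the documented {{@throws}} contract) can be sketched independently of Lucene. {{StoredFieldLike}} below is a hypothetical stand-in, not the real Lucene class.

```java
// Minimal sketch of the argument-validation pattern discussed above: validate
// arguments eagerly in the constructor so the Javadoc @throws contract is
// accurate. StoredFieldLike is a hypothetical stand-in for illustration only.
class StoredFieldLike {
    final String name;
    final byte[] value;

    /**
     * @throws IllegalArgumentException if name or value is null
     */
    StoredFieldLike(String name, byte[] value) {
        if (name == null) {
            throw new IllegalArgumentException("name must not be null");
        }
        if (value == null) {
            throw new IllegalArgumentException("value must not be null");
        }
        this.name = name;
        this.value = value;  // not copied, mirroring the NOTE in the Javadoc
    }
}
```

Failing fast in the constructor, rather than relying on an assertion deep inside {{BytesRef}}, makes the documented exception behavior hold whether or not assertions are enabled.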

[jira] [Updated] (LUCENE-8460) Better argument validation in StoredField

2018-08-28 Thread Namgyu Kim (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Namgyu Kim updated LUCENE-8460:
---
Attachment: (was: LUCENE-8460.patch)

> Better argument validation in StoredField
> -
>
> Key: LUCENE-8460
> URL: https://issues.apache.org/jira/browse/LUCENE-8460
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/other
>Reporter: Namgyu Kim
>Priority: Major
>  Labels: arguments/parameters, javadocs
> Attachments: LUCENE-8460.patch, LUCENE-8460.patch, LUCENE-8460.patch
>
>
> I have found some invalid Javadocs in StoredField Class.
>  (and I think Field Class has some problems too :D)
>  
> 1) Line 45 method
> {code:java}
> /**
>  * Expert: allows you to customize the {@link
>  * ...
>  * @throws IllegalArgumentException if the field name is null.
>  */
> protected StoredField(String name, FieldType type) {
>   super(name, type);
> }
> {code}
> It is misleading because there is no explanation for *type*.
>  If you follow that super class, you can see the following code(Field class).
> {code:java}
> /**
>  * Expert: creates a field with no initial value.
>  * ...
>  * @throws IllegalArgumentException if either the name or type
>  * is null.
>  */
> protected Field(String name, IndexableFieldType type) {
>   if (name == null) {
> throw new IllegalArgumentException("name must not be null");
>   }
>   this.name = name;
>   if (type == null) {
> throw new IllegalArgumentException("type must not be null");
>   }
>   this.type = type;
> }{code}
> Field class has the exception handling(IllegalArgumentException) for *null 
> IndexableFieldType object*.
>  For that reason, I changed the Javadoc to:
> {code:java}
> /**
>  * Expert: allows you to customize the {@link
>  * ...
>  * @throws IllegalArgumentException if the field name or type
>  * is null.
>  */
> protected StoredField(String name, FieldType type) {
>   super(name, type);
> }
> {code}
>  
> 2) Line 59 method
> {code:java}
> /**
>  * Expert: allows you to customize the {@link
>  * ...
>  * @throws IllegalArgumentException if the field name
>  */
> public StoredField(String name, BytesRef bytes, FieldType type) {
>   super(name, bytes, type);
> }
> {code}
> It is misleading because there is no explanation for *bytes*.
>  If you follow that super class, you can see the following code(Field class).
> {code:java}
> /**
>  * Create field with binary value.
>  *
>  * ...
>  * @throws IllegalArgumentException if the field name is null,
>  * or the field's type is indexed()
>  * @throws NullPointerException if the type is null
>  */
> public Field(String name, BytesRef bytes, IndexableFieldType type) {
>   if (name == null) {
> throw new IllegalArgumentException("name must not be null");
>   }
>   if (bytes == null) {
> throw new IllegalArgumentException("bytes must not be null");
>   }
>   this.fieldsData = bytes;
>   this.type = type;
>   this.name = name;
> }
> {code}
> Field class has the exception handling(IllegalArgumentException) for *null 
> BytesRef object*.
>  For that reason, I changed the Javadoc to:
> {code:java}
> /**
>  * Expert: allows you to customize the {@link
>  * ...
>  * @throws IllegalArgumentException if the field name or value
>  * is null.
>  */
> public StoredField(String name, BytesRef bytes, FieldType type) {
>   super(name, bytes, type);
> }
> {code}
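The fail-fast argument checks quoted above can be exercised with a minimal plain-Java stand-in (the class here is an illustrative sketch, not the actual Lucene Field/StoredField classes):

```java
// Minimal plain-Java stand-in for the fail-fast checks quoted above;
// FailFastField is an illustrative name, not a Lucene class.
public class FailFastField {
    final String name;
    final byte[] value;

    FailFastField(String name, byte[] value) {
        // mirror Field's argument checks: reject nulls up front
        if (name == null) {
            throw new IllegalArgumentException("name must not be null");
        }
        if (value == null) {
            throw new IllegalArgumentException("bytes must not be null");
        }
        this.name = name;
        this.value = value;
    }

    public static void main(String[] args) {
        try {
            new FailFastField("f", null);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints: bytes must not be null
        }
    }
}
```

Documenting both checks with @throws, as proposed, simply makes this existing behaviour visible to callers.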
>  
> 3) Line 71 method
> {code:java}
> /**
>  * Create a stored-only field with the given binary value.
>  * ...
>  * @throws IllegalArgumentException if the field name is null.
>  */
> public StoredField(String name, byte[] value) {
>   super(name, value, TYPE);
> }
> {code}
> It is misleading because there is no explanation for the *byte array*.
>  If you follow the super constructor, you can see the following code in the Field class.
> {code:java}
> public Field(String name, byte[] value, IndexableFieldType type) {
>   this(name, value, 0, value.length, type);
> }
> // which delegates to:
> public Field(String name, byte[] value, int offset, int length, 
> IndexableFieldType type) {
>   this(name, new BytesRef(value, offset, length), type);
> }{code}
> When constructing the new BytesRef, an IllegalStateException will be thrown 
> (from the assert isValid() check) if value is null.
> {code:java}
> public BytesRef(byte[] bytes, int offset, int length) {
>   this.bytes = bytes;
>   this.offset = offset;
>   this.length = length;
>   assert isValid();
> }
> public boolean isValid() {
>   if (bytes == null) {
> throw new IllegalStateException("bytes is null");
>   }
>   ...
> }{code}
> For that reason, I changed the Javadoc to:
> {code:java}
> /**
>  * Create a stored-only field with the given binary value.
>  * NOTE: the provided byte[] is not copied so be sure
>  * not to change it until you're done with this field.
>  * @param name field name
>  * 

Re: [jira] [Commented] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-08-28 Thread Erick Erickson
The waiting for cores to close can be because you make a call to something
like coreContainer.getCore but don't call close (which should probably happen
in a finally block). Shot in the dark.

On Tue, Aug 28, 2018, 08:26 David Smiley (JIRA)  wrote:

>
> [
> https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595133#comment-16595133
> ]
>
> David Smiley commented on SOLR-5163:
> 
>
> Thanks for contributing!
>
> I took a brief look at your patch.
>  * I think DisMaxQParser.parseQueryFields should include validation.  Note
> that this method is also used by ExtendedDismaxQParser.  This method
> already takes the schema so it has the necessary inputs.
>  * Only use the Solr schema "IndexSchema", don't go down to the Lucene
> level "FieldInfos".
>  * It's sufficient to call IndexSchema.getField(name).  It'll throw an
> exception if the field can't be found, and that exception will be marked as
> a BAD_REQUEST as it should be.  This method handles dynamic fields; the
> approach you took would not.
>  * Missing a test
>
> RE "Timeout waiting for all directory ref counts to be released", I
> suspect there was an exception reported prior to that point?  Anyway, this
> error surprises me.  If after doing the above and adding a test you still
> get this error, post the patch anyway and I'll take a look then.
>
> > edismax should throw exception when qf refers to nonexistent field
> > --
> >
> > Key: SOLR-5163
> > URL: https://issues.apache.org/jira/browse/SOLR-5163
> > Project: Solr
> >  Issue Type: Bug
> >  Components: query parsers, search
> >Affects Versions: 4.10.4
> >Reporter: Steven Bower
> >Priority: Major
> >  Labels: newdev
> > Attachments: SOLR-5163.patch
> >
> >
> > query:
> > q=foo AND bar
> > qf=field1
> > qf=field2
> > defType=edismax
> > Where field1 exists and field2 doesn't..
> > will treat the AND as a term vs and operator
>
>
>
> --
> This message was sent by Atlassian JIRA
> (v7.6.3#76005)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-08-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595133#comment-16595133
 ] 

David Smiley commented on SOLR-5163:


Thanks for contributing!

I took a brief look at your patch.
 * I think DisMaxQParser.parseQueryFields should include validation.  Note that 
this method is also used by ExtendedDismaxQParser.  This method already takes 
the schema so it has the necessary inputs.
 * Only use the Solr schema "IndexSchema", don't go down to the Lucene level 
"FieldInfos".
 * It's sufficient to call IndexSchema.getField(name).  It'll throw an 
exception if the field can't be found, and that exception will be marked as a 
BAD_REQUEST as it should be.  This method handles dynamic fields; the approach you took would not.
 * Missing a test

RE "Timeout waiting for all directory ref counts to be released", I suspect 
there was an exception reported prior to that point?  Anyway, this error 
surprises me.  If after doing the above and adding a test you still 
get this error, post the patch anyway and I'll take a look then.
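The suggested validation can be sketched as follows — the class and method names here are illustrative stand-ins, not Solr's actual code; in the real patch this logic would live in DisMaxQParser.parseQueryFields and call IndexSchema.getField:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative stand-in for IndexSchema: getField throws for unknown fields,
// mirroring the BAD_REQUEST behaviour described above.
class ToySchema {
    private final Set<String> known;

    ToySchema(String... fields) {
        known = new HashSet<>(Arrays.asList(fields));
    }

    void getField(String name) {
        if (!known.contains(name)) {
            throw new IllegalArgumentException("undefined field " + name);
        }
    }
}

public class QfValidation {
    // Validate every qf entry up front, as suggested for parseQueryFields.
    static void validate(ToySchema schema, List<String> qf) {
        for (String field : qf) {
            schema.getField(field); // throws on the first unknown field
        }
    }

    public static void main(String[] args) {
        ToySchema schema = new ToySchema("field1");
        validate(schema, Arrays.asList("field1")); // ok
        try {
            validate(schema, Arrays.asList("field1", "field2"));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Letting the schema lookup itself throw keeps the dynamic-field handling intact, since the lookup (not the caller) decides whether a name resolves.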

> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't..
> will treat the AND as a term vs and operator






[jira] [Assigned] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-08-28 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-5163:
--

Assignee: David Smiley

> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Assignee: David Smiley
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't..
> will treat the AND as a term vs and operator






[jira] [Commented] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-08-28 Thread Charles Sanders (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595117#comment-16595117
 ] 

Charles Sanders commented on SOLR-5163:
---

I'm new to the project and this is my first contribution, thank you for your 
patience.

I have attached a patch file with code changes to address the issue.  It is not 
a candidate to commit in its current state.  The code changes satisfy the test 
case provided in Cassandrea's comment above.  Using the 'techproducts' example, 
an exception is thrown when query fields (qf) contains field 'series', but the 
query passes when field 'series_t' is used.  If other invalid fields are used, 
such as foo_t, the exception is thrown.  An exception is thrown for any query 
field not persisted to the index or defined in the schema.

However, there are unit test failures when executing 'ant 
-Dtestcase=TestExtendedDismaxParser test'.  The error raised is 
.  I believe this is due more to my lack of knowledge of the 
test framework than the actual code addition.  I may need to add an @AfterClass 
method with some tear-down code.  Not sure.  Maybe someone with more test 
framework knowledge can steer me in the right direction.

Please take a look at the patch and let me know if I have missed the boat 
completely.  Any help / instructions / advice greatly appreciated.

> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't..
> will treat the AND as a term vs and operator






[jira] [Updated] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-08-28 Thread Charles Sanders (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Sanders updated SOLR-5163:
--
Attachment: SOLR-5163.patch

> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't..
> will treat the AND as a term vs and operator






[jira] [Commented] (SOLR-12705) ParseDateFieldUpdateProcessorFactory does not work for atomic update values

2018-08-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595004#comment-16595004
 ] 

David Smiley commented on SOLR-12705:
-

bq. I can imagine other update processors also not working on atomic update 
values ?

Right; I think this is a design deficiency of atomic updates (and not with any 
one URP)
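The deficiency can be illustrated with a toy stand-in (my own simplification, not Solr's actual URP code): a field-mutating processor that only inspects plain String values never sees the date nested inside the {"set": ...} operation map.

```java
import java.util.HashMap;
import java.util.Map;

public class AtomicUpdateDemo {
    // Simplified stand-in (not Solr's code): a field-mutating processor that
    // only inspects plain String values, as described above.
    static Object mutate(Object fieldValue) {
        if (fieldValue instanceof String) {
            return "PARSED(" + fieldValue + ")";
        }
        return fieldValue; // a {"set": ...} map passes through untouched
    }

    public static void main(String[] args) {
        // plain update: the value is a String and gets parsed
        System.out.println(mutate("2018-08-08"));
        // atomic update: the date is nested inside the operation map,
        // so the processor leaves it alone and later parsing fails
        Map<String, Object> atomic = new HashMap<>();
        atomic.put("set", "2018-08-08");
        System.out.println(mutate(atomic));
    }
}
```

Under that framing, any URP that mutates field values by type would miss values wrapped in atomic-update operation maps, which matches the reporter's suspicion.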

> ParseDateFieldUpdateProcessorFactory does not work for atomic update values
> ---
>
> Key: SOLR-12705
> URL: https://issues.apache.org/jira/browse/SOLR-12705
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Major
>
> If I do a set atomic update operation on a date field, then 
> ParseDateFieldUpdateProcessorFactory fails to recognize the date and the 
> document fails to update.
> works:
> {code:java}
> [
> {"id": "1" , "date_dt" : "2018-08-08"}
> ]{code}
> Does not work:
> {code:java}
> [
> {"id": "1" , "date_dt": {"set": "2018-08-08"}}
> ]{code}
> Error:
> {code:java}
> ERROR - 2018-08-27 22:54:45.230; [c:gettingstarted s:shard1 r:core_node5 
> x:gettingstarted_shard1_replica_n2] 
> org.apache.solr.handler.RequestHandlerBase; 
> org.apache.solr.common.SolrException: Invalid Date String:'2018-08-08'
> at org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:247)
> at org.apache.solr.util.DateMathParser.parseMath(DateMathParser.java:226)
> at org.apache.solr.schema.DatePointField.toNativeType(DatePointField.java:113)
> at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.doSet(AtomicUpdateDocumentMerger.java:317)
> at 
> org.apache.solr.update.processor.AtomicUpdateDocumentMerger.merge(AtomicUpdateDocumentMerger.java:106)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.getUpdatedDocument(DistributedUpdateProcessor.java:1350)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:1054)
> at 
> org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:633)
> at 
> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.processAdd(LogUpdateProcessorFactory.java:103)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.AddSchemaFieldsUpdateProcessorFactory$AddSchemaFieldsUpdateProcessor.processAdd(AddSchemaFieldsUpdateProcessorFactory.java:475)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldNameMutatingUpdateProcessorFactory$1.processAdd(FieldNameMutatingUpdateProcessorFactory.java:75)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.FieldMutatingUpdateProcessor.processAdd(FieldMutatingUpdateProcessor.java:118)
> at 
> org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:55)
> at 
> org.apache.solr.update.processor.AbstractDefaultValueUpdateProcessorFactory$DefaultValueUpdateProcessor.processAdd(AbstractDefaultValueUpdateProcessorFactory.java:92){code}
> I can imagine other update processors also not working on atomic update 
> values ?






[jira] [Commented] (LUCENE-8196) Add IntervalQuery and IntervalsSource to expose minimum interval semantics across term fields

2018-08-28 Thread Martin Hermann (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595002#comment-16595002
 ] 

Martin Hermann commented on LUCENE-8196:


First of all, I really like this implementation and the ideas that went into 
it. But as I have spent quite some time with the old span queries and their 
problems, I'd like to comment on some things and maybe offer some fresh 
viewpoints for old problems:
 
 
Obviously, maxwidth is not completely identical to specifying slop: Let's say 
we want to do some sort of synonym expansion and query for "("big bad" OR evil) 
wolf" (this is of course related to the prefix-problem we already know about 
("genome editing"), but I think still slightly different).
With span queries, this would have been possible, as we just have to set slop 
to 0 in all queries, but now we have to do something like
{code:java}
Intervals.maxwidth(3,
    Intervals.ordered(
        Intervals.or(
            Intervals.maxwidth(2,
                Intervals.ordered(Intervals.term("big"), Intervals.term("bad"))),
            Intervals.term("evil")),
        Intervals.term("wolf")));{code}
which also matches "evil eyes wolf", which should not be a match. It would be 
possible to rewrite the query so that the disjunction is at the top level, 
something like
{code:java}
Intervals.or(
    Intervals.maxwidth(2,
        Intervals.ordered(Intervals.term("evil"), Intervals.term("wolf"))),
    Intervals.maxwidth(3,
        Intervals.ordered(Intervals.term("big"), Intervals.term("bad"), Intervals.term("wolf"))));{code}

which would work as expected, but I think we can agree that this is not really 
a nice solution (but I will come back to it later).

 

Now, we already know that "(big OR "big bad") wolf" would not match "big bad 
wolf" (this is exactly the genome editing thing), but I think it is worth 
pointing out exactly why: It actually should not match, according to the 
definition of "minimum interval": Any match for "big bad" is also a match for 
big, so the first IntervalsSource only passes matches for "big", and then we 
get no match for "big wolf". This is a feature of the query semantics of the 
paper (and maybe the reason for the efficiency and simplicity of the 
algorithms): The problems that spanQueries had are gone, because we define the 
unexpected behaviour to be correct*. As much as I like the IntervalQueries, I 
do not really think this is satisfactory.
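This point can be made concrete with a toy numeric sketch (my own simplification, not Lucene's actual algorithm): representing matches as [start, end] position pairs, a disjunction under minimal-interval semantics keeps only intervals that do not contain another candidate, which is exactly why the "big bad" match disappears.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MinimalIntervals {
    // Keep only intervals [start, end] that do not contain another
    // candidate interval -- the minimal-interval rule described above.
    static List<int[]> minimal(List<int[]> candidates) {
        List<int[]> out = new ArrayList<>();
        for (int[] a : candidates) {
            boolean containsOther = false;
            for (int[] b : candidates) {
                if (a != b && a[0] <= b[0] && b[1] <= a[1]) {
                    containsOther = true;
                    break;
                }
            }
            if (!containsOther) {
                out.add(a);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // document "big bad wolf": big = [0,0], phrase "big bad" = [0,1]
        List<int[]> or = minimal(Arrays.asList(new int[]{0, 0}, new int[]{0, 1}));
        // only [0,0] survives; the phrase match [0,1] contains it and is
        // dropped, so the disjunction behaves like a bare "big"
        System.out.println(or.size() + " interval(s), ends at " + or.get(0)[1]);
    }
}
```

Since the surviving interval ends at position 0, a subsequent ordered combination with "wolf" sees a gap where "bad" sits, matching the behaviour described in the comment.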

 
There are actually other, similar cases with containing/containedBy: Let's say 
our document is "big bad big wolf" and we want "bad wolf" (slop 1) to be 
contained by "big wolf" (slop 2). We would get no match in this document, as 
the minimal match for the big interval is just "big wolf" (as the other match, 
"big bad big wolf" contains this one). At least to me this is counter intuitive 
and I would expect the document to match.
It really gets strange if we mix in some "OR":
{noformat}
"big wolf" (slop 1) contained in ("big wolf" (slop 1) OR "bad wolf") {noformat}
does not match "big bad wolf", in contrast to
{noformat}
"big wolf" (slop 1) contained in ("big wolf" (slop 1)){noformat}
, which does. So we actually lose a match by adding an OR-clause, and I think we 
can agree that this is not really good. Of course these are not queries a human 
would write, but I think one major use case of span queries is some sort of 
automatic query generation, and that's where I think it is really important to 
meet at least some basic expectations (such as not losing matches by adding 
disjunctions).
 
I don't see a way to fix this that still follows minimal interval semantics, as 
all this is actually how it SHOULD work there, but this would mean we'd lose 
the correctness proofs. The only thing I can think of is some sort of query 
rewriting, pushing the disjunction as far up as necessary, but this may be 
rather performance-heavy and also does not solve the "bad wolf" (slop 1) 
contained by "big wolf" (slop 2) problem.
 
Any thoughts?

*A short theoretical aside: I think that most of the span query problems came 
from the fact that we want to have a "next match" function, i.e. some sort of 
ordering of matches, together with the nature of span query matches, which are 
essentially a pair of numbers (start and end of match). This means we have to 
specify an order on pairs of numbers (which is possible, of course; the 
solution with span queries was a lexical order, i.e. the start always 
increases, and if it stays the same, the end increases). But I think it is not 
really possible to implement completely lazy behaviour with this ordering: Think 
of some ordered "((a OR b) followed by (c OR d)) with enough slop" and the 
document "a b c d", which should find "a b c d" before "b c" (as the start 
increases), but has to cache the match for "c", which (in the sub-query "(c OR 
d)") occurs before the one for "b". So the combination of 

[jira] [Commented] (SOLR-12519) Support Deeply Nested Docs In Child Documents Transformer

2018-08-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16595001#comment-16595001
 ] 

David Smiley commented on SOLR-12519:
-

The PR looks good; I'm glad we can support adding nested child docs even on a 
search result document that is not itself the root doc (and is tested).  I'll 
do some precommit & tests and commit later today.  I think we'll both be 
relieved that this issue is going to be done soon :-)

> Support Deeply Nested Docs In Child Documents Transformer
> -
>
> Key: SOLR-12519
> URL: https://issues.apache.org/jira/browse/SOLR-12519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12519-fix-solrj-tests.patch, 
> SOLR-12519-no-commit.patch, SOLR-12519.patch
>
>  Time Spent: 24h 40m
>  Remaining Estimate: 0h
>
> As discussed in SOLR-12298, to make use of the meta-data fields in 
> SOLR-12441, there needs to be a smarter child document transformer, which 
> provides the ability to rebuild the original nested documents' structure.
>  In addition, I also propose the transformer will also have the ability to 
> bring only some of the original hierarchy, to prevent unnecessary block join 
> queries. e.g.
> {code}  {"a": "b", "c": [ {"e": "f"}, {"e": "g"} , {"h": "i"} ]} {code}
>  In case my query is for all the children of "a:b" that contain the key "e" 
> in them, the query will be broken into two parts:
>  1. The parent query "a:b"
>  2. The child query "e:*".
> If the only children flag is on, the transformer will return the following 
> documents:
>  {code}[ {"e": "f"}, {"e": "g"} ]{code}
> In case the flag was not turned on (perhaps the default state), the whole 
> document hierarchy will be returned, containing only the matching children:
> {code}{"a": "b", "c": [ {"e": "f"}, {"e": "g"} ]}{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-08-28 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r213321225
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/ChildDocTransformer.java 
---
@@ -109,9 +109,14 @@ public void transform(SolrDocument rootDoc, int 
rootDocId) {
   // Loop each child ID up to the parent (exclusive).
   for (int docId = calcDocIdToIterateFrom(lastChildId, rootDocId); 
docId < rootDocId; ++docId) {
 
-// get the path.  (note will default to ANON_CHILD_KEY if not in 
schema or oddly blank)
+// get the path.  (note will default to ANON_CHILD_KEY if schema 
is not nested or empty string if blank)
 String fullDocPath = getPathByDocId(docId - segBaseId, 
segPathDocValues);
 
+if(isNestedSchema && !fullDocPath.contains(transformedDocPath)) {
+  // is not a descendant of the transformed doc, return fast.
+  return;
--- End diff --

Added another query to 
[TestChildDocumentHierarchy#testNonRootChildren](https://github.com/apache/lucene-solr/pull/416/files#diff-9fe0ab006f82be5c6a07d5bb6dbc6da0R243).
This test failed before I changed the return to continue (previous commit), 
and passes using the latest commit.


---




[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-08-28 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r213319927
  
--- Diff: 
solr/core/src/java/org/apache/solr/response/transform/ChildDocTransformer.java 
---
@@ -99,6 +96,9 @@ public void transform(SolrDocument rootDoc, int 
rootDocId) {
 
   // we'll need this soon...
   final SortedDocValues segPathDocValues = 
DocValues.getSorted(leafReaderContext.reader(), NEST_PATH_FIELD_NAME);
+  // passing a different SortedDocValues obj since the child documents 
which come after are of smaller docIDs,
+  // and the iterator can not be reversed.
+  final String transformedDocPath = getPathByDocId(segRootId, 
DocValues.getSorted(leafReaderContext.reader(), NEST_PATH_FIELD_NAME));
--- End diff --

Sure thing.
Done :-)


---




[jira] [Commented] (SOLR-11801) support customisation of the "highlighting" query response element

2018-08-28 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594991#comment-16594991
 ] 

David Smiley commented on SOLR-11801:
-

I just filed SOLR-12706 and this test seems to have caused it many times, 
although I don't know what the relationship is.

> support customisation of the "highlighting" query response element
> --
>
> Key: SOLR-11801
> URL: https://issues.apache.org/jira/browse/SOLR-11801
> Project: Solr
>  Issue Type: New Feature
>  Components: highlighter
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: 7.3, master (8.0)
>
> Attachments: 17766 jenkins.log, 2618 jenkins.log, SOLR-11801.patch, 
> SOLR-11801.patch, SOLR-11801.patch, SOLR-11801.patch
>
>
> The objective and use case behind the proposed changes is to be able to 
> receive not the out-of-the-box highlighting map
> {code}
> {
>   ...
>   "highlighting" : {
> "MA147LL/A" : {
>   "manu" : [
> "Apple Computer Inc."
>   ]
> }
>   }
> }
> {code}
> as illustrated in 
> https://lucene.apache.org/solr/guide/7_2/highlighting.html#highlighting-in-the-query-response
>  but to be able to alternatively name and customise the highlighting element 
> of the query response to (for example) be like this
> {code}
> {
>   ...
>   "custom_highlighting" : [
> {
>   "id" : "MA147LL/A",
>   "snippets" : {
> "manu" : [
>   "Apple Computer Inc."
> ]
>   }
> }
>   ]
> }
> {code}
> where the highlighting element itself is a list and where the keys of each 
> list element are 'knowable' in advance i.e. they are not 'unknowable' 
> document ids.






[jira] [Created] (SOLR-12706) CloudSolrClient NPE exception when NOT directUpdatesToLeadersOnly

2018-08-28 Thread David Smiley (JIRA)
David Smiley created SOLR-12706:
---

 Summary: CloudSolrClient NPE exception when NOT 
directUpdatesToLeadersOnly
 Key: SOLR-12706
 URL: https://issues.apache.org/jira/browse/SOLR-12706
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: David Smiley


I've seen various tests fail with an NPE with this stack trace:
{noformat}
Caused by: java.lang.NullPointerException
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.buildUrlMap(CloudSolrClient.java:641)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:502)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1018)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:886)
{noformat}

Line 641 is: {{if (!replica.getNodeName().equals(leader.getNodeName()) &&}} in 
a loop that is under a condition {{if (!directUpdatesToLeadersOnly) {}}

Searching emails of failed reports for "CloudSolrClient.buildUrlMap" will turn 
up various failure reports. The first such email in recent times occurred 
August 3rd, and it has recurred multiple times since.
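For what it's worth, the stack trace is consistent with either the replica or the leader reference being null at that comparison. A trivial stand-in reproduction (hypothetical types, not Solr's classes; the null leader is an assumption about the cause):

```java
// Hypothetical stand-in types, not Solr's actual classes.
class Node {
    private final String name;
    Node(String name) { this.name = name; }
    String getNodeName() { return name; }
}

public class NpeDemo {
    public static void main(String[] args) {
        Node replica = new Node("node1");
        Node leader = null; // assumption: the leader lookup returned null
        try {
            // mirrors the comparison at CloudSolrClient.buildUrlMap line 641
            boolean differs = !replica.getNodeName().equals(leader.getNodeName());
            System.out.println(differs);
        } catch (NullPointerException e) {
            System.out.println("NPE, matching the reported stack trace");
        }
    }
}
```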






[jira] [Commented] (LUCENE-8468) A ByteBuffer based Directory implementation (and associated classes)

2018-08-28 Thread Dawid Weiss (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594953#comment-16594953
 ] 

Dawid Weiss commented on LUCENE-8468:
-

Yeah, this duplication is terrible, especially in exception handlers. Good 
catch.

> A ByteBuffer based Directory implementation (and associated classes)
> 
>
> Key: LUCENE-8468
> URL: https://issues.apache.org/jira/browse/LUCENE-8468
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8468.patch
>
>
> A factored-out sub-patch with ByteBufferDirectory and associated index 
> inputs, outputs, etc. and tests. No refactorings or cleanups to any other 
> classes (these will go in to master after 8.0 branch is cut).






[jira] [Commented] (LUCENE-8468) A ByteBuffer based Directory implementation (and associated classes)

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594951#comment-16594951
 ] 

ASF subversion and git services commented on LUCENE-8468:
-

Commit 86efdaa6b63d3cd67bc78fba1b31036d65b17f67 in lucene-solr's branch 
refs/heads/branch_7x from [~dawid.weiss]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=86efdaa ]

LUCENE-8468: use NoSuchFileException instead of FileNotFoundException.


> A ByteBuffer based Directory implementation (and associated classes)
> 
>
> Key: LUCENE-8468
> URL: https://issues.apache.org/jira/browse/LUCENE-8468
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8468.patch
>
>
> A factored-out sub-patch with ByteBufferDirectory and associated index 
> inputs, outputs, etc. and tests. No refactorings or cleanups to any other 
> classes (these will go in to master after 8.0 branch is cut).






[jira] [Commented] (LUCENE-8468) A ByteBuffer based Directory implementation (and associated classes)

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594952#comment-16594952
 ] 

ASF subversion and git services commented on LUCENE-8468:
-

Commit ca54137c8e643edcaf94f98cf976489581493492 in lucene-solr's branch 
refs/heads/master from [~dawid.weiss]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ca54137 ]

LUCENE-8468: use NoSuchFileException instead of FileNotFoundException.


> A ByteBuffer based Directory implementation (and associated classes)
> 
>
> Key: LUCENE-8468
> URL: https://issues.apache.org/jira/browse/LUCENE-8468
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8468.patch
>
>
> A factored-out sub-patch with ByteBufferDirectory and associated index 
> inputs, outputs, etc. and tests. No refactorings or cleanups to any other 
> classes (these will go in to master after 8.0 branch is cut).






[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16594950#comment-16594950
 ] 

ASF subversion and git services commented on SOLR-12392:


Commit 8d1dce933f06c204de9797d14d2bdce336e553c0 in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8d1dce9 ]

SOLR-12392: Fix several bugs in tests and in trigger event serialization.
Add better support for converting MapWriter instances to JSON.


> IndexSizeTriggerTest fails too frequently.
> --
>
> Key: SOLR-12392
> URL: https://issues.apache.org/jira/browse/SOLR-12392
> Project: Solr
>  Issue Type: Test
>  Security Level: Public (Default Security Level. Issues are Public)
>Reporter: Mark Miller
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (8.0), 7.5
>
>







[GitHub] lucene-solr pull request #416: WIP: SOLR-12519

2018-08-28 Thread moshebla
Github user moshebla commented on a diff in the pull request:

https://github.com/apache/lucene-solr/pull/416#discussion_r213306038
  
--- Diff: solr/core/src/java/org/apache/solr/response/transform/ChildDocTransformer.java ---
@@ -109,9 +109,14 @@ public void transform(SolrDocument rootDoc, int rootDocId) {
   // Loop each child ID up to the parent (exclusive).
   for (int docId = calcDocIdToIterateFrom(lastChildId, rootDocId); docId < rootDocId; ++docId) {

-// get the path.  (note will default to ANON_CHILD_KEY if not in schema or oddly blank)
+// get the path.  (note will default to ANON_CHILD_KEY if schema is not nested or empty string if blank)
 String fullDocPath = getPathByDocId(docId - segBaseId, segPathDocValues);

+if(isNestedSchema && !fullDocPath.contains(transformedDocPath)) {
+  // is not a descendant of the transformed doc, return fast.
+  return;
--- End diff --

Yep, you're right.
I'll investigate further to see why a test did not fail because of this.
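The review exchange above turns on whether `fullDocPath.contains(transformedDocPath)` is a safe descendant test. The following standalone sketch (the method name, path strings, and helper are illustrative assumptions, not Solr's actual `ChildDocTransformer` code) shows why a substring check is looser than a prefix check on slash-separated paths:

```java
// Illustrative sketch only: names mirror the diff above, but this is not
// Solr's actual ChildDocTransformer logic.
public class PathCheckSketch {

    // A substring check like fullDocPath.contains(parentPath) matches the
    // parent path anywhere in the string; a prefix check is stricter and
    // only accepts the path itself or true descendants.
    static boolean isDescendant(String fullDocPath, String parentPath) {
        return fullDocPath.equals(parentPath)
            || fullDocPath.startsWith(parentPath + "/");
    }

    public static void main(String[] args) {
        System.out.println(isDescendant("parent/child", "parent"));  // true
        System.out.println(isDescendant("other/parent2", "parent")); // false
        // contains() accepts this even though "grandparent/x" is not a
        // descendant of "parent":
        System.out.println("grandparent/x".contains("parent"));      // true
    }
}
```

A test catching the third case would distinguish the two checks, which may be why none failed here.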


---




[jira] [Commented] (LUCENE-8468) A ByteBuffer based Directory implementation (and associated classes)

2018-08-28 Thread Robert Muir (JIRA)


[ https://issues.apache.org/jira/browse/LUCENE-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594947#comment-16594947 ]

Robert Muir commented on LUCENE-8468:
-------------------------------------

I think we had a race condition in our comments, thank you :)

> A ByteBuffer based Directory implementation (and associated classes)
> 
>
> Key: LUCENE-8468
> URL: https://issues.apache.org/jira/browse/LUCENE-8468
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8468.patch
>
>
> A factored-out sub-patch with ByteBufferDirectory and associated index 
> inputs, outputs, etc. and tests. No refactorings or cleanups to any other 
> classes (these will go into master after 8.0 branch is cut).






[jira] [Commented] (LUCENE-8468) A ByteBuffer based Directory implementation (and associated classes)

2018-08-28 Thread Robert Muir (JIRA)


[ https://issues.apache.org/jira/browse/LUCENE-8468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594943#comment-16594943 ]

Robert Muir commented on LUCENE-8468:
-------------------------------------

Yes, but it seems really wrong for some methods to throw FileNotFoundException and others to throw NoSuchFileException. I don't see any good reason to ever use FileNotFoundException.

> A ByteBuffer based Directory implementation (and associated classes)
> 
>
> Key: LUCENE-8468
> URL: https://issues.apache.org/jira/browse/LUCENE-8468
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Fix For: 7.5
>
> Attachments: LUCENE-8468.patch
>
>
> A factored-out sub-patch with ByteBufferDirectory and associated index 
> inputs, outputs, etc. and tests. No refactorings or cleanups to any other 
> classes (these will go into master after 8.0 branch is cut).
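The commit referenced earlier in the thread standardizes on `NoSuchFileException`. As a hedged illustration of the point being discussed (this is a toy in-memory directory, not Lucene's actual Directory API), `java.nio.file.NoSuchFileException` gives one consistent "missing file" type that also carries the file name via `getFile()`:

```java
import java.io.IOException;
import java.nio.file.NoSuchFileException;
import java.util.HashMap;
import java.util.Map;

// Hypothetical in-memory directory sketch: throws NoSuchFileException
// (java.nio) uniformly instead of mixing in FileNotFoundException (java.io).
public class ByteBufferDirectorySketch {
    private final Map<String, byte[]> files = new HashMap<>();

    public void writeFile(String name, byte[] data) {
        files.put(name, data);
    }

    public byte[] openInput(String name) throws IOException {
        byte[] data = files.get(name);
        if (data == null) {
            // One exception type for every missing-file case; callers can
            // catch NoSuchFileException and recover the name via getFile().
            throw new NoSuchFileException(name);
        }
        return data;
    }

    public static void main(String[] args) throws IOException {
        ByteBufferDirectorySketch dir = new ByteBufferDirectorySketch();
        dir.writeFile("segments_1", new byte[] {1, 2, 3});
        try {
            dir.openInput("missing.si");
        } catch (NoSuchFileException e) {
            System.out.println("missing file: " + e.getFile());
        }
    }
}
```

Callers then need only one catch clause, rather than handling both `FileNotFoundException` and `NoSuchFileException` for the same condition.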





