[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 24 - Still Unstable

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/24/

4 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test

Error Message:
The Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 45 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([3E8DF0F22C129956:B6D9CF2882EEF4AE]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:174)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
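Failures like this are normally re-run locally using the randomizedtesting seed printed in the trace. A sketch of the conventional reproduce line follows; the property names are the usual Lucene/Solr build conventions and the nightly flag is an assumption based on this being a nightly job, so adjust as needed:

```shell
# Rebuild the conventional "reproduce with" line from the root seed shown in
# the stack trace above. Property names follow the common Lucene/Solr
# randomized-testing convention; run from solr/core in a source checkout.
SEED="3E8DF0F22C129956"
REPRO="ant test -Dtestcase=ChaosMonkeySafeLeaderWithPullReplicasTest \
-Dtests.method=test -Dtests.seed=${SEED} -Dtests.nightly=true"
echo "$REPRO"
# To actually run it: (cd solr/core && eval "$REPRO")
```

Note the seed only guarantees the same random choices, not the same thread timing, so chaos-style failures may not reproduce deterministically.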

[jira] [Assigned] (SOLR-11069) LASTPROCESSEDVERSION for CDCR is flawed when buffering is enabled

2017-08-07 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson reassigned SOLR-11069:
-

Assignee: Erick Erickson

> LASTPROCESSEDVERSION for CDCR is flawed when buffering is enabled
> -
>
> Key: SOLR-11069
> URL: https://issues.apache.org/jira/browse/SOLR-11069
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.0
>Reporter: Amrit Sarkar
>Assignee: Erick Erickson
>
> The {{LASTPROCESSEDVERSION}} (abbreviated LPV) action for CDCR breaks down 
> due to a poorly initialised and maintained buffer log for source or target 
> cluster core nodes.
> If buffering is enabled for cores of either the source or target cluster, it 
> returns {{-1}}, *irrespective of the number of tlog entries read by the 
> {{leader}}* node of each shard of the respective collection. Once buffering 
> is disabled, it starts reporting the correct LPV for each core.
> Due to the same flawed behavior, the Update Log Synchronizer may not work 
> as expected, i.e. it provides an incorrect seek position for the 
> {{non-leader}} nodes to advance to. I am not sure whether this is intended 
> behavior for sync, but it surely doesn't feel right.
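For reference, the LPV described above is read through Solr's CDCR request handler. A minimal shell sketch of querying it is below; the helper name `cdcr_url`, the `SOLR_URL` default, and the collection name are assumptions for illustration, while the CDCR actions themselves come from the Solr CDCR API:

```shell
# Hypothetical helper that builds a CDCR API URL for a collection.
# SOLR_URL and the collection name are illustrative assumptions.
SOLR_URL="${SOLR_URL:-http://localhost:8983/solr}"

cdcr_url() {
  collection="$1"; action="$2"
  printf '%s/%s/cdcr?action=%s&wt=json' "$SOLR_URL" "$collection" "$action"
}

# Ask the shard leaders for the last processed version; with buffering
# enabled this reportedly returns -1 instead of the real version:
#   curl -s "$(cdcr_url myCollection LASTPROCESSEDVERSION)"
# After disabling buffering, the correct LPV is reported again:
#   curl -s "$(cdcr_url myCollection DISABLEBUFFER)"
#   curl -s "$(cdcr_url myCollection LASTPROCESSEDVERSION)"
```

Comparing the response before and after {{DISABLEBUFFER}} is a quick way to observe the behavior the report describes.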



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11177) CoreContainer.load needs to send lazily loaded core descriptors to the proper list rather than send them all to the transient lists.

2017-08-07 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11177.
---
   Resolution: Fixed
Fix Version/s: 7.1
   master (8.0)
   6.7

> CoreContainer.load needs to send lazily loaded core descriptors to the proper 
> list rather than send them all to the transient lists.
> 
>
> Key: SOLR-11177
> URL: https://issues.apache.org/jira/browse/SOLR-11177
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: SOLR-11177.patch
>
>
> I suspect this is a minor issue (at least nobody has reported it) but I'm 
> trying to put a bow around transient core handling so I want to examine this 
> more closely.






[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 24 - Still Failing

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/24/

No tests ran.

Build Log:
[...truncated 25710 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (54.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-7.1.0-src.tgz...
   [smoker] 29.5 MB in 0.02 sec (1257.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.1.0.tgz...
   [smoker] 69.0 MB in 0.06 sec (1094.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-7.1.0.zip...
   [smoker] 79.4 MB in 0.06 sec (1261.0 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-7.1.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6171 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.1.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6171 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-7.1.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (269.8 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-7.1.0-src.tgz...
   [smoker] 50.3 MB in 0.04 sec (1140.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.1.0.tgz...
   [smoker] 142.5 MB in 0.12 sec (1158.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-7.1.0.zip...
   [smoker] 143.5 MB in 0.12 sec (1168.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-7.1.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-7.1.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0-java8
   [smoker] Creating Solr home directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/build/smokeTestRelease/tmp/unpack/solr-7.1.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]   [|]   [/]   [-]   
[\]  
   [smoker] Started Solr server on port 

[jira] [Comment Edited] (SOLR-11206) Migrate logic from bin/solr scripts to SolrCLI

2017-08-07 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117815#comment-16117815
 ] 

Jason Gerlowski edited comment on SOLR-11206 at 8/8/17 3:55 AM:


*Questions/Concerns/Thoughts*

Most of these are just some notes I wanted to jot down for my own benefit, 
though they may prove helpful for others to catch my 
mistakes/misconceptions...

* AFAIK, there are no tests for the bin/solr scripts themselves, are there?  
I'm concerned about inadvertently introducing bugs that will cause user issues 
down the road.  Looks like a Catch-22 of sorts: moving the logic to Java will 
allow it to be better tested, but it's difficult to refactor with confidence 
because of the current test situation.  With that in mind, one of my first 
steps here might be to put together a script which exercises the {{bin/solr}} 
commands in many ways.  It's obviously not feasible to capture all (or even 
most) cases, but a gap/hole-ridden benchmark is better than none at all.
* A "benchmark" script like the one suggested above could be used to diff the 
output before and after this refactor, to ensure that the output isn't changing 
in any ways we don't expect/anticipate/want.  Do the backcompat guarantees made 
elsewhere in Solr extend to the output of these scripts as well?  Or is there 
not a rigid expectation around the Solr control scripts?
* I suspect I might run into some discrepancies in behavior between the two 
bin/solr implementations.  I suppose these will just have to be handled on a 
case by case basis (as far as determining which behavior should be taken 
forward.)


was (Author: gerlowskija):
*Questions/Concerns/Thoughts*

Most of these are just some notes I wanted to jot down for my own benefit.  
Though they may provide helpful for others to catch my 
mistakes/misconceptions...

* AFAIK, there are no tests for the bin/solr scripts themselves, are there?  
I'm concerned about inadvertently introducing bugs that will cause users issues 
down the road.  With that in mind, one of my first steps here might be to put 
together a script which exercises the {{bin/solr}} commands in many ways.  It's 
obviously not feasible to capture all cases, but a gap/hole-ridden benchmark is 
better than none at all.
* A "benchmark" script like the one suggested above could be used to diff the 
output before and after this refactor, to ensure that the output isn't changing 
in any ways we don't expect/anticipate/want.  Do the backcompat guarantees made 
elsewhere in Solr extend to the output of these scripts as well?  Or is there 
not a rigid expectation around that?
* I suspect I might run into some discrepancies in behavior between the two 
bin/solr implementations.  I suppose these will just have to be handled on a 
case by case basis (as far as determining which behavior should be taken 
forward.)

> Migrate logic from bin/solr scripts to SolrCLI
> --
>
> Key: SOLR-11206
> URL: https://issues.apache.org/jira/browse/SOLR-11206
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jason Gerlowski
> Fix For: master (8.0)
>
>
> The {{bin/solr}} and {{bin/solr.cmd}} scripts have taken on a lot of logic 
> that would be easier to maintain if it was instead written in Java code, for 
> a handful of reasons
> * Any logic in the control scripts is duplicated in two places by definition.
> * Increasing test coverage of this logic would be much easier if it was 
> written in Java.
> * Few developers are conversant in both bash and Windows-shell, making 
> editing difficult.
> Some sections in these scripts make good candidates for migration to Java.  
> This issue should examine any of these that are brought up.  However the 
> biggest and most obvious candidate for migration is the argument parsing, 
> validation, usage/help text, etc. for the commands that don't directly 
> start/stop Solr processes (i.e. the "create", "delete", "zk", "auth", 
> "assert" commands).






[jira] [Commented] (SOLR-11206) Migrate logic from bin/solr scripts to SolrCLI

2017-08-07 Thread Jason Gerlowski (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16117815#comment-16117815
 ] 

Jason Gerlowski commented on SOLR-11206:


*Questions/Concerns/Thoughts*

Most of these are just some notes I wanted to jot down for my own benefit, 
though they may prove helpful for others to catch my 
mistakes/misconceptions...

* AFAIK, there are no tests for the bin/solr scripts themselves, are there?  
I'm concerned about inadvertently introducing bugs that will cause user issues 
down the road.  With that in mind, one of my first steps here might be to put 
together a script which exercises the {{bin/solr}} commands in many ways.  It's 
obviously not feasible to capture all cases, but a gap/hole-ridden benchmark is 
better than none at all.
* A "benchmark" script like the one suggested above could be used to diff the 
output before and after this refactor, to ensure that the output isn't changing 
in any ways we don't expect/anticipate/want.  Do the backcompat guarantees made 
elsewhere in Solr extend to the output of these scripts as well?  Or is there 
not a rigid expectation around that?
* I suspect I might run into some discrepancies in behavior between the two 
bin/solr implementations.  I suppose these will just have to be handled on a 
case by case basis (as far as determining which behavior should be taken 
forward.)
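The benchmark/diff idea above might start as something like this. Everything here is an assumption rather than an existing Solr artifact: the harness, the output layout, and the choice of exercised subcommands are all illustrative:

```shell
#!/usr/bin/env bash
# Hypothetical harness: run a few non-start/stop bin/solr commands, record
# their output and exit codes, and diff against a saved baseline. All names
# and paths here are illustrative assumptions.
set -u
SOLR_BIN="${SOLR_BIN:-bin/solr}"
OUT_DIR="${OUT_DIR:-/tmp/solr-cli-bench}"
mkdir -p "$OUT_DIR"

# Run one command; capture stdout+stderr and the exit code under a label.
capture() {
  label="$1"; shift
  "$SOLR_BIN" "$@" >"$OUT_DIR/$label.out" 2>&1
  echo "$?" >"$OUT_DIR/$label.rc"
}

# A starting set of commands; far from exhaustive, per the caveat above.
capture create_help create -help
capture delete_help delete -help
capture zk_help     zk -help
capture assert_help assert -help

# Diff against a baseline recorded before the refactor, if one exists;
# any difference is a candidate back-compat break worth reviewing.
if [ -n "${BASELINE_DIR:-}" ] && [ -d "$BASELINE_DIR" ]; then
  diff -ru "$BASELINE_DIR" "$OUT_DIR"
fi
```

Recording a baseline from the current scripts, then re-running after each migration step, would give the before/after diff discussed in the comment.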

> Migrate logic from bin/solr scripts to SolrCLI
> --
>
> Key: SOLR-11206
> URL: https://issues.apache.org/jira/browse/SOLR-11206
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jason Gerlowski
> Fix For: master (8.0)
>
>
> The {{bin/solr}} and {{bin/solr.cmd}} scripts have taken on a lot of logic 
> that would be easier to maintain if it was instead written in Java code, for 
> a handful of reasons
> * Any logic in the control scripts is duplicated in two places by definition.
> * Increasing test coverage of this logic would be much easier if it was 
> written in Java.
> * Few developers are conversant in both bash and Windows-shell, making 
> editing difficult.
> Some sections in these scripts make good candidates for migration to Java.  
> This issue should examine any of these that are brought up.  However the 
> biggest and most obvious candidate for migration is the argument parsing, 
> validation, usage/help text, etc. for the commands that don't directly 
> start/stop Solr processes (i.e. the "create", "delete", "zk", "auth", 
> "assert" commands).






[jira] [Resolved] (SOLR-9824) Documents indexed in bulk are replicated using too many HTTP requests

2017-08-07 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-9824.

   Resolution: Fixed
Fix Version/s: 7.0

I'm changing the issue status now to ensure it's clear that this problem is 
resolved in at least one version (to be released soon).  If someone has time, 
it can be back-ported later with fix-versions edited.

> Documents indexed in bulk are replicated using too many HTTP requests
> -
>
> Key: SOLR-9824
> URL: https://issues.apache.org/jira/browse/SOLR-9824
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.3
>Reporter: David Smiley
>Assignee: Mark Miller
> Fix For: 7.0
>
> Attachments: SOLR-9824.patch, SOLR-9824.patch, SOLR-9824.patch, 
> SOLR-9824.patch, SOLR-9824.patch, SOLR-9824.patch, SOLR-9824.patch, 
> SOLR-9824-tflobbe.patch
>
>
> This takes a while to explain; bear with me. While working on bulk indexing 
> small documents, I looked at the logs of my SolrCloud nodes.  I noticed that 
> shards would see an /update log message every ~6ms, which is *way* too much.  
> These are requests from one shard (not a leader/replica for these docs 
> but the recipient from my client) to the target shard leader (no additional 
> replicas).  One might ask why I'm not sending docs to the right shard in the 
> first place; I have a reason, but it's beside the point -- there's a real 
> Solr perf problem here, and it probably applies equally to 
> replicationFactor>1 situations too.  I could turn off the logs but that would 
> hide useful stuff, and it's disconcerting to me that so many short-lived HTTP 
> requests are happening, somehow at the behest of DistributedUpdateProcessor. 
>  After lots of analysis and debugging and hair pulling, I finally figured it 
> out.  
> In SOLR-7333, [~tpot] introduced an optimization called 
> {{UpdateRequest.isLastDocInBatch()}} in which ConcurrentUpdateSolrClient will 
> poll the internal queue with a '0' timeout, so that it can close the 
> connection without it hanging around any longer than needed.  This part makes 
> sense to me.  Currently the only spot that has the smarts to set this flag is 
> {{JavaBinUpdateRequestCodec.unmarshal.readOuterMostDocIterator()}} at the 
> last document.  So if a shard received docs in a javabin stream (but not 
> other formats) one would expect the _last_ document to have this flag.  
> There's even a test.  Docs without this flag get the default poll time; for 
> javabin it's 25ms.  Okay.
> I _suspect_ that if someone used CloudSolrClient or HttpSolrClient to send 
> javabin data in a batch, the intended efficiencies of SOLR-7333 would apply.  
> I didn't try. In my case, I'm using ConcurrentUpdateSolrClient (and BTW 
> DistributedUpdateProcessor uses CUSC too).  CUSC uses the RequestWriter 
> (defaulting to javabin) to send each document separately, without any leading 
> or trailing marker.  For the XML format, by comparison, there is a 
> leading and trailing marker ( ... ).  Since there's no outer 
> container for the javabin unmarshalling to detect the last document, it marks 
> _every_ document as {{req.lastDocInBatch()}}!  Ouch!






[JENKINS] Lucene-Solr-7.0-Linux (64bit/jdk1.8.0_141) - Build # 164 - Unstable!

2017-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.0-Linux/164/
Java: 64bit/jdk1.8.0_141 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 1) 
Thread[id=23552, name=jetty-launcher-3249-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=23550, name=jetty-launcher-3249-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth] at 
sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.security.hadoop.TestImpersonationWithHadoopAuth: 
   1) Thread[id=23552, name=jetty-launcher-3249-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestImpersonationWithHadoopAuth]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[JENKINS] Lucene-Solr-Tests-master - Build # 2066 - Still Unstable

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2066/

2 tests failed.
FAILED:  org.apache.solr.cloud.MultiThreadedOCPTest.test

Error Message:


Stack Trace:
java.lang.AssertionError
at 
__randomizedtesting.SeedInfo.seed([B258EF9411F2EE33:3A0CD04EBF0E83CB]:0)
at org.junit.Assert.fail(Assert.java:92)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertTrue(Assert.java:54)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.testFillWorkQueue(MultiThreadedOCPTest.java:111)
at 
org.apache.solr.cloud.MultiThreadedOCPTest.test(MultiThreadedOCPTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 

[jira] [Resolved] (LUCENE-7916) CompositeBreakIterator is brittle under ICU4J upgrade.

2017-08-07 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7916.
-
Resolution: Fixed

Thanks [~ckoenig42] !

> CompositeBreakIterator is brittle under ICU4J upgrade.
> --
>
> Key: LUCENE-7916
> URL: https://issues.apache.org/jira/browse/LUCENE-7916
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 6.6
>Reporter: Chris Koenig
> Fix For: master (8.0), 7.1
>
> Attachments: LUCENE-7916.patch, LUCENE-7916.patch
>
>
> We use lucene-analyzers-icu version 6.6.0 in our project. Lucene 6.6.0 is 
> built against ICU4J version 56.1, but our use case requires us to use the 
> latest version of ICU4J, 59.1.
> The problem we have encountered is that 
> CompositeBreakIterator.getBreakIterator(int scriptCode) throws an 
> ArrayIndexOutOfBoundsException for script codes of 167 or higher. In ICU4J 
> 56.1 the highest possible script code is 166, but in ICU4J 59.1 it is 174.
> Internally, CompositeBreakIterator creates an array of size 
> UScript.CODE_LIMIT, but the value of CODE_LIMIT from ICU4J 56.1 is 
> baked into the bytecode by the compiler. So even after overriding the ICU4J 
> dependency to version 59.1 in our project, this array will still have size 
> 167, which is too small.
> {code}
> final class CompositeBreakIterator {
>   private final ICUTokenizerConfig config;
>   private final BreakIteratorWrapper wordBreakers[] = new 
> BreakIteratorWrapper[UScript.CODE_LIMIT];
> {code}
> Output of javap run on CompositeBreakIterator.class from 
> lucene-analyzers-icu-6.6.0.jar
> {code}
> Compiled from "CompositeBreakIterator.java"
> final class 
> org.apache.lucene.analysis.icu.segmentation.CompositeBreakIterator {
>   
> org.apache.lucene.analysis.icu.segmentation.CompositeBreakIterator(org.apache.lucene.analysis.icu.segmentation.ICUTokenizerConfig);
> descriptor: 
> (Lorg/apache/lucene/analysis/icu/segmentation/ICUTokenizerConfig;)V
> Code:
> 0: aload_0
> 1: invokespecial #1  // Method java/lang/Object."<init>":()V
> 4: aload_0
> 5: sipush        167
> 8: anewarray     #3  // class org/apache/lucene/analysis/icu/segmentation/BreakIteratorWrapper
> {code}
> In our case, the ArrayIndexOutOfBoundsException was triggered when we 
> encountered a stray character of the Bhaiksuki script (script code 168) in a 
> chunk of text that we processed.
> CompositeBreakIterator can be made more resilient by changing the type of 
> wordBreakers from an array to a Map and no longer relying on the value of 
> UScript.CODE_LIMIT.
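The compile-time inlining described above can be reproduced in isolation. The sketch below uses illustrative names (not Lucene or ICU4J code) to show why merely swapping the dependency jar cannot change the array size: a `static final int` is copied into the consumer's bytecode at compile time.

```java
// Standalone demo of the constant-inlining pitfall described above:
// a `static final int` is baked into the consumer's bytecode at compile
// time, so only recompiling against the new library picks up a new value.
public class ConstantInliningDemo {
    static class Library {
        // Plays the role of UScript.CODE_LIMIT in ICU4J 56.1.
        static final int CODE_LIMIT = 167;
    }

    // javac emits the literal here ("sipush 167", as in the javap output
    // above); replacing Library.class at runtime would not resize this.
    static final Object[] slots = new Object[Library.CODE_LIMIT];

    public static void main(String[] args) {
        System.out.println(slots.length); // prints 167
    }
}
```

Running `javap -c` on `ConstantInliningDemo.class` shows the same `sipush 167` instruction with no reference back to `Library` at all.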



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7916) CompositeBreakIterator is brittle under ICU4J upgrade.

2017-08-07 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-7916:

Fix Version/s: 7.1
   master (8.0)

> CompositeBreakIterator is brittle under ICU4J upgrade.
> --
>
> Key: LUCENE-7916
> URL: https://issues.apache.org/jira/browse/LUCENE-7916
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 6.6
>Reporter: Chris Koenig
> Fix For: master (8.0), 7.1
>
> Attachments: LUCENE-7916.patch, LUCENE-7916.patch
>
>






[jira] [Commented] (LUCENE-7916) CompositeBreakIterator is brittle under ICU4J upgrade.

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117699#comment-16117699
 ] 

ASF subversion and git services commented on LUCENE-7916:
-

Commit 95af49e5882226be52141a26565d8d2f99b76aaf in lucene-solr's branch 
refs/heads/branch_7x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=95af49e ]

LUCENE-7916: Remove use of deprecated UScript.CODE_LIMIT in ICUTokenizer


> CompositeBreakIterator is brittle under ICU4J upgrade.
> --
>
> Key: LUCENE-7916
> URL: https://issues.apache.org/jira/browse/LUCENE-7916
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 6.6
>Reporter: Chris Koenig
> Fix For: master (8.0), 7.1
>
> Attachments: LUCENE-7916.patch, LUCENE-7916.patch
>
>






[jira] [Commented] (SOLR-11164) OriginalScoreFeature causes NullPointerException during feature logging with SolrCloud mode.

2017-08-07 Thread Yuki Yano (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117698#comment-16117698
 ] 

Yuki Yano commented on SOLR-11164:
--

[~Jonathan GV]
Thank you for testing the patch! As you say, it will return the score calculated 
for the 2nd-phase query. However, in my understanding, this query is built from 
the "q" parameter and thus should be the same as the 1st-phase query.

Details are below. In short, the 1st-phase and 2nd-phase requests carry the same 
"q" parameter, and that query is what gets set on {{ResultContext}}.

1. {{OriginalScoreFeature}} uses {{rb.getQuery()}}, which is supplied via 
{{RankQuery#wrap}}, to calculate the original score.
  
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/contrib/ltr/src/java/org/apache/solr/ltr/search/LTRQParserPlugin.java#L220
  
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/handler/component/ResponseBuilder.java#L427

2. As the following code shows, {{ResultContext}} is built from 
{{ResponseBuilder}} and uses {{rb.getQuery()}} as its query.
  
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java#L367
  
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/response/BasicResultContext.java#L42

3. The {{Query}} parsed from "q" is set as the query of {{ResponseBuilder}} 
during the prepare phase, which runs in both the 1st and 2nd phases.
  
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java#L160-L167

4. In the distributed process, {{QueryComponent}} sets the original parameters 
when it builds the 2nd-phase request, as in the following code (i.e., the same 
"q" parameter as the 1st phase).
  
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/6.6.0/solr/core/src/java/org/apache/solr/handler/component/QueryComponent.java#L1287-L1288

> OriginalScoreFeature causes NullPointerException during feature logging with 
> SolrCloud mode.
> 
>
> Key: SOLR-11164
> URL: https://issues.apache.org/jira/browse/SOLR-11164
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Affects Versions: 6.6
>Reporter: Yuki Yano
> Attachments: SOLR-11164.patch
>
>
> In the FeatureTransformer, OriginalScoreFeature uses the original Query 
> instance preserved in LTRScoringQuery for the evaluation.
> This query is set by RankQuery#wrap during QueryComponent#process.
> In SolrCloud mode, document searches take two steps: finding the top-N 
> document ids, then fetching the documents for the found ids.
> In this case, FeatureTransformer works in the second step and tries to 
> extract features with the LTRScoringQuery built in QueryComponent#prepare.
> However, because the second step doesn't call QueryComponent#process, the 
> original query of LTRScoringQuery remains null, which causes a 
> NullPointerException while evaluating OriginalScoreFeature.
> We can get the original query from the ResultContext that is passed to 
> DocTransformer#setContext, so this problem can be solved by using it when 
> LTRScoringQuery doesn't have the correct original query.
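A minimal standalone model of the fallback described above. The names (`ScoringQuery`, `ResultContext`, `resolveOriginalQuery`) are illustrative, not Solr's actual API; the point is only the null-fallback shape of the fix.

```java
// Standalone model of the proposed fix: prefer the original query
// preserved on the scoring query, and fall back to the query carried by
// the result context when it was never set (the distributed 2nd phase,
// where QueryComponent#process does not run).
public class OriginalQueryFallbackDemo {
    static class ScoringQuery {
        final String originalQuery;
        ScoringQuery(String originalQuery) { this.originalQuery = originalQuery; }
    }
    static class ResultContext {
        final String query;
        ResultContext(String query) { this.query = query; }
    }

    static String resolveOriginalQuery(ScoringQuery sq, ResultContext ctx) {
        // Null means the wrap step never ran; use the context's query.
        return sq.originalQuery != null ? sq.originalQuery : ctx.query;
    }

    public static void main(String[] args) {
        // 2nd phase: original query missing, so the context query is used.
        System.out.println(resolveOriginalQuery(
                new ScoringQuery(null), new ResultContext("title:foo")));
        // 1st phase: the preserved original query wins.
        System.out.println(resolveOriginalQuery(
                new ScoringQuery("title:bar"), new ResultContext("title:foo")));
    }
}
```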






[jira] [Commented] (SOLR-9985) LukeRequestHandler doesn’t populate docFreq for PointFields

2017-08-07 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117680#comment-16117680
 ] 

Steve Rowe commented on SOLR-9985:
--

I took a look at the patch.  I don't think it's ready to commit yet, because:

* it doesn't have a test
* when a points field is indexed, the indexed field should be used to get doc 
freq instead of performing a points-based search.

> LukeRequestHandler doesn’t populate docFreq for PointFields
> ---
>
> Key: SOLR-9985
> URL: https://issues.apache.org/jira/browse/SOLR-9985
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: numeric-tries-to-points
> Attachments: SOLR-9985.patch
>
>
> Followup task of SOLR-8396






[jira] [Commented] (LUCENE-7919) excessive use of notifyAll

2017-08-07 Thread Guoqiang Jiang (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117633#comment-16117633
 ] 

Guoqiang Jiang commented on LUCENE-7919:


My pleasure.

> excessive use of notifyAll
> --
>
> Key: LUCENE-7919
> URL: https://issues.apache.org/jira/browse/LUCENE-7919
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 6.6
>Reporter: Guoqiang Jiang
> Fix For: 7.0
>
>
> I am using Elasticsearch with a write-heavy workload. While profiling with 
> jstack, I found a significant proportion of thread stacks similar to the 
> following:
> {code:java}
> "elasticsearch[test][bulk][T#23]" #126 daemon prio=5 os_prio=0 
> tid=0x7f68f804 nid=0x6b1 runnable [0x7f6918ce9000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Object.notifyAll(Native Method)
> at 
> org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:213)
> - locked <0xea02b6d0> (a 
> org.apache.lucene.index.DocumentsWriterPerThreadPool)
> at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1571)
> at 
> org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1316)
> at 
> org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:663)
> at 
> org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:607)
> at 
> org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:505)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:556)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:545)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:484)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:143)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:113)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:939)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:908)
> at 
> org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:322)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:264)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:888)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:885)
> at 
> org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147)
> at 
> org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1657)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:897)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:93)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:281)
> at 
> org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:252)
> at 
> org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
> at 
> org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644)
> at 
> org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638)
> at 
> 

[jira] [Commented] (LUCENE-7916) CompositeBreakIterator is brittle under ICU4J upgrade.

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117635#comment-16117635
 ] 

ASF subversion and git services commented on LUCENE-7916:
-

Commit a4db6ce3e681d96fd05f6814818b3270ca527821 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a4db6ce ]

LUCENE-7916: Remove use of deprecated UScript.CODE_LIMIT in ICUTokenizer


> CompositeBreakIterator is brittle under ICU4J upgrade.
> --
>
> Key: LUCENE-7916
> URL: https://issues.apache.org/jira/browse/LUCENE-7916
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 6.6
>Reporter: Chris Koenig
> Attachments: LUCENE-7916.patch, LUCENE-7916.patch
>
>






[jira] [Commented] (LUCENE-7916) CompositeBreakIterator is brittle under ICU4J upgrade.

2017-08-07 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117625#comment-16117625
 ] 

Robert Muir commented on LUCENE-7916:
-

My patch needs a minor correction when committing: we need to replace 
{{UScript.CODE_LIMIT}} with 
{{UCharacter.getIntPropertyMaxValue(UProperty.SCRIPT)+1}}, because the former 
is a limit (one plus the maximum value: 175) and the latter is a maximum value 
(174). Tests do not detect this, but that may only be happenstance due to the 
property values/rules/random string generation for the {{SYMBOLS_EMOJI}} script.

> CompositeBreakIterator is brittle under ICU4J upgrade.
> --
>
> Key: LUCENE-7916
> URL: https://issues.apache.org/jira/browse/LUCENE-7916
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: 6.6
>Reporter: Chris Koenig
> Attachments: LUCENE-7916.patch, LUCENE-7916.patch
>
>






[jira] [Commented] (LUCENE-7920) Make it easier to create ip prefix queries

2017-08-07 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117588#comment-16117588
 ] 

Robert Muir commented on LUCENE-7920:
-

And I would duplicate the null checks so that the exception messages stay 
informative, e.g. IAE("InetAddress must not be null") vs. IAE("addressBytes 
must not be null"). It would just be an implementation detail that one method 
calls the other.

> Make it easier to create ip prefix queries
> --
>
> Key: LUCENE-7920
> URL: https://issues.apache.org/jira/browse/LUCENE-7920
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7920.patch
>
>
> {{InetAddress.getByAddress}} automatically transforms IPv6-mapped IPv4 
> addresses to IPv4 addresses. While this is usually desirable, it can make IP 
> prefix queries a bit trappy. For instance, the following code:
> {code}
> InetAddressPoint.newPrefixQuery("a", InetAddress.getByName("::ffff:0:0"), 96);
> {code}
> throws an IAE complaining that the prefix length is invalid: {{illegal 
> prefixLength '96'. Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges}}.
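The trap is reproducible with the JDK alone. The sketch below shows that parsing the IPv4-mapped IPv6 prefix silently yields a 4-byte `Inet4Address`, which is why a downstream prefix query then only accepts prefix lengths 0-32:

```java
import java.net.InetAddress;

public class MappedAddressDemo {
    public static void main(String[] args) throws Exception {
        // "::ffff:0:0" is the IPv4-mapped IPv6 prefix. The JDK detects the
        // mapping while parsing and returns an Inet4Address, so code that
        // inspects the address sees only 4 bytes, not 16.
        InetAddress addr = InetAddress.getByName("::ffff:0:0");
        System.out.println(addr.getClass().getSimpleName()); // Inet4Address
        System.out.println(addr.getAddress().length);        // 4
    }
}
```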






[jira] [Commented] (LUCENE-7920) Make it easier to create ip prefix queries

2017-08-07 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117586#comment-16117586
 ] 

Robert Muir commented on LUCENE-7920:
-

If we decide to do this we should think about the method signature too, because 
{{newPrefixQuery(String,byte[],int)}} is not much different from 
{{newPrefixQuery(String,InetAddress,int)}}: it's just overloading with the same 
name. Null is not allowed, so it's not too bad, but we should still avoid it if 
there is an easy alternative, e.g. a different method name.

> Make it easier to create ip prefix queries
> --
>
> Key: LUCENE-7920
> URL: https://issues.apache.org/jira/browse/LUCENE-7920
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7920.patch
>
>






[jira] [Updated] (SOLR-10983) Fix DOWNNODE -> queue-work explosion

2017-08-07 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-10983:
--
Fix Version/s: 6.7

> Fix DOWNNODE -> queue-work explosion
> 
>
> Key: SOLR-10983
> URL: https://issues.apache.org/jira/browse/SOLR-10983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
> Fix For: 7.0, 6.7, master (8.0), 7.1
>
> Attachments: SOLR-10983.patch
>
>
> Every DOWNNODE command enqueues N copies of itself into queue-work, where N 
> is the number of collections affected by the DOWNNODE.
> This rarely matters in practice, because queue-work gets dumped immediately; 
> however, if anything throws an exception (such as a ZK bad-version error), we 
> don't clear queue-work. Then the next time through the loop we run the 
> expensive DOWNNODE command potentially hundreds of times.
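A standalone sketch of the failure mode described above (illustrative only, not Solr's overseer code): when an entry is removed from the work queue only on success and exceptions are swallowed, any failure causes extra replays of the expensive command on subsequent passes.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueWorkReplayDemo {
    // Processes n queued copies of a command. The first attempt throws,
    // and this buggy loop removes an entry only on success. Returns how
    // many times the expensive command actually ran.
    static int run(int n) {
        Queue<String> queueWork = new ArrayDeque<>();
        for (int i = 0; i < n; i++) queueWork.add("DOWNNODE");
        int executions = 0;
        boolean failedOnce = false;
        while (!queueWork.isEmpty()) {
            try {
                executions++;
                if (!failedOnce) {        // simulate one ZK bad-version error
                    failedOnce = true;
                    throw new IllegalStateException("ZK bad version");
                }
                queueWork.poll();         // only cleared on success
            } catch (IllegalStateException e) {
                // exception swallowed; queue-work is NOT cleared, so the
                // next pass re-runs the command for the surviving entry
            }
        }
        return executions;
    }

    public static void main(String[] args) {
        System.out.println(run(3)); // 4 executions for 3 queued commands
    }
}
```

With hundreds of enqueued copies and repeated exceptions, the wasted replays compound, which is the explosion the fix addresses.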






[jira] [Commented] (SOLR-10983) Fix DOWNNODE -> queue-work explosion

2017-08-07 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117562#comment-16117562
 ] 

Erick Erickson commented on SOLR-10983:
---

I backported this to 6x (future 6.7) as I really expect there to be a final 
release of the 6x code line and didn't want this to be omitted. No harm if 
there's _not_ a 6.7.

> Fix DOWNNODE -> queue-work explosion
> 
>
> Key: SOLR-10983
> URL: https://issues.apache.org/jira/browse/SOLR-10983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
> Fix For: 7.0, master (8.0), 7.1
>
> Attachments: SOLR-10983.patch
>
>






[jira] [Commented] (SOLR-10983) Fix DOWNNODE -> queue-work explosion

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10983?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117560#comment-16117560
 ] 

ASF subversion and git services commented on SOLR-10983:


Commit d704796a785aa0d8e455661e519bb2f0c67b7311 in lucene-solr's branch 
refs/heads/branch_6x from Erick
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d704796 ]

SOLR-10983: Fix DOWNNODE -> queue-work explosion, backporting to 6x as per the 
comments in the JIRA


> Fix DOWNNODE -> queue-work explosion
> 
>
> Key: SOLR-10983
> URL: https://issues.apache.org/jira/browse/SOLR-10983
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Scott Blum
>Assignee: Scott Blum
> Fix For: 7.0, master (8.0), 7.1
>
> Attachments: SOLR-10983.patch
>
>
> Every DOWNNODE command enqueues N copies of itself into queue-work, where N 
> is the number of collections affected by the DOWNNODE.
> This rarely matters in practice, because queue-work gets immediately dumped; 
> however, if anything throws an exception (such as a ZK bad version), we don't 
> clear queue-work.  Then the next time through the loop we run the expensive 
> DOWNNODE command potentially hundreds of times.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11023) Need SortedNumerics/Points version of EnumField

2017-08-07 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117543#comment-16117543
 ] 

Hoss Man commented on SOLR-11023:
-

thanks for finishing this steve!

> Need SortedNumerics/Points version of EnumField
> ---
>
> Key: SOLR-11023
> URL: https://issues.apache.org/jira/browse/SOLR-11023
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Steve Rowe
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 7.0, master (8.0), 7.1
>
> Attachments: SOLR-11023.patch, SOLR-11023.patch, SOLR-11023.patch, 
> SOLR-11023.patch, SOLR-11023.patch, SOLR-11023.patch
>
>
> Although it's not a subclass of TrieField, EnumField does use 
> "LegacyIntField" to index the int value associated with each of the enum 
> values, in addition to using SortedSetDocValuesField when {{docValues="true" 
> multiValued="true"}}.
> I have no idea if Points would be better/worse than Terms for low-cardinality 
> use cases like EnumField, but either way we should think about a new variant 
> of EnumField that doesn't depend on 
> LegacyIntField/LegacyNumericUtils.intToPrefixCoded and uses 
> SortedNumericDocValues.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11190) GraphQuery not working if field has only docValues

2017-08-07 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117541#comment-16117541
 ] 

Varun Thacker commented on SOLR-11190:
--

I think we could do some field checks upfront in {{GraphQueryParser#parse}}.

[~kramachand...@commvault.com] what do you think about adding them to the 
patch?
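The up-front checks being suggested could take roughly this shape: before building the graph query, reject fields that are neither indexed nor docValues-enabled, so the user gets a clear error instead of silently empty traversals. This is a hypothetical sketch; `GraphFieldValidator` and `FieldProps` are stand-ins for illustration, not real Solr classes:

```java
import java.util.Map;

// Hypothetical sketch of up-front field validation for a graph query parser.
// FieldProps stands in for the schema field metadata Solr would consult.
class GraphFieldValidator {
    static final class FieldProps {
        final boolean indexed;
        final boolean docValues;
        FieldProps(boolean indexed, boolean docValues) {
            this.indexed = indexed;
            this.docValues = docValues;
        }
    }

    // Fail fast with a descriptive message rather than building a query
    // that cannot match anything.
    static void validate(String fieldName, Map<String, FieldProps> schema) {
        FieldProps props = schema.get(fieldName);
        if (props == null) {
            throw new IllegalArgumentException("undefined field: " + fieldName);
        }
        if (!props.indexed && !props.docValues) {
            throw new IllegalArgumentException("field " + fieldName
                + " must be indexed or have docValues for graph traversal");
        }
    }
}
```

A docValues-only field passes this check, which matches the intent of the patch: such fields should be traversable, just via a docValues-based query rather than TermQuery.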

> GraphQuery not working if field has only docValues
> --
>
> Key: SOLR-11190
> URL: https://issues.apache.org/jira/browse/SOLR-11190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 6.6
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
> Attachments: SOLR-11190.patch, SOLR-11190.patch, SOLR-11190.patch
>
>
> Graph traversal is not working if the field has only docValues, since the 
> construction of leaf or parent node queries uses only TermQuery.
> \\ \\
> {code:xml|title=managed-schema|borderStyle=solid}
> 
>  docValues="true" />
>  docValues="true" />
>  docValues="true" />
>  docValues="true" />
> id
> 
>  precisionStep="0" positionIncrementGap="0"/>
> 
> {code}
> {code}
> curl -XPOST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/graph/update' --data-binary ' {
>  "add" : { "doc" : { "id" : "1", "name" : "Root1" } },
>  "add" : { "doc" : { "id" : "2", "name" : "Root2" } },
>  "add" : { "doc" : { "id" : "3", "name" : "Root3" } },
>  "add" : { "doc" : { "id" : "11", "parentid" : "1", "name" : "Root1 Child1" } 
> },
>  "add" : { "doc" : { "id" : "12", "parentid" : "1", "name" : "Root1 Child2" } 
> },
>  "add" : { "doc" : { "id" : "13", "parentid" : "1", "name" : "Root1 Child3" } 
> },
>  "add" : { "doc" : { "id" : "21", "parentid" : "2", "name" : "Root2 Child1" } 
> },
>  "add" : { "doc" : { "id" : "22", "parentid" : "2", "name" : "Root2 Child2" } 
> },
>  "add" : { "doc" : { "id" : "121", "parentid" : "12", "name" : "Root12 
> Child1" } },
>  "add" : { "doc" : { "id" : "122", "parentid" : "12", "name" : "Root12 
> Child2" } },
>  "add" : { "doc" : { "id" : "131", "parentid" : "13", "name" : "Root13 
> Child1" } },
>  "commit" : {}
> }'
> {code}
> {code}
> http://localhost:8983/solr/graph/select?q=*:*={!graph from=parentid 
> to=id}id:1
> or
> http://localhost:8983/solr/graph/select?q=*:*={!graph from=id 
> to=parentid}id:122
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-7.0 - Build # 23 - Still Unstable

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.0/23/

5 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test

Error Message:
The Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
investigating!

Stack Trace:
java.lang.AssertionError: The Monkey ran for over 45 seconds and no jetties 
were stopped - this is worth investigating!
at 
__randomizedtesting.SeedInfo.seed([CFF98CEC2CF9E70:84ABA7146C33F388]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
at 
org.apache.solr.cloud.ChaosMonkeySafeLeaderTest.test(ChaosMonkeySafeLeaderTest.java:133)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 125 - Unstable

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/125/

1 tests failed.
FAILED:  
org.apache.solr.cloud.SolrCloudExampleTest.testLoadDocsIntoGettingStartedCollection

Error Message:
KeeperErrorCode = Session expired for /clusterstate.json

Stack Trace:
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = 
Session expired for /clusterstate.json
at 
__randomizedtesting.SeedInfo.seed([41D363886F9A736A:52B051E75EF5CACC]:0)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1212)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:357)
at 
org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:354)
at 
org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
at 
org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:354)
at 
org.apache.solr.common.cloud.ZkStateReader.refreshLegacyClusterState(ZkStateReader.java:541)
at 
org.apache.solr.common.cloud.ZkStateReader.forceUpdateCollection(ZkStateReader.java:309)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.updateMappingsFromZk(AbstractFullDistribZkTestBase.java:674)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.updateMappingsFromZk(AbstractFullDistribZkTestBase.java:669)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createJettys(AbstractFullDistribZkTestBase.java:464)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.createServers(AbstractFullDistribZkTestBase.java:334)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:991)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

[jira] [Commented] (SOLR-11126) Node-level health check handler

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117519#comment-16117519
 ] 

ASF subversion and git services commented on SOLR-11126:


Commit a0ad20f5e6caedc50b8a4030ab4ac9e19095e731 in lucene-solr's branch 
refs/heads/master from [~anshumg]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a0ad20f ]

SOLR-11126: Remove unused import from HealthCheckHandler


> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node-level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)
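A node-level health endpoint of this kind typically just reports whether the node can serve traffic (for SolrCloud, primarily whether it is connected to ZooKeeper). The JSON shape below is an assumption for illustration, not the committed Solr response format:

```java
// Hypothetical sketch of the response a node-level health handler might
// build. The field names ("health", "message") are assumptions, not the
// actual Solr API contract.
class HealthCheck {
    static String buildResponse(boolean connectedToZk) {
        return connectedToZk
            ? "{\"health\":\"GREEN\"}"
            : "{\"health\":\"RED\",\"message\":\"Not connected to ZooKeeper\"}";
    }
}
```

Load balancers would poll the v1 path (solr/admin/health) and take the node out of rotation on a non-green response.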



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11190) GraphQuery not working if field has only docValues

2017-08-07 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117515#comment-16117515
 ] 

Yonik Seeley commented on SOLR-11190:
-

Patch looks fine.

bq.  Also should we be validating the fieldType checks up front?

If it improves something (like error messages) I suppose.  It's not so related 
to this issue, but you can make further improvements at the same time if you 
wish.

> GraphQuery not working if field has only docValues
> --
>
> Key: SOLR-11190
> URL: https://issues.apache.org/jira/browse/SOLR-11190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 6.6
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
> Attachments: SOLR-11190.patch, SOLR-11190.patch, SOLR-11190.patch
>
>
> Graph traversal is not working if the field has only docValues, since the 
> construction of leaf or parent node queries uses only TermQuery.
> \\ \\
> {code:xml|title=managed-schema|borderStyle=solid}
> 
>  docValues="true" />
>  docValues="true" />
>  docValues="true" />
>  docValues="true" />
> id
> 
>  precisionStep="0" positionIncrementGap="0"/>
> 
> {code}
> {code}
> curl -XPOST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/graph/update' --data-binary ' {
>  "add" : { "doc" : { "id" : "1", "name" : "Root1" } },
>  "add" : { "doc" : { "id" : "2", "name" : "Root2" } },
>  "add" : { "doc" : { "id" : "3", "name" : "Root3" } },
>  "add" : { "doc" : { "id" : "11", "parentid" : "1", "name" : "Root1 Child1" } 
> },
>  "add" : { "doc" : { "id" : "12", "parentid" : "1", "name" : "Root1 Child2" } 
> },
>  "add" : { "doc" : { "id" : "13", "parentid" : "1", "name" : "Root1 Child3" } 
> },
>  "add" : { "doc" : { "id" : "21", "parentid" : "2", "name" : "Root2 Child1" } 
> },
>  "add" : { "doc" : { "id" : "22", "parentid" : "2", "name" : "Root2 Child2" } 
> },
>  "add" : { "doc" : { "id" : "121", "parentid" : "12", "name" : "Root12 
> Child1" } },
>  "add" : { "doc" : { "id" : "122", "parentid" : "12", "name" : "Root12 
> Child2" } },
>  "add" : { "doc" : { "id" : "131", "parentid" : "13", "name" : "Root13 
> Child1" } },
>  "commit" : {}
> }'
> {code}
> {code}
> http://localhost:8983/solr/graph/select?q=*:*={!graph from=parentid 
> to=id}id:1
> or
> http://localhost:8983/solr/graph/select?q=*:*={!graph from=id 
> to=parentid}id:122
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7919) excessive use of notifyAll

2017-08-07 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-7919.

   Resolution: Fixed
Fix Version/s: 7.0

Thanks [~18519283579]!
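The general technique behind this kind of fix is to avoid waking every thread on each release, signalling only when a thread may actually be waiting. The sketch below illustrates that idea with a toy pool (SimplePool is a made-up class); it mirrors the spirit of the change, not Lucene's DocumentsWriterPerThreadPool itself:

```java
// General sketch: avoid unconditional notifyAll() on every release by
// tracking waiters and signalling only when someone can make progress.
// Unconditional notifyAll() on a hot path is what showed up in the
// reporter's jstack profiles.
class SimplePool {
    private int available;
    private int waiters = 0;

    SimplePool(int size) { available = size; }

    synchronized void acquire() throws InterruptedException {
        while (available == 0) {
            waiters++;
            try { wait(); } finally { waiters--; }
        }
        available--;
    }

    synchronized void release() {
        available++;
        // Only one permit was freed, so waking a single waiter suffices,
        // and waking zero threads when nobody waits costs nothing.
        if (waiters > 0) {
            notify();
        }
    }

    synchronized int availablePermits() { return available; }
}
```

On the common uncontended path, release() now skips the native notify call entirely, which is exactly the kind of overhead the profile was showing.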

> excessive use of notifyAll
> --
>
> Key: LUCENE-7919
> URL: https://issues.apache.org/jira/browse/LUCENE-7919
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 6.6
>Reporter: Guoqiang Jiang
> Fix For: 7.0
>
>
> I am using Elasticsearch with a write-heavy workload. When profiling with 
> jstack, I found a significant proportion of thread stacks similar to the 
> following:
> {code:java}
> "elasticsearch[test][bulk][T#23]" #126 daemon prio=5 os_prio=0 
> tid=0x7f68f804 nid=0x6b1 runnable [0x7f6918ce9000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Object.notifyAll(Native Method)
> at 
> org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:213)
> - locked <0xea02b6d0> (a 
> org.apache.lucene.index.DocumentsWriterPerThreadPool)
> at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1571)
> at 
> org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1316)
> at 
> org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:663)
> at 
> org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:607)
> at 
> org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:505)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:556)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:545)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:484)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:143)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:113)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:939)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:908)
> at 
> org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:322)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:264)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:888)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:885)
> at 
> org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147)
> at 
> org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1657)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:897)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:93)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:281)
> at 
> org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:252)
> at 
> org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
> at 
> org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:644)
> at 
> org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638)
>   

[jira] [Commented] (LUCENE-7919) excessive use of notifyAll

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117506#comment-16117506
 ] 

ASF subversion and git services commented on LUCENE-7919:
-

Commit a128fcb8444271d73f36744018b5261b3bff0606 in lucene-solr's branch 
refs/heads/branch_7_0 from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a128fcb ]

LUCENE-7919: remove useless notify


> excessive use of notifyAll
> --
>
> Key: LUCENE-7919
> URL: https://issues.apache.org/jira/browse/LUCENE-7919
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 6.6
>Reporter: Guoqiang Jiang
> Fix For: 7.0
>
>
> I am using Elasticsearch with a write-heavy workload. When profiling with 
> jstack, I found a significant proportion of thread stacks similar to the 
> following:
> {code:java}
> "elasticsearch[test][bulk][T#23]" #126 daemon prio=5 os_prio=0 
> tid=0x7f68f804 nid=0x6b1 runnable [0x7f6918ce9000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Object.notifyAll(Native Method)
> at 
> org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:213)
> - locked <0xea02b6d0> (a 
> org.apache.lucene.index.DocumentsWriterPerThreadPool)
> at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1571)
> at 
> org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1316)
> at 
> org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:663)
> at 
> org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:607)
> at 
> org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:505)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:556)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:545)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:484)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:143)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:113)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:939)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:908)
> at 
> org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:322)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:264)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:888)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:885)
> at 
> org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147)
> at 
> org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1657)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:897)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:93)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:281)
> at 
> org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
> at 
> org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:252)
> at 
> org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
> at 
> 

[jira] [Commented] (LUCENE-7919) excessive use of notifyAll

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117504#comment-16117504
 ] 

ASF subversion and git services commented on LUCENE-7919:
-

Commit fe1b75d99448ebfa668a2bab00a462e8e2ded19b in lucene-solr's branch 
refs/heads/branch_7x from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fe1b75d ]

LUCENE-7919: remove useless notify


> excessive use of notifyAll
> --
>
> Key: LUCENE-7919
> URL: https://issues.apache.org/jira/browse/LUCENE-7919
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 6.6
>Reporter: Guoqiang Jiang
>
> I am using Elasticsearch with a write-heavy workload. When profiling with 
> jstack, I found a significant proportion of thread stacks similar to the 
> following:
> {code:java}
> "elasticsearch[test][bulk][T#23]" #126 daemon prio=5 os_prio=0 
> tid=0x7f68f804 nid=0x6b1 runnable [0x7f6918ce9000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Object.notifyAll(Native Method)
> at 
> org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:213)
> - locked <0xea02b6d0> (a 
> org.apache.lucene.index.DocumentsWriterPerThreadPool)
> at 
> org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496)
> at 
> org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1571)
> at 
> org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1316)
> at 
> org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:663)
> at 
> org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:607)
> at 
> org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:505)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:556)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:545)
> at 
> org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:484)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:143)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:113)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:939)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:908)
> at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:322)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:264)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:888)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:885)
> at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147)
> at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1657)
> at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:897)
> at org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:93)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:281)
> at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:252)
> at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
> at 

[jira] [Commented] (LUCENE-7919) excessive use of notifyAll

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117501#comment-16117501
 ] 

ASF subversion and git services commented on LUCENE-7919:
-

Commit b531fbc5fd91d5fabf90a552b809727d68fd1c9f in lucene-solr's branch 
refs/heads/master from Mike McCandless
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=b531fbc ]

LUCENE-7919: remove useless notify


> excessive use of notifyAll
> --
>
> Key: LUCENE-7919
> URL: https://issues.apache.org/jira/browse/LUCENE-7919
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 6.6
>Reporter: Guoqiang Jiang
>
> I am using Elasticsearch with a write-heavy workload. When profiling with 
> jstack, I found a significant proportion of thread stacks similar to the 
> following:
> {code:java}
> "elasticsearch[test][bulk][T#23]" #126 daemon prio=5 os_prio=0 
> tid=0x7f68f804 nid=0x6b1 runnable [0x7f6918ce9000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Object.notifyAll(Native Method)
> at org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:213)
> - locked <0xea02b6d0> (a org.apache.lucene.index.DocumentsWriterPerThreadPool)
> at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496)
> at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1571)
> at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1316)
> at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:663)
> at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:607)
> at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:505)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:556)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:545)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:484)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:143)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:113)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:939)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:908)
> at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:322)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:264)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:888)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:885)
> at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147)
> at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1657)
> at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:897)
> at org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:93)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:281)
> at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:252)
> at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
> at 

[jira] [Commented] (SOLR-11126) Node-level health check handler

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117473#comment-16117473
 ] 

ASF subversion and git services commented on SOLR-11126:


Commit 0dca964a5d9d2d845c9031529630a5455177981b in lucene-solr's branch 
refs/heads/master from [~anshumg]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0dca964 ]

SOLR-11126: Reduce logging to debug, and remove the call to updateLiveNodes on 
every call


> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11126) Node-level health check handler

2017-08-07 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117472#comment-16117472
 ] 

Anshum Gupta commented on SOLR-11126:
-

Thanks for taking a look at this [~shalinmangar].

I've changed the logging to debug, and also removed the {{updateLiveNodes()}} 
call from the code.

In terms of getting this to work for standalone mode too, the PING request 
handler as of now is distributed and is kind of orthogonal to the intent with 
which this was added. I think we should be able to support both off the same 
handler, but I would want to give it more thought instead of just moving the 
code for standalone mode here.

Did you intend to just get the PING handler to work for standalone mode, 
without deprecating it? If so, that's rather simple, but as I mentioned 
earlier, I'd want to give it a little more thought. Feel free to chime in if 
you think I didn't understand you well.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Fix For: master (8.0)
>
> Attachments: SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Updated] (SOLR-11190) GraphQuery not working if field has only docValues

2017-08-07 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-11190:
-
Attachment: SOLR-11190.patch

I took Karthik's latest PR and added some docs and an entry to CHANGES.txt.


[~ysee...@gmail.com] Does the patch look good to you? Also, should we be 
validating the fieldType checks up front? 

> GraphQuery not working if field has only docValues
> --
>
> Key: SOLR-11190
> URL: https://issues.apache.org/jira/browse/SOLR-11190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 6.6
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
> Attachments: SOLR-11190.patch, SOLR-11190.patch, SOLR-11190.patch
>
>
> Graph traversal does not work if a field has only docValues, since the 
> construction of leaf or parent node queries uses only TermQuery.
> \\ \\
> {code:xml|title=managed-schema|borderStyle=solid}
> 
>  docValues="true" />
>  docValues="true" />
>  docValues="true" />
>  docValues="true" />
> id
> 
>  precisionStep="0" positionIncrementGap="0"/>
> 
> {code}
> {code}
> curl -XPOST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/graph/update' --data-binary ' {
>  "add" : { "doc" : { "id" : "1", "name" : "Root1" } },
>  "add" : { "doc" : { "id" : "2", "name" : "Root2" } },
>  "add" : { "doc" : { "id" : "3", "name" : "Root3" } },
>  "add" : { "doc" : { "id" : "11", "parentid" : "1", "name" : "Root1 Child1" } 
> },
>  "add" : { "doc" : { "id" : "12", "parentid" : "1", "name" : "Root1 Child2" } 
> },
>  "add" : { "doc" : { "id" : "13", "parentid" : "1", "name" : "Root1 Child3" } 
> },
>  "add" : { "doc" : { "id" : "21", "parentid" : "2", "name" : "Root2 Child1" } 
> },
>  "add" : { "doc" : { "id" : "22", "parentid" : "2", "name" : "Root2 Child2" } 
> },
>  "add" : { "doc" : { "id" : "121", "parentid" : "12", "name" : "Root12 
> Child1" } },
>  "add" : { "doc" : { "id" : "122", "parentid" : "12", "name" : "Root12 
> Child2" } },
>  "add" : { "doc" : { "id" : "131", "parentid" : "13", "name" : "Root13 
> Child1" } },
>  "commit" : {}
> }'
> {code}
> {code}
> http://localhost:8983/solr/graph/select?q=*:*={!graph from=parentid 
> to=id}id:1
> or
> http://localhost:8983/solr/graph/select?q=*:*={!graph from=id 
> to=parentid}id:122
> {code}
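The traversal these queries perform can be sketched in plain Java (a hand-rolled illustration of the {!graph} frontier expansion over the sample documents above; the class name GraphWalk and its hard-coded edge map are mine, not Solr code):

```java
import java.util.*;

public class GraphWalk {
    // parentid -> child ids, built from the sample documents above
    static final Map<String, List<String>> CHILDREN = new HashMap<>();
    static {
        CHILDREN.put("1", Arrays.asList("11", "12", "13"));
        CHILDREN.put("2", Arrays.asList("21", "22"));
        CHILDREN.put("12", Arrays.asList("121", "122"));
        CHILDREN.put("13", Arrays.asList("131"));
    }

    /** Frontier expansion: start from the root id and repeatedly follow
     *  parentid -> id edges until no new documents are reached. */
    static Set<String> descendants(String root) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> frontier = new ArrayDeque<>();
        frontier.add(root);
        seen.add(root);
        while (!frontier.isEmpty()) {
            String id = frontier.poll();
            for (String child : CHILDREN.getOrDefault(id, Collections.<String>emptyList())) {
                if (seen.add(child)) {
                    frontier.add(child);
                }
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        // {!graph from=parentid to=id}id:1 collects doc 1 and everything below it
        System.out.println(descendants("1")); // [1, 11, 12, 13, 121, 122, 131]
    }
}
```

Each frontier step is where Solr builds the leaf/parent node query; the bug is that this query is a TermQuery, which finds nothing when the field is docValues-only.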






[jira] [Created] (SOLR-11210) Confusing name for aliases in ZK

2017-08-07 Thread Isabelle Giguere (JIRA)
Isabelle Giguere created SOLR-11210:
---

 Summary: Confusing name for aliases in ZK
 Key: SOLR-11210
 URL: https://issues.apache.org/jira/browse/SOLR-11210
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6
Reporter: Isabelle Giguere
Priority: Minor


There's a confusing discrepancy between the aliases information stored in 
Zookeeper and the information returned by LISTALIASES.

http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=alias1&collections=collection0,collection1

http://localhost:8983/solr/admin/collections?action=LISTALIASES&wt=json
{"responseHeader":{"status":0,"QTime":0},"aliases":{"alias1":"collection0,collection1"}}

zkCLI -zkHost localhost:2181/solr -cmd getfile /aliases.json 
/aliases_ZK_output.json
{"collection":{
"alias1":"collection0,collection1"}}

The information stored in ZK looks like a NamedList named "collection", which 
doesn't make any sense.  It should be named "aliases".

org.apache.solr.handler.admin.CollectionsHandler.CollectionOperation.LISTALIASES_OP
 adds the value of the ZK response to a NamedList called "aliases", so it 
doesn't show outside ZK.






[jira] [Commented] (SOLR-11209) Upgrade HttpClient to 4.5.3

2017-08-07 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117388#comment-16117388
 ] 

Hrishikesh Gadre commented on SOLR-11209:
-

[~risdenk] Thanks for pointing out. Submitting a patch for review shortly.

> Upgrade HttpClient to 4.5.3
> ---
>
> Key: SOLR-11209
> URL: https://issues.apache.org/jira/browse/SOLR-11209
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>Priority: Minor
>
> We have not upgraded the HttpClient version for a long time (since SOLR-6865 
> was committed). It may be a good idea to upgrade to the latest stable version 
> (which is 4.5.3).






[jira] [Commented] (SOLR-11209) Upgrade HttpClient to 4.5.3

2017-08-07 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117375#comment-16117375
 ] 

Kevin Risden commented on SOLR-11209:
-

Looks like some work was done for something similar to this in SOLR-8040

> Upgrade HttpClient to 4.5.3
> ---
>
> Key: SOLR-11209
> URL: https://issues.apache.org/jira/browse/SOLR-11209
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>Priority: Minor
>
> We have not upgraded the HttpClient version for a long time (since SOLR-6865 
> was committed). It may be a good idea to upgrade to the latest stable version 
> (which is 4.5.3).






[jira] [Commented] (SOLR-11177) CoreContainer.load needs to send lazily loaded core descriptors to the proper list rather than send them all to the transient lists.

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117341#comment-16117341
 ] 

ASF subversion and git services commented on SOLR-11177:


Commit c23bf29bb0a9f0ef8cd525584ec366fd0c108487 in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c23bf29 ]

SOLR-11177: CoreContainer.load needs to send lazily loaded core descriptors to 
the proper list rather than send them all to the transient lists.

(cherry picked from commit bf168ad)


> CoreContainer.load needs to send lazily loaded core descriptors to the proper 
> list rather than send them all to the transient lists.
> 
>
> Key: SOLR-11177
> URL: https://issues.apache.org/jira/browse/SOLR-11177
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11177.patch
>
>
> I suspect this is a minor issue (at least nobody has reported it) but I'm 
> trying to put a bow around transient core handling so I want to examine this 
> more closely.






[jira] [Commented] (SOLR-11177) CoreContainer.load needs to send lazily loaded core descriptors to the proper list rather than send them all to the transient lists.

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117332#comment-16117332
 ] 

ASF subversion and git services commented on SOLR-11177:


Commit 34e54401fa4c72e7e4d634a8d037bb9757c119bd in lucene-solr's branch 
refs/heads/branch_7x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=34e5440 ]

SOLR-11177: CoreContainer.load needs to send lazily loaded core descriptors to 
the proper list rather than send them all to the transient lists.

(cherry picked from commit bf168ad)


> CoreContainer.load needs to send lazily loaded core descriptors to the proper 
> list rather than send them all to the transient lists.
> 
>
> Key: SOLR-11177
> URL: https://issues.apache.org/jira/browse/SOLR-11177
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11177.patch
>
>
> I suspect this is a minor issue (at least nobody has reported it) but I'm 
> trying to put a bow around transient core handling so I want to examine this 
> more closely.






[jira] [Commented] (LUCENE-7920) Make it easier to create ip prefix queries

2017-08-07 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117325#comment-16117325
 ] 

Robert Muir commented on LUCENE-7920:
-

I'm not convinced this is really the right thing to do, because these are in 
fact ipv4 addresses (just with a different representation).

The spirit of the RFC is kind of against it here: 
https://www.ietf.org/rfc/rfc4038.txt

{quote}
   However, IPv6 applications must not be required to distinguish
   "normal" and "NAT-PT translated" addresses (or any other kind of
   special addresses, including the IPv4-mapped IPv6 addresses): This
   would be completely impractical, and if the distinction must be made,
   it must be done elsewhere (e.g., kernel, system libraries).
{quote}

Also, taking a raw byte[] here looks very error prone. At the very least it 
would need checks that the byte[] is of the correct length (32 or 128 bits 
only), etc.


> Make it easier to create ip prefix queries
> --
>
> Key: LUCENE-7920
> URL: https://issues.apache.org/jira/browse/LUCENE-7920
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7920.patch
>
>
> {{InetAddress.getByAddress}} automatically transforms IPv4-mapped IPv6 
> addresses to IPv4 addresses. While this is usually desirable, it can make ip 
> prefix queries a bit trappy. For instance the following code:
> {code}
> InetAddressPoint.newPrefixQuery("a", InetAddress.getByName("::ffff:0:0"), 96);
> {code}
> throws an IAE complaining that the prefix length is invalid: {{illegal 
> prefixLength '96'. Must be 0-32 for IPv4 ranges, 0-128 for IPv6 ranges}}.
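The trap can be reproduced with the JDK alone: the parser collapses an IPv4-mapped IPv6 literal into an {{Inet4Address}} before InetAddressPoint ever sees it (a self-contained sketch; the class name IpMapping is mine):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class IpMapping {
    public static void main(String[] args) throws UnknownHostException {
        // An IPv4-mapped IPv6 literal: the low 32 bits carry an IPv4 address.
        InetAddress addr = InetAddress.getByName("::ffff:0:0");

        // The JDK normalizes it to a 4-byte IPv4 address, so any code that
        // later enforces "prefixLength <= 32 for IPv4" rejects a /96 prefix.
        System.out.println(addr.getClass().getSimpleName()); // Inet4Address
        System.out.println(addr.getAddress().length);        // 4
    }
}
```

This matches the IAE in the report: by the time newPrefixQuery sees the value it is a 32-bit address, so a /96 prefix is out of range.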






[jira] [Commented] (SOLR-11177) CoreContainer.load needs to send lazily loaded core descriptors to the proper list rather than send them all to the transient lists.

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117321#comment-16117321
 ] 

ASF subversion and git services commented on SOLR-11177:


Commit bf168ad37e4326be28950ede8f958b6c3f1330fa in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bf168ad ]

SOLR-11177: CoreContainer.load needs to send lazily loaded core descriptors to 
the proper list rather than send them all to the transient lists.


> CoreContainer.load needs to send lazily loaded core descriptors to the proper 
> list rather than send them all to the transient lists.
> 
>
> Key: SOLR-11177
> URL: https://issues.apache.org/jira/browse/SOLR-11177
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11177.patch
>
>
> I suspect this is a minor issue (at least nobody has reported it) but I'm 
> trying to put a bow around transient core handling so I want to examine this 
> more closely.






[jira] [Updated] (SOLR-11177) CoreContainer.load needs to send lazily loaded core descriptors to the proper list rather than send them all to the transient lists.

2017-08-07 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11177:
--
Attachment: SOLR-11177.patch

Patch, quite trivial.

> CoreContainer.load needs to send lazily loaded core descriptors to the proper 
> list rather than send them all to the transient lists.
> 
>
> Key: SOLR-11177
> URL: https://issues.apache.org/jira/browse/SOLR-11177
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
> Attachments: SOLR-11177.patch
>
>
> I suspect this is a minor issue (at least nobody has reported it) but I'm 
> trying to put a bow around transient core handling so I want to examine this 
> more closely.






[jira] [Updated] (SOLR-11209) Upgrade HttpClient to 4.5.3

2017-08-07 Thread Hrishikesh Gadre (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hrishikesh Gadre updated SOLR-11209:

Description: We have not upgraded HttpClient version for long time (since 
SOLR-6865 was committed). It may be a good idea to upgrade to the latest stable 
version (which is 4.5.3).  (was: We have upgraded HttpClient version for long 
time (since SOLR-6865 was committed). It may be a good idea to upgrade to the 
latest stable version (which is 4.5.3).)

> Upgrade HttpClient to 4.5.3
> ---
>
> Key: SOLR-11209
> URL: https://issues.apache.org/jira/browse/SOLR-11209
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hrishikesh Gadre
>Priority: Minor
>
> We have not upgraded the HttpClient version for a long time (since SOLR-6865 
> was committed). It may be a good idea to upgrade to the latest stable version 
> (which is 4.5.3).






[jira] [Created] (SOLR-11209) Upgrade HttpClient to 4.5.3

2017-08-07 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-11209:
---

 Summary: Upgrade HttpClient to 4.5.3
 Key: SOLR-11209
 URL: https://issues.apache.org/jira/browse/SOLR-11209
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Hrishikesh Gadre
Priority: Minor


We have upgraded HttpClient version for long time (since SOLR-6865 was 
committed). It may be a good idea to upgrade to the latest stable version 
(which is 4.5.3).






[jira] [Commented] (LUCENE-7919) excessive use of notifyAll

2017-08-07 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7919?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117244#comment-16117244
 ] 

Michael McCandless commented on LUCENE-7919:


Oh indeed, it can be removed!

We used to have threads {{.wait()}} in the past, but we don't do that anymore 
except in the aborting case and we already have a {{.notifyAll}} for that.

I'll remove it; thanks [~18519283579].
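For context, {{Object.notifyAll}} is a native call made while holding the monitor, so issuing it on every release costs something even when no thread is waiting. A minimal Java sketch of the guarded alternative (an illustration with invented names, not Lucene's DocumentsWriterPerThreadPool):

```java
public class GuardedPool {
    private int free = 0;
    private int waiters = 0;   // threads currently blocked in acquire()

    public synchronized void acquire() throws InterruptedException {
        while (free == 0) {
            waiters++;
            try {
                wait();
            } finally {
                waiters--;
            }
        }
        free--;
    }

    public synchronized void release() {
        free++;
        // Only pay for the native notifyAll when a thread can actually wake up.
        if (waiters > 0) {
            notifyAll();
        }
    }

    public synchronized int waiters() { return waiters; }

    public static void main(String[] args) throws InterruptedException {
        GuardedPool pool = new GuardedPool();
        pool.release();   // no waiters, so no notifyAll is issued
        pool.acquire();   // free slot available, no wait needed
        System.out.println("acquired without waiting");
    }
}
```

In Lucene's case the fix was simpler still: since nothing waits on that condition anymore outside the aborting path, the notify could be deleted outright.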

> excessive use of notifyAll
> --
>
> Key: LUCENE-7919
> URL: https://issues.apache.org/jira/browse/LUCENE-7919
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Affects Versions: 6.6
>Reporter: Guoqiang Jiang
>
> I am using Elasticsearch with a write-heavy workload. When profiling with 
> jstack, I found a significant proportion of thread stacks similar to the 
> following:
> {code:java}
> "elasticsearch[test][bulk][T#23]" #126 daemon prio=5 os_prio=0 
> tid=0x7f68f804 nid=0x6b1 runnable [0x7f6918ce9000]
>java.lang.Thread.State: RUNNABLE
> at java.lang.Object.notifyAll(Native Method)
> at org.apache.lucene.index.DocumentsWriterPerThreadPool.release(DocumentsWriterPerThreadPool.java:213)
> - locked <0xea02b6d0> (a org.apache.lucene.index.DocumentsWriterPerThreadPool)
> at org.apache.lucene.index.DocumentsWriter.updateDocument(DocumentsWriter.java:496)
> at org.apache.lucene.index.IndexWriter.updateDocument(IndexWriter.java:1571)
> at org.apache.lucene.index.IndexWriter.addDocument(IndexWriter.java:1316)
> at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:663)
> at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:607)
> at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:505)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:556)
> at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:545)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.executeIndexRequestOnPrimary(TransportShardBulkAction.java:484)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:143)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:113)
> at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:69)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:939)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryShardReference.perform(TransportReplicationAction.java:908)
> at org.elasticsearch.action.support.replication.ReplicationOperation.execute(ReplicationOperation.java:113)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:322)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.onResponse(TransportReplicationAction.java:264)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:888)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$1.onResponse(TransportReplicationAction.java:885)
> at org.elasticsearch.index.shard.IndexShardOperationsLock.acquire(IndexShardOperationsLock.java:147)
> at org.elasticsearch.index.shard.IndexShard.acquirePrimaryOperationLock(IndexShard.java:1657)
> at org.elasticsearch.action.support.replication.TransportReplicationAction.acquirePrimaryShardReference(TransportReplicationAction.java:897)
> at org.elasticsearch.action.support.replication.TransportReplicationAction.access$400(TransportReplicationAction.java:93)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$AsyncPrimaryAction.doRun(TransportReplicationAction.java:281)
> at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:260)
> at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:252)
> at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:69)
> at 

[jira] [Commented] (SOLR-11164) OriginalScoreFeature causes NullPointerException during feature logging with SolrCloud mode.

2017-08-07 Thread Jonathan Gonzalez (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117235#comment-16117235
 ] 

Jonathan Gonzalez commented on SOLR-11164:
--

I have tested the provided patch. Although it does return a score, it is not 
the score from the original query: while re-ranking with the model, it returns 
the score calculated by the second-phase query rather than that of the 
original query.

> OriginalScoreFeature causes NullPointerException during feature logging with 
> SolrCloud mode.
> 
>
> Key: SOLR-11164
> URL: https://issues.apache.org/jira/browse/SOLR-11164
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - LTR
>Affects Versions: 6.6
>Reporter: Yuki Yano
> Attachments: SOLR-11164.patch
>
>
> In FeatureTransformer, OriginalScoreFeature uses the original Query instance 
> preserved in LTRScoringQuery for the evaluation.
> This query is set in RankQuery#wrap during QueryComponent#process.
> With SolrCloud mode, document searches take two steps: finding the top-N 
> document ids, and then fetching the documents for those ids.
> In this case, FeatureTransformer works in the second step and tries to 
> extract features with the LTRScoringQuery built in QueryComponent#prepare.
> However, because the second step doesn't call QueryComponent#process, the 
> original query of LTRScoringQuery remains null, and this causes a 
> NullPointerException while evaluating OriginalScoreFeature.
> We can get the original query from ResultContext, which is an argument of 
> DocTransformer#setContext, so this problem can be solved by using it when 
> LTRScoringQuery doesn't have the correct original query.
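The suggested fix amounts to a null-fallback at feature-extraction time: prefer the query injected by RankQuery#wrap, and reach for the one available through ResultContext only when it is missing. Schematically, in plain Java (class and field names here are invented for illustration, not the actual LTR classes):

```java
public class OriginalQueryFallback {
    /** Stand-in for the query object Solr would carry around. */
    static class Query {
        final String q;
        Query(String q) { this.q = q; }
    }

    /** The scoring query's original query is null in the SolrCloud second
     *  phase because QueryComponent#process never ran; fall back to the
     *  query reachable through the transformer's ResultContext. */
    static Query resolveOriginal(Query fromScoringQuery, Query fromResultContext) {
        return fromScoringQuery != null ? fromScoringQuery : fromResultContext;
    }

    public static void main(String[] args) {
        Query ctxQuery = new Query("title:solr");
        // Second phase: the scoring query carries no original query.
        System.out.println(resolveOriginal(null, ctxQuery).q); // title:solr
    }
}
```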






[jira] [Commented] (SOLR-10845) GraphTermsQParserPlugin doesn't work with Point fields (or DocValues only fields?)

2017-08-07 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117219#comment-16117219
 ] 

Varun Thacker commented on SOLR-10845:
--

+1

> GraphTermsQParserPlugin doesn't work with Point fields (or DocValues only 
> fields?)
> --
>
> Key: SOLR-10845
> URL: https://issues.apache.org/jira/browse/SOLR-10845
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 7.0
>
> Attachments: SOLR-10845.patch
>
>
> GraphTermsQParserPlugin (aka {{graphTerms}}) doesn't work if the {{f}} field 
> being used to build the graph is "Points" based (because the field won't have 
> any {{Terms}})
> GraphTermsQParserPlugin should either be enhanced to work correctly with 
> Points based fields, or should give a clear error if the {{isPointsField()}} 
> returns true for the field type being used.  At present, it silently matches 
> no documents.
> (Note: It appears at first glance that the same basic problem probably exists 
> for any trie/string field which is {{docValues="true" indexed="false}} ?)






[jira] [Updated] (SOLR-10483) Support for IntPointField field types to Parallel SQL

2017-08-07 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10483?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-10483:

Summary: Support for IntPointField field types to Parallel SQL  (was: 
Support for IntPointField field types to Prallel SQL)

> Support for IntPointField field types to Parallel SQL
> -
>
> Key: SOLR-10483
> URL: https://issues.apache.org/jira/browse/SOLR-10483
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Reporter: Michael Suzuki
> Attachments: SOLR-10483.patch
>
>
> Currently the SolrJDBC is unable to handle fields of type Boolean.
> When we query the example techproducts
> {code}
> SELECT popularity FROM techproducts limit 10
> {code}
> We get the following error: cannot be cast to java.lang.String.






[jira] [Commented] (SOLR-10845) GraphTermsQParserPlugin doesn't work with Point fields (or DocValues only fields?)

2017-08-07 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117206#comment-16117206
 ] 

Yonik Seeley commented on SOLR-10845:
-

Yeah, some of these things never worked for non-indexed fields, so adding 
support for docValues only should be a non-blocking new feature and not a 
regression.

> GraphTermsQParserPlugin doesn't work with Point fields (or DocValues only 
> fields?)
> --
>
> Key: SOLR-10845
> URL: https://issues.apache.org/jira/browse/SOLR-10845
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 7.0
>
> Attachments: SOLR-10845.patch
>
>
> GraphTermsQParserPlugin (aka {{graphTerms}}) doesn't work if the {{f}} field 
> being used to build the graph is "Points" based (because the field won't have 
> any {{Terms}})
> GraphTermsQParserPlugin should either be enhanced to work correctly with 
> Points based fields, or should give a clear error if the {{isPointsField()}} 
> returns true for the field type being used.  At present, it silently matches 
> no documents.
> (Note: It appears at first glance that the same basic problem probably exists 
> for any trie/string field which is {{docValues="true" indexed="false}} ?)






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 831 - Failure

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/831/

No tests ran.

Build Log:
[...truncated 25698 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 215 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (25.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-8.0.0-src.tgz...
   [smoker] 29.0 MB in 0.02 sec (1258.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.tgz...
   [smoker] 68.9 MB in 0.06 sec (1239.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-8.0.0.zip...
   [smoker] 79.2 MB in 0.06 sec (1242.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6134 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6134 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-8.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 213 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (283.9 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-8.0.0-src.tgz...
   [smoker] 49.8 MB in 0.05 sec (969.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.tgz...
   [smoker] 142.4 MB in 0.13 sec (1113.1 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-8.0.0.zip...
   [smoker] 143.4 MB in 0.13 sec (1125.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-8.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-8.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8
   [smoker] Creating Solr home directory 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/build/smokeTestRelease/tmp/unpack/solr-8.0.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] "bin/solr" start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983
   [smoker] 

[jira] [Commented] (SOLR-10845) GraphTermsQParserPlugin doesn't work with Point fields (or DocValues only fields?)

2017-08-07 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117153#comment-16117153
 ] 

Varun Thacker commented on SOLR-10845:
--

bq. (Note: It appears at first glance that the same basic problem probably 
exists for any trie/string field which is docValues="true" indexed="false ?)


Karthik filed SOLR-11190 for this

> GraphTermsQParserPlugin doesn't work with Point fields (or DocValues only 
> fields?)
> --
>
> Key: SOLR-10845
> URL: https://issues.apache.org/jira/browse/SOLR-10845
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 7.0
>
> Attachments: SOLR-10845.patch
>
>
> GraphTermsQParserPlugin (aka {{graphTerms}}) doesn't work if the {{f}} field 
> being used to build the graph is "Points" based (because the field won't have 
> any {{Terms}})
> GraphTermsQParserPlugin should either be enhanced to work correctly with 
> Points based fields, or should give a clear error if the {{isPointsField()}} 
> returns true for the field type being used.  At present, it silently matches 
> no documents.
> (Note: It appears at first glance that the same basic problem probably exists 
> for any trie/string field which is {{docValues="true" indexed="false}} ?)






[jira] [Commented] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle

2017-08-07 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117144#comment-16117144
 ] 

Amrit Sarkar commented on SOLR-11200:
-

The above patch works just fine, but the logging isn't obvious, which created 
significant confusion:
{code}
2017-08-07 17:18:18.203 INFO  (Lucene Merge Thread #1) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][Lucene Merge Thread #1]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
merge thread Lucene Merge Thread #0 estSize=3.2 MB (written=2.0 MB) 
runTime=156.3s (stopped=0.0s, paused=0.0s) rate=unlimited
  leave running at Infinity MB/sec
{code}
Even if targetMBPerSec=20, the merges are happening at {{rate=unlimited}}, i.e. 
the maximum possible disk write speed.
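For reference, Lucene's ConcurrentMergeScheduler already exposes the switch such a Solr config flag would toggle; a hedged sketch (the {{autoIoThrottleEnabled}} flag name is made up):

```java
ConcurrentMergeScheduler cms = new ConcurrentMergeScheduler();
if (autoIoThrottleEnabled) {    // hypothetical flag read from solrconfig.xml
  cms.enableAutoIOThrottle();   // merge rates adapt toward targetMBPerSec
} else {
  cms.disableAutoIOThrottle();  // merges run at rate=unlimited, as logged above
}
indexWriterConfig.setMergeScheduler(cms);
```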
{code}
merge thread Lucene Merge Thread #0 estSize=29.4 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #1 estSize=77.6 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #2 estSize=86.6 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #1 estSize=77.6 MB (written=76.1 MB) 
runTime=10.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #2 estSize=86.6 MB (written=1.0 MB) 
runTime=-0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #3 estSize=133.9 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #3 estSize=133.9 MB (written=132.3 MB) 
runTime=12.8s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #4 estSize=71.9 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #4 estSize=71.9 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #5 estSize=82.0 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #6 estSize=92.5 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #7 estSize=128.2 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #7 estSize=128.2 MB (written=117.2 MB) 
runTime=12.2s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #8 estSize=66.7 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #8 estSize=66.7 MB (written=21.1 MB) 
runTime=0.8s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #9 estSize=206.2 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #8 estSize=66.7 MB (written=65.1 MB) 
runTime=9.2s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #9 estSize=206.2 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #9 estSize=206.2 MB (written=191.5 MB) 
runTime=15.3s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #10 estSize=146.7 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #10 estSize=146.7 MB (written=47.3 MB) 
runTime=1.9s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #11 estSize=280.9 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #10 estSize=146.7 MB (written=143.3 MB) 
runTime=17.2s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #11 estSize=280.9 MB (written=1.0 MB) 
runTime=0.3s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #11 estSize=280.9 MB (written=143.3 MB) 
runTime=12.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #12 estSize=100.8 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #11 estSize=280.9 MB (written=208.3 MB) 
runTime=24.4s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #11 estSize=280.9 MB (written=235.3 MB) 
runTime=28.2s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #13 estSize=193.6 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #13 estSize=193.6 MB (written=79.3 MB) 
runTime=4.7s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #13 estSize=193.6 MB (written=155.4 MB) 
runTime=16.1s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread Lucene Merge Thread #14 estSize=35.5 MB (written=0.0 MB) 
runTime=0.0s (stopped=0.0s, paused=0.0s) rate=unlimited
merge thread 

[jira] [Commented] (SOLR-11190) GraphQuery not working if field has only docValues

2017-08-07 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117095#comment-16117095
 ] 

Varun Thacker commented on SOLR-11190:
--

Hi Karthik,

The patch doesn't apply cleanly after today's SOLR-10939 commit. Can you please 
update the patch

> GraphQuery not working if field has only docValues
> --
>
> Key: SOLR-11190
> URL: https://issues.apache.org/jira/browse/SOLR-11190
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 6.6
>Reporter: Karthik Ramachandran
>Assignee: Karthik Ramachandran
> Attachments: SOLR-11190.patch, SOLR-11190.patch
>
>
> Graph traversal is not working if field has only docValues since the 
> construction of leaf or parent node queries uses only TermQuery.
> \\ \\
> {code:xml|title=managed-schema|borderStyle=solid}
> 
>  docValues="true" />
>  docValues="true" />
>  docValues="true" />
>  docValues="true" />
> id
> 
>  precisionStep="0" positionIncrementGap="0"/>
> 
> {code}
> {code}
> curl -XPOST -H 'Content-Type: application/json' 
> 'http://localhost:8983/solr/graph/update' --data-binary ' {
>  "add" : { "doc" : { "id" : "1", "name" : "Root1" } },
>  "add" : { "doc" : { "id" : "2", "name" : "Root2" } },
>  "add" : { "doc" : { "id" : "3", "name" : "Root3" } },
>  "add" : { "doc" : { "id" : "11", "parentid" : "1", "name" : "Root1 Child1" } 
> },
>  "add" : { "doc" : { "id" : "12", "parentid" : "1", "name" : "Root1 Child2" } 
> },
>  "add" : { "doc" : { "id" : "13", "parentid" : "1", "name" : "Root1 Child3" } 
> },
>  "add" : { "doc" : { "id" : "21", "parentid" : "2", "name" : "Root2 Child1" } 
> },
>  "add" : { "doc" : { "id" : "22", "parentid" : "2", "name" : "Root2 Child2" } 
> },
>  "add" : { "doc" : { "id" : "121", "parentid" : "12", "name" : "Root12 
> Child1" } },
>  "add" : { "doc" : { "id" : "122", "parentid" : "12", "name" : "Root12 
> Child2" } },
>  "add" : { "doc" : { "id" : "131", "parentid" : "13", "name" : "Root13 
> Child1" } },
>  "commit" : {}
> }'
> {code}
> {code}
> http://localhost:8983/solr/graph/select?q=*:*&fq={!graph from=parentid 
> to=id}id:1
> or
> http://localhost:8983/solr/graph/select?q=*:*&fq={!graph from=id 
> to=parentid}id:122
> {code}






[jira] [Created] (SOLR-11208) Usage SynchronousQueue in Executors prevent large scale operations

2017-08-07 Thread Björn Häuser (JIRA)
Björn Häuser created SOLR-11208:
---

 Summary: Usage SynchronousQueue in Executors prevent large scale 
operations
 Key: SOLR-11208
 URL: https://issues.apache.org/jira/browse/SOLR-11208
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6
Reporter: Björn Häuser


I am not sure where to start with this one.

I tried to post this already on the mailing list: 
https://mail-archives.apache.org/mod_mbox/lucene-solr-user/201708.mbox/%3c48c49426-33a2-4d79-ae26-a4515b8f8...@gmail.com%3e

In short: the usage of a SynchronousQueue as the workQueue prevents submitting 
more tasks than the maximum number of threads.

For example, taken from OverseerCollectionMessageHandler:


{code:java}
ExecutorService tpe = new ExecutorUtil.MDCAwareThreadPoolExecutor(5, 10, 0L, TimeUnit.MILLISECONDS,
    new SynchronousQueue<>(),
    new DefaultSolrThreadFactory("OverseerCollectionMessageHandlerThreadFactory"));
{code}

This Executor is used when doing a REPLACENODE (= ADDREPLICA) command. When the 
node has more than 10 collections this will fail with the mentioned 
java.util.concurrent.RejectedExecutionException.
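The rejection is easy to reproduce with the JDK alone. This hedged sketch (the class name SyncQueueDemo is made up, and it uses a plain ThreadPoolExecutor rather than Solr's MDCAwareThreadPoolExecutor) shows a 5..10-thread pool backed by a SynchronousQueue accepting exactly 10 blocking tasks before rejecting the rest:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class SyncQueueDemo {

    // Submit `tasks` blocking tasks to a 5..10-thread pool backed by a
    // SynchronousQueue and count how many are accepted before rejection.
    static int accepted(int tasks) {
        ThreadPoolExecutor tpe = new ThreadPoolExecutor(5, 10, 0L,
                TimeUnit.MILLISECONDS, new SynchronousQueue<>());
        CountDownLatch release = new CountDownLatch(1);
        int ok = 0;
        try {
            for (int i = 0; i < tasks; i++) {
                try {
                    // Every task blocks, so no worker is ever free to take a
                    // handoff: once 10 threads exist, execute() must reject.
                    tpe.execute(() -> {
                        try {
                            release.await();
                        } catch (InterruptedException ignored) {
                        }
                    });
                    ok++;
                } catch (RejectedExecutionException e) {
                    // SynchronousQueue has no capacity and the pool is at max.
                }
            }
        } finally {
            release.countDown();
            tpe.shutdown();
        }
        return ok;
    }

    public static void main(String[] args) {
        System.out.println(accepted(12)); // 10 accepted, 2 rejected
    }
}
```

A bounded LinkedBlockingQueue would absorb the burst instead, at the cost of tasks waiting for a free thread, which is exactly the trade-off this issue is weighing.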

I am also not sure how to fix this. Just replacing the queue with a different 
implementation feels wrong to me and could cause unwanted side effects.

Thanks







[jira] [Commented] (SOLR-10814) Solr RuleBasedAuthorization config doesn't work seamlessly with kerberos authentication

2017-08-07 Thread Hrishikesh Gadre (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117059#comment-16117059
 ] 

Hrishikesh Gadre commented on SOLR-10814:
-

bq. What I meant was, all existing code will continue using getPrincipal(). And 
for anyone writing a new authorization plugin, they can use either of the two 
methods. Those who want to play it safe can use getShortName() and not worry 
about the underlying authentication mode. And those who want to do additional 
processing can use getPrincipal().

[~bosco] Thanks for your feedback. I think it makes sense. I would prefer to use 
the short userName for the Sentry plugin (instead of requiring some special 
configuration from the user).

[~noble.paul] it looks like Apache Ranger and Sentry plugins would not need 
special flag if the short username is exposed via AuthorizationContext. But as 
you said RuleBasedAuthorizationPlugin (and other third-party implementations) 
may benefit from a global flag. After thinking about it, I am not sure if we 
can have one solution which would benefit all plugins.

So I suggest the following approach:
* Expose the short userName via AuthorizationContext. This will allow new plugin 
implementations to work without any special configuration.
* Add a parameter in security.json which defines the result of the 
AuthorizationContext#getPrincipal() API (i.e. a fully qualified principal name 
vs the short userName). This will allow the RuleBasedAuthorization plugin as 
well as other third-party implementations to work without any changes. (Note: 
the user will need to set this parameter for that, though.)

Does that make sense?
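For illustration, a minimal sketch of the realm-stripping that the short-userName option implies (the method name shortName is hypothetical, and real deployments would typically apply Hadoop-style auth_to_local rules rather than plain string slicing):

```java
public class ShortNameDemo {

    // Strip the Kerberos realm (and any service/host component) from a
    // principal, so role mappings can use the bare user name regardless
    // of the authentication mode.
    static String shortName(String principal) {
        if (principal == null) {
            return null;
        }
        int at = principal.indexOf('@');
        String primary = (at >= 0) ? principal.substring(0, at) : principal;
        int slash = primary.indexOf('/'); // service principals: name/host@REALM
        return (slash >= 0) ? primary.substring(0, slash) : primary;
    }

    public static void main(String[] args) {
        System.out.println(shortName("foo@EXAMPLE.COM"));        // foo
        System.out.println(shortName("solr/host1@EXAMPLE.COM")); // solr
        System.out.println(shortName("foo"));                    // foo (basic auth)
    }
}
```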


> Solr RuleBasedAuthorization config doesn't work seamlessly with kerberos 
> authentication
> ---
>
> Key: SOLR-10814
> URL: https://issues.apache.org/jira/browse/SOLR-10814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.2
>Reporter: Hrishikesh Gadre
>
> Solr allows configuring roles to control user access to the system. This is 
> accomplished through rule-based permission definitions which are assigned to 
> users.
> The authorization framework in Solr passes the information about the request 
> (to be authorized) using an instance of AuthorizationContext class. Currently 
> the only way to extract authenticated user is via getUserPrincipal() method 
> which returns an instance of java.security.Principal class. The 
> RuleBasedAuthorizationPlugin implementation invokes getName() method on the 
> Principal instance to fetch the list of associated roles.
> https://github.com/apache/lucene-solr/blob/2271e73e763b17f971731f6f69d6ffe46c40b944/solr/core/src/java/org/apache/solr/security/RuleBasedAuthorizationPlugin.java#L156
> In the case of the basic authentication mechanism, the principal is the 
> userName, hence it works fine. But in the case of kerberos authentication, the 
> user principal also contains the REALM information, e.g. instead of foo, it 
> would return f...@example.com. This means that if the user changes the 
> authentication mechanism, he would also need to change the user-role mapping 
> in the authorization section to use f...@example.com instead of foo. This is 
> not good from a usability perspective.






[jira] [Resolved] (SOLR-11198) downconfig downloads empty file as folder

2017-08-07 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-11198.
---
Resolution: Fixed

> downconfig downloads empty file as folder
> -
>
> Key: SOLR-11198
> URL: https://issues.apache.org/jira/browse/SOLR-11198
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
> Environment: Windows 7
>Reporter: Isabelle Giguere
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: SOLR-11198.patch, SOLR-11198.patch
>
>
> With Solr 6.6.0, when downloading a config from Zookeeper (3.4.10), if a file 
> is empty, it is downloaded as a folder (on Windows, at least).
> A Zookeeper browser (Eclipse: Zookeeper Explorer) shows the file as a file, 
> however, in ZK.
> Noticed because we keep an empty synonyms.txt file in the Solr config 
> provided with our product, in case a client would want to use it.
> The workaround is simple, since the file allows comments: just add a comment, 
> so it is not empty.






[jira] [Commented] (SOLR-11198) downconfig downloads empty file as folder

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16117000#comment-16117000
 ] 

ASF subversion and git services commented on SOLR-11198:


Commit 83e3276225691c2c710e5fc89df1a1605a2b4112 in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=83e3276 ]

SOLR-11198: fix test failures

(cherry picked from commit 53db72c)


> downconfig downloads empty file as folder
> -
>
> Key: SOLR-11198
> URL: https://issues.apache.org/jira/browse/SOLR-11198
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
> Environment: Windows 7
>Reporter: Isabelle Giguere
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: SOLR-11198.patch, SOLR-11198.patch
>
>
> With Solr 6.6.0, when downloading a config from Zookeeper (3.4.10), if a file 
> is empty, it is downloaded as a folder (on Windows, at least).
> A Zookeeper browser (Eclipse: Zookeeper Explorer) shows the file as a file, 
> however, in ZK.
> Noticed because we keep an empty synonyms.txt file in the Solr config 
> provided with our product, in case a client would want to use it.
> The workaround is simple, since the file allows comments: just add a comment, 
> so it is not empty.






[jira] [Commented] (SOLR-11198) downconfig downloads empty file as folder

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116999#comment-16116999
 ] 

ASF subversion and git services commented on SOLR-11198:


Commit 2c281457dce8b4a09f5b3c101c92b03d28e3d994 in lucene-solr's branch 
refs/heads/branch_6x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2c28145 ]

SOLR-11198: downconfig downloads empty file as folder, test failures possible 
fix and logging

(cherry picked from commit e053e22)

(cherry picked from commit a3c360e)


> downconfig downloads empty file as folder
> -
>
> Key: SOLR-11198
> URL: https://issues.apache.org/jira/browse/SOLR-11198
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
> Environment: Windows 7
>Reporter: Isabelle Giguere
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: SOLR-11198.patch, SOLR-11198.patch
>
>
> With Solr 6.6.0, when downloading a config from Zookeeper (3.4.10), if a file 
> is empty, it is downloaded as a folder (on Windows, at least).
> A Zookeeper browser (Eclipse: Zookeeper Explorer) shows the file as a file, 
> however, in ZK.
> Noticed because we keep an empty synonyms.txt file in the Solr config 
> provided with our product, in case a client would want to use it.
> The workaround is simple, since the file allows comments: just add a comment, 
> so it is not empty.






[jira] [Commented] (SOLR-11206) Migrate logic from bin/solr scripts to SolrCLI

2017-08-07 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116996#comment-16116996
 ] 

Erick Erickson commented on SOLR-11206:
---

Adding a link; there might be some prior art there and/or we can close them 
when this JIRA gets committed.


> Migrate logic from bin/solr scripts to SolrCLI
> --
>
> Key: SOLR-11206
> URL: https://issues.apache.org/jira/browse/SOLR-11206
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Reporter: Jason Gerlowski
> Fix For: master (8.0)
>
>
> The {{bin/solr}} and {{bin/solr.cmd}} scripts have taken on a lot of logic 
> that would be easier to maintain if it was instead written in Java code, for 
> a handful of reasons
> * Any logic in the control scripts is duplicated in two places by definition.
> * Increasing test coverage of this logic would be much easier if it was 
> written in Java.
> * Few developers are conversant in both bash and Windows-shell, making 
> editing difficult.
> Some sections in these scripts make good candidates for migration to Java.  
> This issue should examine any of these that are brought up.  However the 
> biggest and most obvious candidate for migration is the argument parsing, 
> validation, usage/help text, etc. for the commands that don't directly 
> start/stop Solr processes (i.e. the "create", "delete", "zk", "auth", 
> "assert" commands).






[jira] [Created] (SOLR-11207) Add OWASP dependency checker to detect security vulnerabilities in third party libraries

2017-08-07 Thread Hrishikesh Gadre (JIRA)
Hrishikesh Gadre created SOLR-11207:
---

 Summary: Add OWASP dependency checker to detect security 
vulnerabilities in third party libraries
 Key: SOLR-11207
 URL: https://issues.apache.org/jira/browse/SOLR-11207
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.0
Reporter: Hrishikesh Gadre


The Lucene/Solr project depends on a number of third-party libraries, some of 
which contain security vulnerabilities. Upgrading to versions of those 
libraries that fix those vulnerabilities is a simple, critical step we can take 
to improve the security of the system. But for that we need a tool which can 
scan the Lucene/Solr dependencies and check a security database for known 
vulnerabilities.

I found that [OWASP 
dependency-checker|https://jeremylong.github.io/DependencyCheck/dependency-check-ant/]
 can be used for this purpose. It provides an Ant task which we can include in 
the Lucene/Solr build. We also need to figure out how (and when) to invoke this 
dependency-checker, but that can be decided once we complete the first step of 
integrating the tool with the Lucene/Solr build system.
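A hedged sketch of what the Ant integration could look like (the jar location, target name, and attribute values are illustrative; check the dependency-check-ant documentation for the exact task definition):

```xml
<target name="check-lib-vulnerabilities">
  <!-- Register the OWASP dependency-check Ant task (jar path is illustrative). -->
  <taskdef resource="dependency-check-taskdefs.properties">
    <classpath path="tools/lib/dependency-check-ant.jar"/>
  </taskdef>
  <!-- Scan all bundled jars and write an HTML report of known CVEs. -->
  <dependency-check projectname="lucene-solr"
                    reportoutputdirectory="${build.dir}/dependency-check"
                    reportformat="HTML">
    <fileset dir="${lib.dir}">
      <include name="**/*.jar"/>
    </fileset>
  </dependency-check>
</target>
```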






[jira] [Commented] (SOLR-10821) Write documentation for the autoscaling APIs and policy/preferences syntax for Solr 7.0

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116911#comment-16116911
 ] 

ASF subversion and git services commented on SOLR-10821:


Commit 23005f1ecd741ca2ec645efefbc687049e5347f4 in lucene-solr's branch 
refs/heads/branch_7_0 from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=23005f1 ]

SOLR-10821: backport autoscaling docs for 7x and 7.0


> Write documentation for the autoscaling APIs and policy/preferences syntax 
> for Solr 7.0
> ---
>
> Key: SOLR-10821
> URL: https://issues.apache.org/jira/browse/SOLR-10821
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>  Labels: autoscaling
> Fix For: 7.0
>
>
> We need to document the following:
> # set-policy
> # set-cluster-preferences
> # set-cluster-policy
> # Autoscaling configuration read API
> # Autoscaling diagnostics API
> # policy and preference rule syntax






[jira] [Commented] (SOLR-10821) Write documentation for the autoscaling APIs and policy/preferences syntax for Solr 7.0

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116910#comment-16116910
 ] 

ASF subversion and git services commented on SOLR-10821:


Commit 44176011d98f7092bff4955d001d9acc323b8563 in lucene-solr's branch 
refs/heads/branch_7x from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4417601 ]

SOLR-10821: backport autoscaling docs for 7x and 7.0








Re: 7.0 Release Update

2017-08-07 Thread Anshum Gupta
Good news!

I don't see any 'blockers' for 7.0 anymore, which means that, after giving
Jenkins a couple of days, I'll cut an RC. I intend to do this on
Wednesday/Thursday, unless a blocker comes up, which I hope won't be
the case.

Anshum

On Tue, Jul 25, 2017 at 4:02 PM Steve Rowe  wrote:

> I worked through the list of issues with the "numeric-tries-to-points"
> label and marked as 7.0 Blockers those that seemed reasonable, on the
> assumption that we should at a minimum give clear error messages for points
> incompatibility.
>
> If others don’t agree with the Blocker assessments I’ve made, I’m willing
> to discuss on the issues.
>
> I plan on starting to work on the remaining 7.0 blockers now.  I would
> welcome assistance in clearing them up.
>
> Here’s a JIRA query to see just the remaining 7.0 blockers, of which there
> are currently 12:
>
> <
> https://issues.apache.org/jira/issues/?jql=project+in+(SOLR,LUCENE)+AND+fixVersion=7.0+AND+priority=Blocker+AND+resolution=Unresolved
> >
>
> --
> Steve
> www.lucidworks.com
>
> > On Jul 25, 2017, at 2:41 PM, Anshum Gupta 
> wrote:
> >
> > I will *try* to get to it, but can't confirm. If someone else has a
> spare cycle and can take it up before I get to it, please do.
> >
> > -Anshum
> >
> > On Tue, Jul 25, 2017 at 10:44 AM Cassandra Targett <
> casstarg...@gmail.com> wrote:
> > I believe the only remaining blocker to SOLR-10803 (to mark all Trie*
> > fields as deprecated) is SOLR-11023, which Hoss was working on. As he
> > noted last night, he is off for vacation for the next 2 weeks. Is
> > anyone else available to work on it so 7.0 isn't stalled for 2+ more
> > weeks?
> >
> > Now would also be a good time to look over any other bugs with
> > PointFields and make a case if any should be considered blockers for
> > 7.0. I think they all share a label:
> >
> https://issues.apache.org/jira/issues/?jql=status%20%3D%20Open%20AND%20labels%20%3D%20numeric-tries-to-points
> >
> > On Tue, Jul 11, 2017 at 4:59 PM, Chris Hostetter
> >  wrote:
> > >
> > > : So, my overall point is that if A) we agree that we want to deprecate
> > > : Trie* numeric fields, and B) we want to hold up the 7.0 release until
> > > : that's done, it's more than just updating the example schemas if we
> > > : want to ensure a quality app for users. We still need to fix the
> tests
> > > : and also fix bugs that are going to be really painful for users. And
> > > : to get all that done soon, we definitely need some more volunteers.
> > >
> > > I've beefed up the description of SOLR-10807 with tips on how people
> can
> > > help out...
> > >
> > > https://issues.apache.org/jira/browse/SOLR-10807
> > >
> > >
> > >
> > > -Hoss
> > > http://www.lucidworks.com/
> > >
> > > -
> > > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > > For additional commands, e-mail: dev-h...@lucene.apache.org
> > >
> >
> > -
> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> > For additional commands, e-mail: dev-h...@lucene.apache.org
> >
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


[jira] [Commented] (SOLR-11198) downconfig downloads empty file as folder

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116888#comment-16116888
 ] 

ASF subversion and git services commented on SOLR-11198:


Commit 20a963cd7185d22a13a3801b8c06a9498cf39b1c in lucene-solr's branch 
refs/heads/branch_7x from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=20a963c ]

SOLR-11198: fix test failures

(cherry picked from commit 53db72c)


> downconfig downloads empty file as folder
> -
>
> Key: SOLR-11198
> URL: https://issues.apache.org/jira/browse/SOLR-11198
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
> Environment: Windows 7
>Reporter: Isabelle Giguere
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 6.7, master (8.0), 7.1
>
> Attachments: SOLR-11198.patch, SOLR-11198.patch
>
>
> With Solr 6.6.0, when downloading a config from Zookeeper (3.4.10), if a file 
> is empty, it is downloaded as a folder (on Windows, at least).
> However, a ZooKeeper browser (Eclipse: ZooKeeper Explorer) shows the file as
> a file in ZK.
> Noticed because we keep an empty synonyms.txt file in the Solr config 
> provided with our product, in case a client would want to use it.
> The workaround is simple, since the file allows comments: just add a comment, 
> so it is not empty.
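[For context on the bug above: ZooKeeper has no file/directory distinction, so 
a downloader must infer "directory" from a znode's children, never from empty 
data. The sketch below is illustrative only, not the actual SolrZkClient code; 
the class and method names are invented for the example.]

```java
// Hypothetical sketch of the decision downconfig has to make. Every znode
// can hold data AND have children, so "directory" must be inferred from
// children. Deciding by empty data is exactly the reported bug: an empty
// synonyms.txt would come down as a folder.
import java.util.List;
import java.util.Map;

public class ZkDownloadSketch {
    // A znode is a directory for download purposes iff it has children.
    static boolean isDirectory(Map<String, List<String>> children, String path) {
        List<String> kids = children.get(path);
        return kids != null && !kids.isEmpty();
    }

    // Classify a znode as "file" or "dir" given the tree and its data.
    static String classify(Map<String, List<String>> children, String path, byte[] data) {
        // Correct: children decide; a zero-byte childless znode is still a file.
        return isDirectory(children, path) ? "dir" : "file";
    }

    public static void main(String[] args) {
        Map<String, List<String>> tree = Map.of(
                "/configs/conf1", List.of("synonyms.txt"),
                "/configs/conf1/synonyms.txt", List.of());
        System.out.println(classify(tree, "/configs/conf1", null));                     // dir
        System.out.println(classify(tree, "/configs/conf1/synonyms.txt", new byte[0])); // file
    }
}
```

The workaround in the report (adding a comment so the file is non-empty) works 
precisely because it makes the data-based check coincide with the correct 
children-based one.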






[JENKINS] Lucene-Solr-Tests-master - Build # 2065 - Still Unstable

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2065/

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.test

Error Message:
Could not load collection from ZK: collection1

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
collection1
at 
__randomizedtesting.SeedInfo.seed([76383D8827D70FE3:FE6C0252892B621B]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1114)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:647)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:128)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:108)
at 
org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.waitTillNodesActive(LeaderFailureAfterFreshStartTest.java:208)
at 
org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.restartNodes(LeaderFailureAfterFreshStartTest.java:173)
at 
org.apache.solr.cloud.LeaderFailureAfterFreshStartTest.test(LeaderFailureAfterFreshStartTest.java:148)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-10821) Write documentation for the autoscaling APIs and policy/preferences syntax for Solr 7.0

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10821?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116868#comment-16116868
 ] 

ASF subversion and git services commented on SOLR-10821:


Commit 80530c14a3e50f78d182859ca69d4519576f9f4b in lucene-solr's branch 
refs/heads/master from [~ctargett]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=80530c1 ]

SOLR-10821: resolve TODOs; copy edits & cleanups; reorder section flow








[jira] [Commented] (SOLR-11198) downconfig downloads empty file as folder

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116856#comment-16116856
 ] 

ASF subversion and git services commented on SOLR-11198:


Commit 53db72c5985fd6d0027b6888683973ae764c2f85 in lucene-solr's branch 
refs/heads/master from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=53db72c ]

SOLR-11198: fix test failures








Re: bin/solr arg parsing duplication

2017-08-07 Thread Jason Gerlowski
Thanks for the background guys.

To echo and build on what Ishan and Erick said above, my own ignorance of
Windows scripting is what initially brought this to mind.  I've had a
handful of JIRAs grind to a halt because of it recently.

I've created SOLR-11206
(https://issues.apache.org/jira/browse/SOLR-11206) for this.  I've
tried to sum up the discussion here in that JIRA description; please
add any corrections if I've misinterpreted anything.  I hope to do
more digging on this in the next day or so.

On Mon, Aug 7, 2017 at 11:52 AM, Erick Erickson  wrote:
> Ditto the pain of working both with the *nix script and the Windows
> scripts. I don't have ready access to Windows machines either so have
> to rely on the kindness of people who do when I need to modify the
> scripts. I think it was one of those things that started out with a
> simple script and each addition was easier to add to the scripts than
> move everything to Java. Until it became a monster.
>
> One point of clarification: I _think_ Anshum meant moving the command
> parsing and all that rot out of the scripts, not the ability to invoke
> the commands themselves.
>
> So if you have the time/energy to take it on, please do create a JIRA...
>
> Best,
> Erick
>
> On Mon, Aug 7, 2017 at 8:40 AM, Ishan Chattopadhyaya
>  wrote:
>> There's https://issues.apache.org/jira/browse/SOLR-7871 which has some
>> relevant discussion of these pain points.
>> Frankly, working with solr.cmd has been one of the toughest things I've had
>> to deal with in last few months (thanks to my inability to work with Windows
>> script).
>>
>> On Mon, Aug 7, 2017 at 10:19 AM, Anshum Gupta 
>> wrote:
>>>
>>> Hi Jason,
>>>
>>> The history behind the scripts is that they were simpler, and were done to
>>> make things easier for end users. Not sure if you have worked with the
>>> 'bootstrap' part of the command that predated these scripts, but the
>>> intention was to move away from those.
>>>
>>> There was an intention to move the code that can be moved to Java, and do
>>> the heavy lifting there, considering that would also mean deduplication of
>>> code between the *nix and Windows scripts, but due to lack of bandwidth,
>>> that was never done.
>>>
>>> It'd be great to get a patch for this and have it move out of the bin
>>> scripts altogether. Feel free to create a JIRA and start working on it.
>>>
>>> In case someone else has more to add, please do.
>>>
>>> Anshum
>>>
>>> On Sun, Aug 6, 2017 at 7:43 PM Jason Gerlowski 
>>> wrote:

 I noticed recently that arg validation/parsing/help-text for the
 "create", "delete", "auth", "zk", etc. commands makes up a huge chunk
 of the (bin/solr) scripts.  (Some 600 lines by a quick count!)

 This is a shame, since that logic is duplicated across two
 platform-specific scripts.

 I'm not familiar with the history of these scripts; is there a reason
 this logic lives here?  I know that some args must be examined before
 we enter Java-land ("--verbose", JVM args, come to mind).  But is
 there a reason the other arguments are parsed/examined there as well?

 If there's not, moving that logic to Java would gain us a few things:

 - removes duplication
 - makes test-writing for this logic possible
 - Java-logic is more accessible/readable to some than bash/Windows-shell.

 Is there anything I'm missing about this logic living in the bin
 scripts?  I'm happy to create a JIRA and do the leg-work for the
 change if this is something we're interested in.  Just wanted to ask
 around before starting, due to my lack of background.

 Thanks for clarification, if anyone has any to offer.

 Best,

 Jason

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org

>>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-11206) Migrate logic from bin/solr scripts to SolrCLI

2017-08-07 Thread Jason Gerlowski (JIRA)
Jason Gerlowski created SOLR-11206:
--

 Summary: Migrate logic from bin/solr scripts to SolrCLI
 Key: SOLR-11206
 URL: https://issues.apache.org/jira/browse/SOLR-11206
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Reporter: Jason Gerlowski
 Fix For: master (8.0)


The {{bin/solr}} and {{bin/solr.cmd}} scripts have taken on a lot of logic that 
would be easier to maintain if it were instead written in Java, for a handful 
of reasons:

* Any logic in the control scripts is duplicated in two places by definition.
* Increasing test coverage of this logic would be much easier if it was written 
in Java.
* Few developers are conversant in both bash and Windows-shell, making editing 
difficult.

Some sections in these scripts are good candidates for migration to Java, and 
this issue should examine any that are brought up.  However, the biggest and 
most obvious candidate is the argument parsing, validation, and usage/help text 
for the commands that don't directly start/stop Solr processes (i.e. the 
"create", "delete", "zk", "auth", and "assert" commands).
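[To make the proposal concrete, the kind of validation that currently lives 
(twice) in bin/solr and bin/solr.cmd could live once in Java, where it is 
unit-testable. The sketch below is illustrative only and does not reflect the 
actual SolrCLI API; the class, method, and message strings are invented.]

```java
// Illustrative sketch: command-name and argument validation done once in
// Java instead of being duplicated across two shell dialects.
import java.util.Arrays;
import java.util.List;

public class ToolArgCheck {
    static final List<String> TOOLS =
            Arrays.asList("create", "delete", "zk", "auth", "assert");

    // Returns an error message for invalid input, or null when valid.
    static String validate(String[] args) {
        if (args.length == 0) {
            return "No command specified; expected one of " + TOOLS;
        }
        if (!TOOLS.contains(args[0])) {
            return "Unknown command '" + args[0] + "'; expected one of " + TOOLS;
        }
        if ("create".equals(args[0]) && args.length < 2) {
            return "'create' requires further arguments, e.g. -c <name>";
        }
        return null; // valid invocation
    }

    public static void main(String[] args) {
        System.out.println(validate(new String[] {}));                    // error message
        System.out.println(validate(new String[] {"create", "-c", "x"})); // null (valid)
    }
}
```

Logic like this is trivially coverable by JUnit, which is the test-coverage 
point made in the bullet list above.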






[jira] [Resolved] (SOLR-10939) JoinQParser gives incorrect results with numeric PointsFields

2017-08-07 Thread Yonik Seeley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonik Seeley resolved SOLR-10939.
-
Resolution: Fixed

> JoinQParser gives incorrect results with numeric PointsFields
> -
>
> Key: SOLR-10939
> URL: https://issues.apache.org/jira/browse/SOLR-10939
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Yonik Seeley
>Priority: Blocker
>  Labels: numeric-tries-to-points
> Fix For: 7.0
>
> Attachments: SOLR-10939.patch
>
>
> If you try to use the {{\{!join\}}} QParser with numeric points fields, you 
> will get silently incorrect results.
> The underlying root cause seems to be that JoinQParser's JoinQuery assumes 
> every field it's dealing with has indexed terms. (AFAICT it won't even work 
> with {{indexed="false" docValues="true"}} Trie fields)






[jira] [Commented] (SOLR-10939) JoinQParser gives incorrect results with numeric PointsFields

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116824#comment-16116824
 ] 

ASF subversion and git services commented on SOLR-10939:


Commit d057bf2279412204fc5b5af16e3d8856393f0f30 in lucene-solr's branch 
refs/heads/branch_7_0 from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d057bf2 ]

SOLR-10939: add point support to join query








[jira] [Commented] (SOLR-10939) JoinQParser gives incorrect results with numeric PointsFields

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116823#comment-16116823
 ] 

ASF subversion and git services commented on SOLR-10939:


Commit 1f7517d71966a3da4a1dbee202ca01967ebf5434 in lucene-solr's branch 
refs/heads/branch_7x from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=1f7517d ]

SOLR-10939: add point support to join query








[jira] [Commented] (SOLR-10939) JoinQParser gives incorrect results with numeric PointsFields

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116822#comment-16116822
 ] 

ASF subversion and git services commented on SOLR-10939:


Commit bd5c09b1eeb61123f3c799fa6428f2202e6d9356 in lucene-solr's branch 
refs/heads/master from [~yo...@apache.org]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd5c09b ]

SOLR-10939: add point support to join query








[jira] [Comment Edited] (SOLR-11200) provide a config to enable disable ConcurrentMergeSchedule.doAutoIOThrottle

2017-08-07 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16115348#comment-16115348
 ] 

Amrit Sarkar edited comment on SOLR-11200 at 8/7/17 4:10 PM:
-

Ah! I don't think this will serve our purpose for bulk indexing; logs:

{code}
mergeScheduler=ConcurrentMergeScheduler: maxThreadCount=5, maxMergeCount=15, 
ioThrottle=false
2017-08-05 09:14:03.005 INFO  (qtp1205044462-19) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][qtp1205044462-19]: updateMergeThreads ioThrottle=false 
targetMBPerSec=10240.0 MB/sec
mergeScheduler=ConcurrentMergeScheduler: maxThreadCount=5, maxMergeCount=15, 
ioThrottle=false
2017-08-05 09:15:51.196 INFO  (qtp1205044462-69) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][qtp1205044462-69]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
2017-08-05 09:15:56.711 INFO  (Lucene Merge Thread #0) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][Lucene Merge Thread #0]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
2017-08-05 09:16:10.752 INFO  (qtp1205044462-17) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][qtp1205044462-17]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
2017-08-05 09:16:18.229 INFO  (Lucene Merge Thread #1) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][Lucene Merge Thread #1]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
2017-08-05 09:16:26.516 INFO  (qtp1205044462-69) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][qtp1205044462-69]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
2017-08-05 09:16:35.551 INFO  (Lucene Merge Thread #2) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][Lucene Merge Thread #2]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
2017-08-05 09:16:38.580 INFO  (qtp1205044462-18) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][qtp1205044462-18]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
2017-08-05 09:16:49.397 INFO  (Lucene Merge Thread #3) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][Lucene Merge Thread #3]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
2017-08-05 09:16:56.630 INFO  (qtp1205044462-15) [c:collection1 s:shard1 
r:core_node2 x:collection1_shard1_replica_n1] o.a.s.u.LoggingInfoStream 
[MergeScheduler][qtp1205044462-15]: updateMergeThreads ioThrottle=false 
targetMBPerSec=20.0 MB/sec
{code}

Note that {{targetMBPerSec}} is initialised to 10 GB/s (10240.0 MB/sec), but 
then falls back to the default 20 MB/s instead of staying at 10 GB/s. Maybe 
{{SolrIndexConfig#buildMergeScheduler}} is not the right place to do it. I will 
look into it more.
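[For context, the configuration this issue proposes would presumably surface in 
solrconfig.xml roughly as below. The {{ioThrottle}} name matches the 
{{ioThrottle=false}} in the logs above, but the exact element names are an 
assumption; the final syntax is whatever the patch settles on.]

```xml
<!-- Hypothetical solrconfig.xml fragment; names to be confirmed
     against the final SOLR-11200 patch. -->
<indexConfig>
  <mergeScheduler class="org.apache.lucene.index.ConcurrentMergeScheduler">
    <int name="maxThreadCount">5</int>
    <int name="maxMergeCount">15</int>
    <bool name="ioThrottle">false</bool>
  </mergeScheduler>
</indexConfig>
```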



[jira] [Commented] (LUCENE-7827) disable "textgrams" when minPrefixChars=0 AnalyzingInfixSuggester

2017-08-07 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116784#comment-16116784
 ] 

Adrien Grand commented on LUCENE-7827:
--

Protected members need javadocs because they can be accessed by users if they 
extend that class. Maybe they could remain private?

> disable "textgrams" when minPrefixChars=0 AnalyzingInfixSuggester 
> --
>
> Key: LUCENE-7827
> URL: https://issues.apache.org/jira/browse/LUCENE-7827
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Mikhail Khludnev
>Priority: Minor
> Attachments: LUCENE-7827.patch, LUCENE-7827.patch, LUCENE-7827.patch, 
> LUCENE-7827.patch
>
>
> The current code allows setting minPrefixChars=0, but it creates an 
> unnecessary {{textgrams}} field, which can add a significant footprint.  
> Bypassing it keeps the existing tests green.
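A minimal sketch of the proposed bypass, assuming a hypothetical helper (the class and method names here are illustrative, not the actual AnalyzingInfixSuggester patch):

```java
// Illustrative only: with minPrefixChars == 0 every query term is already
// long enough to hit the main "text" field directly, so the extra
// "textgrams" field adds index footprint without adding matches.
class TextgramsSketch {
    static boolean shouldIndexTextgrams(int minPrefixChars) {
        return minPrefixChars > 0;
    }
}
```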



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: bin/solr arg parsing duplication

2017-08-07 Thread Erick Erickson
Ditto the pain of working both with the *nix script and the Windows
scripts. I don't have ready access to Windows machines either, so I have
to rely on the kindness of people who do when I need to modify the
scripts. I think it was one of those things that started out as a
simple script, and each addition was easier to bolt onto the scripts than
moving everything to Java. Until it became a monster.

One point of clarification: I _think_ Anshum meant moving the command
parsing and all that rot out of the scripts, not the ability to invoke
the commands themselves.

So if you have the time/energy to take it on, please do create a JIRA...

Best,
Erick

On Mon, Aug 7, 2017 at 8:40 AM, Ishan Chattopadhyaya
 wrote:
> There's https://issues.apache.org/jira/browse/SOLR-7871 which has some
> relevant discussion on these pain points.
> Frankly, working with solr.cmd has been one of the toughest things I've had
> to deal with in the last few months (thanks to my inability to work with Windows
> scripts).
>
> On Mon, Aug 7, 2017 at 10:19 AM, Anshum Gupta 
> wrote:
>>
>> Hi Jason,
>>
>> The history behind the scripts is that they were simpler, and were done to
>> make things easier for end users. Not sure if you have worked with the
>> 'bootstrap' part of the command that predated these scripts, but the
>> intention was to move away from those.
>>
>> The intention was to move whatever code could be moved to Java, and do
>> the heavy lifting there, considering that would also mean deduplication of
>> code between the *nix and Windows scripts, but due to lack of bandwidth,
>> that was never done.
>>
>> It'd be great to get a patch on the same and have this move out of the bin
>> scripts altogether. Feel free to create a JIRA and start working on it.
>>
>> In case someone else has more to add, please do.
>>
>> Anshum
>>
>> On Sun, Aug 6, 2017 at 7:43 PM Jason Gerlowski 
>> wrote:
>>>
>>> I noticed recently that arg validation/parsing/help-text for the
>>> "create", "delete", "auth", "zk", etc. commands makes up a huge chunk
>>> of the (bin/solr) scripts.  (Some 600 lines by a quick count!)
>>>
>>> This is a shame, since that logic is duplicated across two
>>> platform-specific scripts.
>>>
>>> I'm not familiar with the history of these scripts; is there a reason
>>> this logic lives here?  I know that some args must be examined before
>>> we enter Java-land ("--verbose", JVM args, come to mind).  But is
>>> there a reason the other arguments are parsed/examined there as well?
>>>
>>> If there's not, moving that logic to Java would gain us a few things:
>>>
>>> - removes duplication
>>> - makes test-writing for this logic possible
>>> - Java-logic is more accessible/readable to some than bash/Windows-shell.
>>>
>>> Is there anything I'm missing about this logic living in the bin
>>> scripts?  I'm happy to create a JIRA and do the leg-work for the
>>> change if this is something we're interested in.  Just wanted to ask
>>> around before starting, due to my lack of background.
>>>
>>> Thanks for clarification, if anyone has any to offer.
>>>
>>> Best,
>>>
>>> Jason
>>>
>>>
>




Re: bin/solr arg parsing duplication

2017-08-07 Thread Ishan Chattopadhyaya
There's https://issues.apache.org/jira/browse/SOLR-7871 which has some
relevant discussion on these pain points.
Frankly, working with solr.cmd has been one of the toughest things I've had
to deal with in the last few months (thanks to my inability to work with
Windows scripts).

On Mon, Aug 7, 2017 at 10:19 AM, Anshum Gupta 
wrote:

> Hi Jason,
>
> The history behind the scripts is that they were simpler, and were done to
> make things easier for end users. Not sure if you have worked with the
> 'bootstrap' part of the command that predated these scripts, but the
> intention was to move away from those.
>
> The intention was to move whatever code could be moved to Java, and do
> the heavy lifting there, considering that would also mean deduplication of
> code between the *nix and Windows scripts, but due to lack of
> bandwidth, that was never done.
>
> It'd be great to get a patch on the same and have this move out of the bin
> scripts altogether. Feel free to create a JIRA and start working on it.
>
> In case someone else has more to add, please do.
>
> Anshum
>
> On Sun, Aug 6, 2017 at 7:43 PM Jason Gerlowski 
> wrote:
>
>> I noticed recently that arg validation/parsing/help-text for the
>> "create", "delete", "auth", "zk", etc. commands makes up a huge chunk
>> of the (bin/solr) scripts.  (Some 600 lines by a quick count!)
>>
>> This is a shame, since that logic is duplicated across two
>> platform-specific scripts.
>>
>> I'm not familiar with the history of these scripts; is there a reason
>> this logic lives here?  I know that some args must be examined before
>> we enter Java-land ("--verbose", JVM args, come to mind).  But is
>> there a reason the other arguments are parsed/examined there as well?
>>
>> If there's not, moving that logic to Java would gain us a few things:
>>
>> - removes duplication
>> - makes test-writing for this logic possible
>> - Java-logic is more accessible/readable to some than bash/Windows-shell.
>>
>> Is there anything I'm missing about this logic living in the bin
>> scripts?  I'm happy to create a JIRA and do the leg-work for the
>> change if this is something we're interested in.  Just wanted to ask
>> around before starting, due to my lack of background.
>>
>> Thanks for clarification, if anyone has any to offer.
>>
>> Best,
>>
>> Jason
>>
>>
>>

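As a rough illustration of the consolidation discussed in this thread (the class name, command set, and messages are assumptions, not Solr's actual SolrCLI code), the per-command validation could live in a single Java entry point that both launch scripts delegate to:

```java
import java.util.Arrays;
import java.util.List;

// Sketch only: one Java-side validator shared by bin/solr and solr.cmd,
// replacing the duplicated shell/batch argument checks.
class ArgCheckSketch {
    private static final List<String> COMMANDS =
            Arrays.asList("create", "delete", "auth", "zk");

    // Returns an error message for the script to print, or null when the
    // arguments look valid and the launcher may proceed.
    static String validate(String[] args) {
        if (args.length == 0) return "No command given";
        if (!COMMANDS.contains(args[0])) return "Unknown command: " + args[0];
        if (args[0].equals("create") && args.length < 2) {
            return "create requires a collection or core name";
        }
        return null;
    }
}
```

The scripts would still handle the few flags that must be examined before the JVM starts (JVM memory args and the like), as noted earlier in the thread.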

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_141) - Build # 20276 - Unstable!

2017-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/20276/
Java: 64bit/jdk1.8.0_141 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.HealthCheckHandlerTest.testHealthCheckHandlerSolrJ

Error Message:
Error from server at http://127.0.0.1:38385/solr: Host Unavailable: Not in live 
nodes as per zk

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:38385/solr: Host Unavailable: Not in live nodes 
as per zk
at 
__randomizedtesting.SeedInfo.seed([1DA5F95F93F78CDC:B756264BE811C4DD]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:627)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:253)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:242)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:178)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:195)
at 
org.apache.solr.cloud.HealthCheckHandlerTest.testHealthCheckHandlerSolrJ(HealthCheckHandlerTest.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Updated] (SOLR-11090) add Replica.getProperty accessor

2017-08-07 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-11090:
---
Attachment: SOLR-11090.patch

Patch rebased against latest master branch.

> add Replica.getProperty accessor
> 
>
> Key: SOLR-11090
> URL: https://issues.apache.org/jira/browse/SOLR-11090
> Project: Solr
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: SOLR-11090.patch, SOLR-11090.patch
>
>
> {code}
> ?action=ADDREPLICAPROP&...=propertyName=value
> {code}
> and
> {code}
> ?action=ADDREPLICAPROP&...=property.propertyName=value
> {code}
> are equivalent forms for use of the 
> [ADDREPLICAPROP|https://lucene.apache.org/solr/guide/6_6/collections-api.html]
>  collection API action.
> At present within the code only the generic getStr i.e.
> {code}
> replica.getStr("property.propertyName")
> {code}
> is available to obtain a replica property.
> This ticket proposes a {{replica.getProperty(String)}} accessor which 
> supports both equivalent forms i.e.
> {code}
> replica.getProperty("propertyName")
> {code}
> and
> {code}
> replica.getProperty("property.propertyName")
> {code}
> to be used.
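A toy version of the accessor the ticket proposes (a standalone sketch, not the attached patch; the real Replica class stores its properties differently):

```java
import java.util.Map;

// Standalone sketch: normalize both key forms onto the stored
// "property."-prefixed entry, then delegate to the generic string lookup.
class ReplicaSketch {
    private final Map<String, String> props;

    ReplicaSketch(Map<String, String> props) {
        this.props = props;
    }

    String getStr(String key) { // stand-in for Replica.getStr
        return props.get(key);
    }

    String getProperty(String propertyName) { // accepts either form
        String key = propertyName.startsWith("property.")
                ? propertyName
                : "property." + propertyName;
        return getStr(key);
    }
}
```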






[jira] [Updated] (SOLR-11183) why call the API end point /v2 will there ever be a /v3

2017-08-07 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11183:
--
Priority: Major  (was: Blocker)

> why call the API end point /v2 will there ever be a /v3
> ---
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.0
>
>
> The mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> suggests it makes sense to prefix v2 APIs at {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, it makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?






[jira] [Created] (SOLR-11205) Make arbitrary metrics values available for policies

2017-08-07 Thread Noble Paul (JIRA)
Noble Paul created SOLR-11205:
-

 Summary: Make arbitrary metrics values available for policies
 Key: SOLR-11205
 URL: https://issues.apache.org/jira/browse/SOLR-11205
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul
Assignee: Noble Paul


Any variable available in the metrics API should be available for policy 
configurations






[jira] [Comment Edited] (SOLR-11183) why call the API end point /v2 will there ever be a /v3

2017-08-07 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116648#comment-16116648
 ] 

Noble Paul edited comment on SOLR-11183 at 8/7/17 2:28 PM:
---

Yes. The effort required is not really predictable and it may delay the {{7.0}} 
release unnecessarily. +1 to remove the blocker


was (Author: noble.paul):
Yes. It effort required is not really predictable and it may delay {{7.0}} 
release unnecessarily. +1 to remove the blocker

> why call the API end point /v2 will there ever be a /v3
> ---
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.0
>
>
> The mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> suggests it makes sense to prefix v2 APIs at {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, it makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?






[jira] [Commented] (SOLR-11183) why call the API end point /v2 will there ever be a /v3

2017-08-07 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116648#comment-16116648
 ] 

Noble Paul commented on SOLR-11183:
---

Yes. The effort required is not really predictable and it may delay the {{7.0}} 
release unnecessarily. +1 to remove the blocker

> why call the API end point /v2 will there ever be a /v3
> ---
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.0
>
>
> The mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> suggests it makes sense to prefix v2 APIs at {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, it makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?






[jira] [Commented] (SOLR-11183) why call the API end point /v2 will there ever be a /v3

2017-08-07 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116643#comment-16116643
 ] 

Cassandra Targett commented on SOLR-11183:
--

I propose making this not a blocker for 7.0.

I don't see any reason why it wouldn't be possible to add an {{/api}} prefix as 
an _alias_ for {{/v2}} at any time in a 7.x release as long as it's 
back-compatible. Assuming that's the case, I don't think we need to hold up 7.0 
for this.

> why call the API end point /v2 will there ever be a /v3
> ---
>
> Key: SOLR-11183
> URL: https://issues.apache.org/jira/browse/SOLR-11183
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.0
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Blocker
> Fix For: 7.0
>
>
> The mail thread
> http://lucene.472066.n3.nabble.com/v2-API-will-there-ever-be-a-v3-td4340901.html
> suggests it makes sense to prefix v2 APIs at {{/api}} instead of {{/v2}} if we never 
> plan to have a {{/v3}}.
> In principle, it makes total sense.
> The challenge is that it takes a while to change the code and tests to make 
> this work. Should this be a blocker, and should we hold up the release?






[jira] [Updated] (SOLR-11061) Add a spins metric for all directory paths

2017-08-07 Thread Andrzej Bialecki (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-11061:
-
Attachment: SOLR-11061.patch

This patch exposes {{CONTAINER.fs.spins}}, {{CONTAINER.fs.coreRoot.spins}} and 
per-core {{CORE.fs.dataDir.spins}}.

> Add a spins metric for all directory paths
> --
>
> Key: SOLR-11061
> URL: https://issues.apache.org/jira/browse/SOLR-11061
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki 
> Fix For: 7.1
>
> Attachments: SOLR-11061.patch
>
>
> See org.apache.lucene.util.IOUtils.spins. It currently only works for Linux 
> and is used by ConcurrentMergeScheduler to set defaults for maxThreadCount 
> and maxMergeCount.
> We should expose this as a metric for solr.data.home and each core's data 
> dir. One thing to note is that the CMS overrides the value detected by the 
> spins method using the {{lucene.cms.override_spins}} system property. This 
> property is supposed to be for tests, but if it is set then the metrics API 
> should also take that into account.






[JENKINS] Lucene-Solr-Tests-7.0 - Build # 99 - Unstable

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.0/99/

1 tests failed.
FAILED:  org.apache.solr.client.solrj.TestLBHttpSolrClient.testSimple

Error Message:
expected:<3> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<3> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([7EFE42DD3996CE77:464D66231E651AA6]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.client.solrj.TestLBHttpSolrClient.testSimple(TestLBHttpSolrClient.java:170)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13815 lines...]
   [junit4] Suite: org.apache.solr.client.solrj.TestLBHttpSolrClient
   [junit4]   2> Creating dataDir: 

[jira] [Commented] (SOLR-10126) PeerSyncReplicationTest is a flakey test.

2017-08-07 Thread Cao Manh Dat (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116619#comment-16116619
 ] 

Cao Manh Dat commented on SOLR-10126:
-

I'm looking at this test, and I see some failures when REPLICATION.peerSync.errors 
== 1. Here is the case:
- leader and replica receive updates 1 to 4
- replica stops
- replica misses updates 5, 6
- replica starts recovery
+ replica buffers updates 7, 8
+ replica requests versions from the leader
+ replica gets its recent versions, which are 1,2,3,4,7,8
+ at the same time the leader receives update 9, so it returns updates 
from 1 to 9 (for the versions request)
+ replica does peersync and requests updates 5, 6, 9 from the leader
+ replica applies updates 5, 6, 9. Its index does not have updates 7, 8, 
and maxVersionSpecified for the fingerprint is 9, therefore the fingerprint 
comparison fails
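The failure can be modeled with a toy fingerprint (this is not Solr's actual PeerSync or IndexFingerprint code; here a fingerprint is simply the set of index versions up to maxVersion):

```java
import java.util.Set;
import java.util.TreeSet;

// Toy model of the scenario above: the replica applies 5, 6, 9 but 7 and 8
// are still only buffered, so its fingerprint up to maxVersion=9 cannot
// match the leader's.
class PeerSyncSketch {
    static Set<Long> fingerprint(Set<Long> indexVersions, long maxVersion) {
        Set<Long> fp = new TreeSet<>();
        for (long v : indexVersions) {
            if (v <= maxVersion) {
                fp.add(v);
            }
        }
        return fp;
    }
}
```

With the leader's index holding versions 1..9 and the replica's index holding 1..6 and 9, the two fingerprints at maxVersion 9 differ, matching the compare failure described above.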



> PeerSyncReplicationTest is a flakey test.
> -
>
> Key: SOLR-10126
> URL: https://issues.apache.org/jira/browse/SOLR-10126
> Project: Solr
>  Issue Type: Test
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
> Attachments: faillogs.tar.gz
>
>
> Could be related to SOLR-9555, but I will see what else pops up under 
> beasting.






[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_141) - Build # 218 - Still Unstable!

2017-08-07 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/218/
Java: 32bit/jdk1.8.0_141 -server -XX:+UseG1GC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
1 thread leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=15799, name=jetty-launcher-2262-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=15799, name=jetty-launcher-2262-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)
at __randomizedtesting.SeedInfo.seed([16A73E7496002A9A]:0)


FAILED:  
org.apache.lucene.search.suggest.document.TestSuggestField.testRealisticKeys

Error Message:
input automaton is too large: 1001

Stack Trace:
java.lang.IllegalArgumentException: input automaton is too large: 1001
at 
org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1298)
at 
org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
at 
org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
at 
org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
at 
org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
at 
org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
at 
org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
at 
org.apache.lucene.util.automaton.Operations.topoSortStatesRecurse(Operations.java:1306)
at 

[JENKINS] Lucene-Solr-Tests-7.x - Build # 123 - Still Unstable

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/123/

4 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudSearcherWarming.testRepFactor1LeaderStartup

Error Message:
null Live Nodes: [127.0.0.1:47515_solr] Last available state: DocCollection(testRepFactor1LeaderStartup//collections/testRepFactor1LeaderStartup/state.json/4)={"pullReplicas":"0", "replicationFactor":"1", "shards":{"shard1":{"range":"8000-7fff", "state":"active", "replicas":{"core_node2":{"core":"testRepFactor1LeaderStartup_shard1_replica_n1", "base_url":"https://127.0.0.1:47515/solr", "node_name":"127.0.0.1:47515_solr", "state":"active", "type":"NRT", "leader":"true", "router":{"name":"compositeId"}, "maxShardsPerNode":"1", "autoAddReplicas":"false", "nrtReplicas":"1", "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: 
null
Live Nodes: [127.0.0.1:47515_solr]
Last available state: 
DocCollection(testRepFactor1LeaderStartup//collections/testRepFactor1LeaderStartup/state.json/4)={
  "pullReplicas":"0",
  "replicationFactor":"1",
  "shards":{"shard1":{
  "range":"8000-7fff",
  "state":"active",
  "replicas":{"core_node2":{
  "core":"testRepFactor1LeaderStartup_shard1_replica_n1",
  "base_url":"https://127.0.0.1:47515/solr",
  "node_name":"127.0.0.1:47515_solr",
  "state":"active",
  "type":"NRT",
  "leader":"true",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0"}
at __randomizedtesting.SeedInfo.seed([9E43F7D66DBC33B2:496BBA0E7C5C7A5C]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:269)
at org.apache.solr.cloud.TestCloudSearcherWarming.testRepFactor1LeaderStartup(TestCloudSearcherWarming.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 

[jira] [Resolved] (LUCENE-7655) Speed up geo-distance queries that match most documents

2017-08-07 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-7655.
--
   Resolution: Fixed
Fix Version/s: 7.1
   master (8.0)

Tests passed. Thanks Maciej!

> Speed up geo-distance queries that match most documents
> ---
>
> Key: LUCENE-7655
> URL: https://issues.apache.org/jira/browse/LUCENE-7655
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Fix For: master (8.0), 7.1
>
>
> I think the same optimization that was applied in LUCENE-7641 would also work 
> with geo-distance queries?
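
[Editorial note: the issue above only names the optimization by reference to LUCENE-7641. A hedged, self-contained illustration of the general "match most documents" idea, i.e. that a query matching nearly everything can be answered by marking the few non-matching documents and flipping the set, follows; it uses plain java.util.BitSet and a synthetic predicate, not Lucene's actual BKD-tree implementation.]

```java
import java.util.BitSet;
import java.util.function.IntPredicate;

public class InvertedCollection {
    // Direct collection: set a bit for every matching document.
    static BitSet collectMatches(int maxDoc, IntPredicate matches) {
        BitSet result = new BitSet(maxDoc);
        for (int doc = 0; doc < maxDoc; doc++) {
            if (matches.test(doc)) result.set(doc);
        }
        return result;
    }

    // Inverted collection: mark the (few) NON-matching documents, then flip.
    // When the query matches most documents this touches far fewer bits.
    static BitSet collectByInversion(int maxDoc, IntPredicate matches) {
        BitSet result = new BitSet(maxDoc);
        for (int doc = 0; doc < maxDoc; doc++) {
            if (!matches.test(doc)) result.set(doc);
        }
        result.flip(0, maxDoc);  // complement of the complement = the matches
        return result;
    }

    public static void main(String[] args) {
        // Synthetic "distance" predicate that matches 99% of documents.
        IntPredicate withinDistance = doc -> doc % 100 != 0;
        BitSet direct = collectMatches(1000, withinDistance);
        BitSet inverted = collectByInversion(1000, withinDistance);
        System.out.println(direct.equals(inverted)); // true
        System.out.println(direct.cardinality());    // 990
    }
}
```

In Lucene's real implementation the saving comes from how the space-partitioning tree is visited, not from the bit-set arithmetic itself; the sketch only shows why collecting the complement is equivalent.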



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7655) Speed up geo-distance queries that match most documents

2017-08-07 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116465#comment-16116465
 ] 

ASF GitHub Bot commented on LUCENE-7655:


Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/226


> Speed up geo-distance queries that match most documents
> ---
>
> Key: LUCENE-7655
> URL: https://issues.apache.org/jira/browse/LUCENE-7655
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> I think the same optimization that was applied in LUCENE-7641 would also work 
> with geo-distance queries?






[jira] [Commented] (LUCENE-7655) Speed up geo-distance queries that match most documents

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116463#comment-16116463
 ] 

ASF subversion and git services commented on LUCENE-7655:
-

Commit 5fb800f01819a2bfcebf8ba04fb1fd7d28ba6b23 in lucene-solr's branch 
refs/heads/master from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5fb800f ]

LUCENE-7655: Speed up geo-distance queries that match most documents.

Closes #226


> Speed up geo-distance queries that match most documents
> ---
>
> Key: LUCENE-7655
> URL: https://issues.apache.org/jira/browse/LUCENE-7655
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> I think the same optimization that was applied in LUCENE-7641 would also work 
> with geo-distance queries?






[jira] [Commented] (LUCENE-7655) Speed up geo-distance queries that match most documents

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116462#comment-16116462
 ] 

ASF subversion and git services commented on LUCENE-7655:
-

Commit fdf808475fe7a065bd4ea8b46cfe55129299e2c0 in lucene-solr's branch 
refs/heads/branch_7x from [~jpountz]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fdf8084 ]

LUCENE-7655: Speed up geo-distance queries that match most documents.

Closes #226


> Speed up geo-distance queries that match most documents
> ---
>
> Key: LUCENE-7655
> URL: https://issues.apache.org/jira/browse/LUCENE-7655
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
>
> I think the same optimization that was applied in LUCENE-7641 would also work 
> with geo-distance queries?






[GitHub] lucene-solr pull request #226: LUCENE-7655 Speed up geo-distance queries tha...

2017-08-07 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/226


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Closed] (LUCENE-7918) Give access to members of a composite shape

2017-08-07 Thread Ignacio Vera (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ignacio Vera closed LUCENE-7918.

Lucene Fields:   (was: New)

Thanks for your support! I hope these new features of the library are useful 
(sure for me!).


> Give access to members of a composite shape
> ---
>
> Key: LUCENE-7918
> URL: https://issues.apache.org/jira/browse/LUCENE-7918
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Fix For: 6.6, master (8.0), 7.1
>
> Attachments: LUCENE-7918.patch
>
>
> Hi [~daddywri],
> I hope this is my last point in my wish list. In order to serialize objects I 
> need to access the members of a composite geoshape. This is currently not 
> possible so I was wondering if it is possible to add two more methods to the 
> class GeoCompositeMembershipShape:
> public int size()
> public GeoMembershipShape getShape(int index)
> Thanks,
> Ignacio
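
[Editorial note: a hedged sketch of what the two requested accessors might look like. The GeoMembershipShape interface and composite class below are minimal stand-ins for the real spatial3d types, written only to show the proposed API shape.]

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for the spatial3d membership-shape interface.
interface GeoMembershipShape {
    boolean isWithin(double x, double y, double z);
}

// Minimal stand-in for GeoCompositeMembershipShape with the two
// accessors requested in the issue, so members can be serialized.
class GeoCompositeMembershipShape implements GeoMembershipShape {
    private final List<GeoMembershipShape> shapes = new ArrayList<>();

    void addShape(GeoMembershipShape shape) {
        shapes.add(shape);
    }

    // Requested: number of member shapes in the composite.
    public int size() {
        return shapes.size();
    }

    // Requested: access to an individual member by index.
    public GeoMembershipShape getShape(int index) {
        return shapes.get(index);
    }

    @Override
    public boolean isWithin(double x, double y, double z) {
        for (GeoMembershipShape s : shapes) {
            if (s.isWithin(x, y, z)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        GeoCompositeMembershipShape composite = new GeoCompositeMembershipShape();
        composite.addShape((x, y, z) -> x > 0);
        composite.addShape((x, y, z) -> y > 0);
        System.out.println(composite.size());                         // 2
        System.out.println(composite.getShape(0).isWithin(1, -1, 0)); // true
    }
}
```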






[JENKINS] Lucene-Solr-Tests-master - Build # 2064 - Still Unstable

2017-08-07 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2064/

10 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
No registered leader was found after waiting for 3ms , collection: 
collection1 slice: shard1

Stack Trace:
org.apache.solr.common.SolrException: No registered leader was found after 
waiting for 3ms , collection: collection1 slice: shard1
at __randomizedtesting.SeedInfo.seed([5333974082866834:DB67A89A2C7A05CC]:0)
at org.apache.solr.common.cloud.ZkStateReader.getLeaderRetry(ZkStateReader.java:757)
at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:210)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7918) Give access to members of a composite shape

2017-08-07 Thread Karl Wright (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116433#comment-16116433
 ] 

Karl Wright commented on LUCENE-7918:
-

Thanks again for the contribution!
Everything is now committed.

> Give access to members of a composite shape
> ---
>
> Key: LUCENE-7918
> URL: https://issues.apache.org/jira/browse/LUCENE-7918
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Fix For: 6.6, master (8.0), 7.1
>
> Attachments: LUCENE-7918.patch
>
>
> Hi [~daddywri],
> I hope this is my last point in my wish list. In order to serialize objects I 
> need to access the members of a composite geoshape. This is currently not 
> possible so I was wondering if it is possible to add two more methods to the 
> class GeoCompositeMembershipShape:
> public int size()
> public GeoMembershipShape getShape(int index)
> Thanks,
> Ignacio






[jira] [Resolved] (LUCENE-7918) Give access to members of a composite shape

2017-08-07 Thread Karl Wright (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karl Wright resolved LUCENE-7918.
-
   Resolution: Fixed
Fix Version/s: 6.6
   7.1
   master (8.0)

> Give access to members of a composite shape
> ---
>
> Key: LUCENE-7918
> URL: https://issues.apache.org/jira/browse/LUCENE-7918
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Fix For: master (8.0), 7.1, 6.6
>
> Attachments: LUCENE-7918.patch
>
>
> Hi [~daddywri],
> I hope this is my last point in my wish list. In order to serialize objects I 
> need to access the members of a composite geoshape. This is currently not 
> possible so I was wondering if it is possible to add two more methods to the 
> class GeoCompositeMembershipShape:
> public int size()
> public GeoMembershipShape getShape(int index)
> Thanks,
> Ignacio






[jira] [Commented] (LUCENE-7918) Give access to members of a composite shape

2017-08-07 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16116431#comment-16116431
 ] 

ASF subversion and git services commented on LUCENE-7918:
-

Commit 7d1c7e757668337ec33bc543c9718320fd6974fe in lucene-solr's branch 
refs/heads/branch_7x from [~kwri...@metacarta.com]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7d1c7e7 ]

LUCENE-7918: Revamp the API for composites so that it's generic and useful for 
many kinds of shapes.  Committed (as was LUCENE-7906) on behalf of Ignacio Vera.


> Give access to members of a composite shape
> ---
>
> Key: LUCENE-7918
> URL: https://issues.apache.org/jira/browse/LUCENE-7918
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/spatial3d
>Reporter: Ignacio Vera
>Assignee: Karl Wright
> Attachments: LUCENE-7918.patch
>
>
> Hi [~daddywri],
> I hope this is my last point in my wish list. In order to serialize objects I 
> need to access the members of a composite geoshape. This is currently not 
> possible so I was wondering if it is possible to add two more methods to the 
> class GeoCompositeMembershipShape:
> public int size()
> public GeoMembershipShape getShape(int index)
> Thanks,
> Ignacio





