[JENKINS-EA] Lucene-Solr-BadApples-master-Linux (64bit/jdk-13-ea+12) - Build # 197 - Still Unstable!

2019-04-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-BadApples-master-Linux/197/
Java: 64bit/jdk-13-ea+12 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.NestedShardedAtomicUpdateTest.test

Error Message:
Error from server at http://127.0.0.1:9/collection1: non ok status: 500, 
message:Server Error

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:9/collection1: non ok status: 500, 
message:Server Error
at 
__randomizedtesting.SeedInfo.seed([D5FE563B54FBD121:5DAA69E1FA07BCD9]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:579)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:224)
at 
org.apache.solr.BaseDistributedSearchTestCase.add(BaseDistributedSearchTestCase.java:576)
at 
org.apache.solr.cloud.NestedShardedAtomicUpdateTest.indexDocAndRandomlyCommit(NestedShardedAtomicUpdateTest.java:221)
at 
org.apache.solr.cloud.NestedShardedAtomicUpdateTest.sendWrongRouteParam(NestedShardedAtomicUpdateTest.java:191)
at 
org.apache.solr.cloud.NestedShardedAtomicUpdateTest.test(NestedShardedAtomicUpdateTest.java:55)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[jira] [Commented] (SOLR-13420) Allow Routed Aliases to use Collection Properties instead of core properties

2019-04-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826636#comment-16826636
 ] 

Tomás Fernández Löbbe commented on SOLR-13420:
--

commented here: 
https://issues.apache.org/jira/browse/SOLR-13418?focusedCommentId=16826635

> Allow Routed Aliases to use Collection Properties instead of core properties
> 
>
> Key: SOLR-13420
> URL: https://issues.apache.org/jira/browse/SOLR-13420
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13420.patch
>
>
> The current routed alias code is relying on a core property named 
> routedAliasName to detect when the Routed Alias wrapper URP should be applied 
> to the Distributed Update Request Processor. 
> {code:java}
> #Written by CorePropertiesLocator
> #Sun Mar 03 06:21:14 UTC 2019
> routedAliasName=testalias21
> numShards=2
> collection.configName=_default
> ... etc...
> {code}
> Core properties are not changeable after the core is created, and they are 
> written to the file system for every core. To support a unit test for 
> SOLR-13419 I need to create some legacy formatted collection names, and 
> arrange them into a TRA, but this is impossible because I can't change the 
> core property from the test. There's a TODO dating back to the original TRA 
> implementation in the routed alias code to switch to collection properties 
> instead, so this ticket will address that TODO to support the test required 
> for SOLR-13419.
> Backward compatibility with legacy core-based TRAs and CRAs will of course be 
> maintained. I also expect that this will facilitate more nimble handling 
> of routed aliases with future auto-scaling capabilities, such as possibly 
> detaching and archiving collections to cheaper, slower machines rather than 
> deleting them. (presently such a collection would still attempt to use the 
> routed alias if it received an update even if it were no longer in the list 
> of collections for the alias)
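As an aside on the core.properties format quoted in the description, such a snippet can be read with plain java.util.Properties. This is only a generic sketch of the file format; it is not Solr's CorePropertiesLocator code, and the point is that whatever value is read here is fixed at core creation time:

```java
import java.io.StringReader;
import java.util.Properties;

public class CorePropsDemo {
    public static void main(String[] args) throws Exception {
        // The per-core file written by CorePropertiesLocator, as quoted above.
        String coreProps =
            "routedAliasName=testalias21\n" +
            "numShards=2\n" +
            "collection.configName=_default\n";

        Properties props = new Properties();
        props.load(new StringReader(coreProps));

        // The URP chain decides whether to wrap updates based on this value,
        // but since core properties are write-once, a test cannot flip it later.
        String alias = props.getProperty("routedAliasName");
        System.out.println("routedAliasName=" + alias);
    }
}
```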



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13418) ZkStateReader.PropsWatcher synchronizes on a string value & doesn't track zk version

2019-04-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826635#comment-16826635
 ] 

Tomás Fernández Löbbe commented on SOLR-13418:
--

The intended use of Collection properties is similar to Cluster Properties, to 
transmit simple configuration to components of collections (in fact, a common 
pattern we use is to consider cluster properties as default values when a 
collection property is not present). Since not every node hosts every 
collection, Collection Properties are implemented slightly differently, using 
watchers (and that’s the reason a simple call to 
{{getCollectionProperties(collection)}} doesn’t cache the result). Components 
can set watchers on collection properties and adapt/reconfigure after changes 
(e.g., suppose you have a component that applies backpressure to queries: you can 
change the amount of QPS that the component will allow, without having to 
reload your collection).

I don’t know much about TRA, but looking at your patch, can’t you make the 
URPFactory register a watcher (by calling {{registerCollectionPropsWatcher}}) 
and cache the properties locally? That’s the intended use of the watchers. 
Then, the URP would just query the map that the factory keeps locally (it can be 
passed to the URP in the constructor, or in whatever way you prefer).
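The register-a-watcher-and-cache pattern being suggested can be sketched in plain Java. The PropsWatcher interface below is a hypothetical stand-in for ZkStateReader's registerCollectionPropsWatcher, not the real SolrJ API: the factory registers a listener once and keeps a volatile cached map, and the URP reads from that cache instead of re-fetching properties on every request.

```java
import java.util.Collections;
import java.util.Map;

public class CachedPropsFactory {
    /** Hypothetical stand-in for ZkStateReader's collection-props watcher. */
    interface PropsWatcher {
        void onChange(Map<String, String> newProps);
    }

    // Volatile so URP threads always see the latest snapshot without locking.
    private volatile Map<String, String> cachedProps = Collections.emptyMap();

    /** Called once at factory init; the watcher keeps the local cache fresh. */
    PropsWatcher registerWatcher() {
        return newProps -> cachedProps = Collections.unmodifiableMap(newProps);
    }

    /** What the URP would consult on each update request: a cheap map read. */
    String getProperty(String name) {
        return cachedProps.get(name);
    }

    public static void main(String[] args) {
        CachedPropsFactory factory = new CachedPropsFactory();
        PropsWatcher watcher = factory.registerWatcher();

        // Simulate a collection-properties change arriving from ZooKeeper.
        watcher.onChange(Map.of("routedAliasName", "testalias21"));
        System.out.println(factory.getProperty("routedAliasName"));
    }
}
```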

> ZkStateReader.PropsWatcher synchronizes on a string value & doesn't track zk 
> version
> 
>
> Key: SOLR-13418
> URL: https://issues.apache.org/jira/browse/SOLR-13418
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 8.0, master (9.0)
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>
> While contemplating adding better caching to collection properties to avoid 
> repeated calls to ZK from code that wishes to consult collection properties, 
> I noticed that the existing PropsWatcher class is synchronizing on a string 
> value for the name of a collection. Synchronizing on strings is bad practice, 
> given that you never know if the string has been interned, and therefore 
> someone else might also be synchronizing on the same object without your 
> knowledge, creating contention or even deadlocks. Also, this code doesn't 
> seem to check ZK version information, so it seems possible that out-of-order 
> processing by threads could wind up caching out-of-date data.
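The hazard described above, and the usual remedy, can be sketched in plain Java. This illustrates the general pattern only, not the actual ZkStateReader code: instead of synchronizing on the collection-name string, which may be interned and shared with unrelated code, keep a dedicated private lock object per collection.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class PerCollectionLock {
    // Risky: the name string may be interned, so unrelated code that also
    // synchronizes on the same literal can contend with (or deadlock) this.
    void riskyUpdate(String collectionName, Runnable work) {
        synchronized (collectionName) {
            work.run();
        }
    }

    // Safer: one private lock object per collection name; nothing outside
    // this class can ever synchronize on it.
    private final Map<String, Object> locks = new ConcurrentHashMap<>();

    void safeUpdate(String collectionName, Runnable work) {
        Object lock = locks.computeIfAbsent(collectionName, k -> new Object());
        synchronized (lock) {
            work.run();
        }
    }

    public static void main(String[] args) {
        PerCollectionLock demo = new PerCollectionLock();
        int[] counter = {0};
        demo.safeUpdate("collection1", () -> counter[0]++);
        demo.safeUpdate("collection1", () -> counter[0]++);
        System.out.println(counter[0]); // 2
    }
}
```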






[JENKINS] Lucene-Solr-8.x-MacOSX (64bit/jdk-11.0.2) - Build # 103 - Unstable!

2019-04-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-MacOSX/103/
Java: 64bit/jdk-11.0.2 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.SystemCollectionCompatTest.testBackCompat

Error Message:
Error from server at http://127.0.0.1:56077/solr/.system: Error reading input 
String Can't find resource 'schema.xml' in classpath or '/configs/.system', 
cwd=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/build/solr-core/test/J0

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:56077/solr/.system: Error reading input String 
Can't find resource 'schema.xml' in classpath or '/configs/.system', 
cwd=/Users/jenkins/workspace/Lucene-Solr-8.x-MacOSX/solr/build/solr-core/test/J0
at 
__randomizedtesting.SeedInfo.seed([AD2EA5D1AB38A234:DDDB0678CBF00B42]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.cloud.SystemCollectionCompatTest.setupSystemCollection(SystemCollectionCompatTest.java:104)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:972)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)

[jira] [Commented] (SOLR-12291) Async prematurely reports completed status that causes severe shard loss

2019-04-25 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826596#comment-16826596
 ] 

Lucene/Solr QA commented on SOLR-12291:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
50s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  4m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  4m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  4m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  4m 36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 89m  8s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}102m 39s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.AsyncCallRequestStatusResponseTest |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12291 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967061/SOLR-12291.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / ef79dd5 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | LTS |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/387/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/387/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/387/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Async prematurely reports completed status that causes severe shard loss
> 
>
> Key: SOLR-12291
> URL: https://issues.apache.org/jira/browse/SOLR-12291
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore, SolrCloud
>Reporter: Varun Thacker
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-12291.patch, SOLR-12291.patch, SOLR-12291.patch, 
> SOLR-12291.patch, SOLR-122911.patch
>
>
> The OverseerCollectionMessageHandler sliceCmd assumes only one replica exists 
> on one node.
> When multiple replicas of a slice are on the same node, we only track one 
> replica's async request. This happens because the async requestMap's key is 
> "node_name".
> I discovered this when [~alabax] shared some logs of a restore issue, where 
> the second replica got added before the first replica had completed its 
> restorecore action.
> While looking at the logs I noticed that the overseer never called 
> REQUESTSTATUS for the restorecore action, almost as if it had missed 
> tracking that particular async request.
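The bookkeeping bug described above can be reproduced with a plain map. The key and value strings here are illustrative, not the actual OverseerCollectionMessageHandler fields: when the async request map is keyed by node name, a second replica on the same node silently overwrites the first replica's entry, so its status is never polled.

```java
import java.util.HashMap;
import java.util.Map;

public class AsyncTrackingDemo {
    public static void main(String[] args) {
        // Keyed by node name: two replicas on node1 collapse into one entry.
        Map<String, String> byNode = new HashMap<>();
        byNode.put("node1", "restorecore-replica_n1");
        byNode.put("node1", "restorecore-replica_n2"); // overwrites replica_n1
        System.out.println(byNode.size()); // 1: replica_n1's request is lost

        // Keyed by a unique async request id: both requests stay tracked,
        // so REQUESTSTATUS can be polled for each of them.
        Map<String, String> byRequestId = new HashMap<>();
        byRequestId.put("async-1", "node1/restorecore-replica_n1");
        byRequestId.put("async-2", "node1/restorecore-replica_n2");
        System.out.println(byRequestId.size()); // 2
    }
}
```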






[jira] [Commented] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826554#comment-16826554
 ] 

Lucene/Solr QA commented on SOLR-13414:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
45s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  2m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  2m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  2m 53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 
23s{color} | {color:green} core in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-13414 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12967048/SOLR-13414.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene2-us-west.apache.org 4.4.0-112-generic #135-Ubuntu SMP 
Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / ef79dd5 |
| ant | version: Apache Ant(TM) version 1.9.6 compiled on July 20 2018 |
| Default Java | LTS |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/386/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/386/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |





> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13414.patch, SOLR-13414.patch, 
> before_starting_solr.png, command_prompt.png, luke_out.xml, managed-schema, 
> new_solr-8983-console.log, new_solr.log, solr-8983-console.log, 
> solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> *Summary*
> If the underlying Lucene index has fields defined but no type, SolrSchema 
> fails with an NPE. The index most likely has issues, and it would be better to 
> delete and recreate it. This ticket adds a null check to prevent the NPE, so 
> SolrSchema won't break on a potentially invalid index.
> *Initial Description*
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 

[JENKINS] Lucene-Solr-8.x-Windows (64bit/jdk-9.0.4) - Build # 224 - Unstable!

2019-04-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Windows/224/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
Error from server at http://127.0.0.1:53653/solr/authCollection: Error from 
server at null: Expected mime type application/octet-stream but got text/html. 
Error 401 require authentication. HTTP ERROR 401: Problem accessing 
/solr/authCollection_shard2_replica_n2/select. Reason: require authentication. 
Powered by Jetty // 9.4.14.v20181114

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:53653/solr/authCollection: Error from server at 
null: Expected mime type application/octet-stream but got text/html. 


Error 401 require authentication

HTTP ERROR 401
Problem accessing /solr/authCollection_shard2_replica_n2/select. Reason:
require authentication
Powered by Jetty // 9.4.14.v20181114




at 
__randomizedtesting.SeedInfo.seed([5C03BE52A565B5F1:E06DC8400136368B]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:290)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)

[jira] [Commented] (SOLR-12584) Add basic auth credentials configuration to the Solr exporter for Prometheus/Grafana

2019-04-25 Thread Dwane Hall (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826544#comment-16826544
 ] 

Dwane Hall commented on SOLR-12584:
---

Nice work [~sbillet]!

I can confirm this works in SolrCloud 7.6 as well. Nice pickup! This issue has 
made cluster monitoring somewhat challenging for us, so this is a very welcome 
discovery!

Some slight changes to the default solr-exporter-config.xml are required, but 
authentication is definitely working :)

ERROR - 2019-04-26 09:48:23.921; 
org.apache.solr.prometheus.scraper.SolrScraper; 
net.thisptr.jackson.jq.exception.JsonQueryException: null cannot be parsed as a 
number (.cluster.collections | to_entries | .[] | . as $object | $object.key as 
$collection | $object.value.pullReplicas | tonumber as $value | \{("name"): 
"solr_collections_pull_replicas",("type"): "GAUGE",("help"): "See following 
URL: 
https://lucene.apache.org/solr/guide/collections-api.html#clusterstatus",("label_names"):
 ["collection"],("label_values"): [$collection],("value"): $value}) 

> Add basic auth credentials configuration to the Solr exporter for 
> Prometheus/Grafana  
> --
>
> Key: SOLR-12584
> URL: https://issues.apache.org/jira/browse/SOLR-12584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics, security
>Affects Versions: 7.3, 7.4
>Reporter: Dwane Hall
>Priority: Minor
>  Labels: authentication, metrics, security
> Attachments: lucene-solr.patch
>
>
> The Solr exporter for Prometheus/Grafana provides a useful visual layer over 
> the Solr metrics API for monitoring the state of a Solr cluster. Currently 
> it cannot be configured and used on a secure Solr cluster with the Basic 
> Authentication plugin enabled. The exporter does not provide a mechanism to 
> configure/pass through basic auth credentials when SolrJ requests information 
> from the metrics API endpoints; adding one would be a useful addition for 
> Solr users running a secure Solr instance.
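For illustration only (this uses generic java.net.http and java.util.Base64, not the exporter's actual SolrJ plumbing, and the endpoint and credentials below are hypothetical): attaching Basic Auth to a request amounts to sending a base64-encoded Authorization header, which is the kind of wiring the exporter would need to thread through to its metrics requests.

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class BasicAuthHeaderDemo {
    /** Builds the value of the Authorization header for Basic Auth. */
    static String basicAuth(String user, String password) {
        String token = Base64.getEncoder()
            .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // Hypothetical metrics endpoint and credentials, for illustration.
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8983/solr/admin/metrics"))
            .header("Authorization", basicAuth("solr", "SolrRocks"))
            .build();
        System.out.println(request.headers().firstValue("Authorization").orElse(""));
    }
}
```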






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1318 - Still Failing

2019-04-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1318/

No tests ran.

Build Log:
[...truncated 23468 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2526 links (2067 relative) to 3355 anchors in 253 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-9.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml


[JENKINS] Lucene-Solr-master-Windows (64bit/jdk-12) - Build # 7910 - Unstable!

2019-04-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7910/
Java: 64bit/jdk-12 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.impl.CloudSolrClientTest

Error Message:
ObjectTracker found 4 object(s) that were not released!!! 
[MockDirectoryWrapper, SolrCore, MockDirectoryWrapper, InternalHttpClient] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.core.SolrCore.initSnapshotMetaDataManager(SolrCore.java:517)  
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:968)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1227)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1137)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:835)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.solr.core.SolrCore  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at org.apache.solr.core.SolrCore.<init>(SolrCore.java:1063)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:883)  at 
org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1227)
  at org.apache.solr.core.CoreContainer.create(CoreContainer.java:1137)  at 
org.apache.solr.handler.admin.CoreAdminOperation.lambda$static$0(CoreAdminOperation.java:92)
  at 
org.apache.solr.handler.admin.CoreAdminOperation.execute(CoreAdminOperation.java:360)
  at 
org.apache.solr.handler.admin.CoreAdminHandler$CallInfo.call(CoreAdminHandler.java:396)
  at 
org.apache.solr.handler.admin.CoreAdminHandler.lambda$handleRequestBody$0(CoreAdminHandler.java:188)
  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181)
  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:835)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.lucene.store.MockDirectoryWrapper  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:348)
  at 
org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:509)  
at org.apache.solr.handler.IndexFetcher.fetchLatestIndex(IndexFetcher.java:351) 
 at 
org.apache.solr.handler.ReplicationHandler.doFetch(ReplicationHandler.java:422) 
 at 
org.apache.solr.handler.ReplicationHandler.lambda$setupPolling$13(ReplicationHandler.java:1191)
  at 
java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
  at java.base/java.util.concurrent.FutureTask.runAndReset(FutureTask.java:305) 
 at 
java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:305)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:835)  
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException: 
org.apache.http.impl.client.InternalHttpClient  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:42)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:322)
  at 
org.apache.solr.client.solrj.impl.HttpClientUtil.createClient(HttpClientUtil.java:331)
  at 
org.apache.solr.handler.IndexFetcher.createHttpClient(IndexFetcher.java:230)  
at org.apache.solr.handler.IndexFetcher.<init>(IndexFetcher.java:272)  at 

[JENKINS] Lucene-Solr-Tests-8.x - Build # 168 - Still Unstable

2019-04-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/168/

1 tests failed.
FAILED:  org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth

Error Message:
Expected metric minimums for prefix SECURITY./authentication.: 
{failMissingCredentials=2, authenticated=19, passThrough=9, 
failWrongCredentials=1, requests=31, errors=0}, but got: 
{failMissingCredentials=2, authenticated=17, passThrough=13, totalTime=7270228, 
failWrongCredentials=1, requestTimes=1202, requests=33, errors=0}

Stack Trace:
java.lang.AssertionError: Expected metric minimums for prefix 
SECURITY./authentication.: {failMissingCredentials=2, authenticated=19, 
passThrough=9, failWrongCredentials=1, requests=31, errors=0}, but got: 
{failMissingCredentials=2, authenticated=17, passThrough=13, totalTime=7270228, 
failWrongCredentials=1, requestTimes=1202, requests=33, errors=0}
at 
__randomizedtesting.SeedInfo.seed([670894E27436BC8:BA1EFF5C8310E8B2]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertAuthMetricsMinimums(SolrCloudAuthTestCase.java:129)
at 
org.apache.solr.cloud.SolrCloudAuthTestCase.assertAuthMetricsMinimums(SolrCloudAuthTestCase.java:83)
at 
org.apache.solr.security.BasicAuthIntegrationTest.testBasicAuth(BasicAuthIntegrationTest.java:306)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 

Re: significant lucene benchmark regression: JDK11?

2019-04-25 Thread Uwe Schindler
Maybe do this temporarily, so as not to have two changes at the same time.

Uwe

Am April 25, 2019 9:48:35 PM UTC schrieb Michael McCandless 
:
>Yeah I'm just using the JDK's default in the nightly benchmarks.
>
>Should I override back to the parallel collector?
>
>Mike McCandless
>
>http://blog.mikemccandless.com
>
>
>On Thu, Apr 25, 2019 at 5:44 PM Uwe Schindler  wrote:
>
>> Hi,
>>
>> I am not sure how Mike's benchmarks are setup and if he chooses a
>specific
>> garbage collector.
>>
>> Java 8 defaults to ParallelGC, Java 11 defaults to G1, which may slow
>down
>> up to 10% as it is not optimized for throughput.
>>
>> So to compare, you have to be specific in your GC choices.
>>
>> Uwe
>>
>> Am April 25, 2019 5:57:16 PM UTC schrieb Nicholas Knize
>> >:
>>>
>>> Earlier this week I noticed a significant across the board
>performance
>>> regression on the nightly geo benchmarks
>>> . It appears this
>>> regression can also be seen on other lucene benchmarks
>>> 
>and
>>> appears to correspond to the upgrade to JDK 11.
>>>
>>> Any thoughts?
>>>
>>> Nicholas Knize, Ph.D., GISP
>>> Geospatial Software Guy  |  Elasticsearch
>>> Apache Lucene PMC Member and Committer
>>> nkn...@apache.org
>>>
>>
>> --
>> Uwe Schindler
>> Achterdiek 19, 28357 Bremen
>> https://www.thetaphi.de
>>

--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de

[GitHub] [lucene-solr] noblepaul closed pull request #656: revert making DocValuesTermsCollector & TermsQuery public

2019-04-25 Thread GitBox
noblepaul closed pull request #656: revert making DocValuesTermsCollector & 
TermsQuery public
URL: https://github.com/apache/lucene-solr/pull/656
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [lucene-solr] noblepaul opened a new pull request #656: revert making DocValuesTermsCollector & TermsQuery public

2019-04-25 Thread GitBox
noblepaul opened a new pull request #656: revert making DocValuesTermsCollector 
& TermsQuery public
URL: https://github.com/apache/lucene-solr/pull/656
 
 
   We don't need to make these changes to our local fork






Re: significant lucene benchmark regression: JDK11?

2019-04-25 Thread Michael McCandless
Yeah I'm just using the JDK's default in the nightly benchmarks.

Should I override back to the parallel collector?

Mike McCandless

http://blog.mikemccandless.com


On Thu, Apr 25, 2019 at 5:44 PM Uwe Schindler  wrote:

> Hi,
>
> I am not sure how Mike's benchmarks are setup and if he chooses a specific
> garbage collector.
>
> Java 8 defaults to ParallelGC, Java 11 defaults to G1, which may slow down
> up to 10% as it is not optimized for throughput.
>
> So to compare, you have to be specific in your GC choices.
>
> Uwe
>
> Am April 25, 2019 5:57:16 PM UTC schrieb Nicholas Knize  >:
>>
>> Earlier this week I noticed a significant across the board performance
>> regression on the nightly geo benchmarks
>> . It appears this
>> regression can also be seen on other lucene benchmarks
>>  and
>> appears to correspond to the upgrade to JDK 11.
>>
>> Any thoughts?
>>
>> Nicholas Knize, Ph.D., GISP
>> Geospatial Software Guy  |  Elasticsearch
>> Apache Lucene PMC Member and Committer
>> nkn...@apache.org
>>
>
> --
> Uwe Schindler
> Achterdiek 19, 28357 Bremen
> https://www.thetaphi.de
>


Re: significant lucene benchmark regression: JDK11?

2019-04-25 Thread Uwe Schindler
Hi,

I am not sure how Mike's benchmarks are setup and if he chooses a specific 
garbage collector.

Java 8 defaults to ParallelGC, Java 11 defaults to G1, which may slow down up 
to 10% as it is not optimized for throughput.

So to compare, you have to be specific in your GC choices.

Uwe

Am April 25, 2019 5:57:16 PM UTC schrieb Nicholas Knize :
>Earlier this week I noticed a significant across the board performance
>regression on the nightly geo benchmarks
>. It appears this
>regression can also be seen on other lucene benchmarks
> and
>appears to correspond to the upgrade to JDK 11.
>
>Any thoughts?
>
>Nicholas Knize, Ph.D., GISP
>Geospatial Software Guy  |  Elasticsearch
>Apache Lucene PMC Member and Committer
>nkn...@apache.org

--
Uwe Schindler
Achterdiek 19, 28357 Bremen
https://www.thetaphi.de
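Since the default collector differs between JDK 8 (Parallel) and JDK 11 (G1), a benchmark harness can log which collector the JVM actually picked up. This self-contained sketch reads the registered collector names through the standard management API:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.util.List;
import java.util.stream.Collectors;

/** Prints the garbage collectors active in this JVM, so benchmark runs
 *  on different JDKs can be compared on equal GC footing. */
public class GcReport {

    public static List<String> collectorNames() {
        return ManagementFactory.getGarbageCollectorMXBeans().stream()
                .map(GarbageCollectorMXBean::getName)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // e.g. G1 collectors under the JDK 11 default, Parallel under JDK 8
        collectorNames().forEach(System.out::println);
    }
}
```

Pinning the collector explicitly (e.g. `-XX:+UseParallelGC` on both JDKs) removes the GC variable from the comparison entirely.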

[jira] [Created] (SOLR-13429) HashBasedRouter logs the entire state.json when a slice is not found

2019-04-25 Thread Noble Paul (JIRA)
Noble Paul created SOLR-13429:
-

 Summary: HashBasedRouter logs the entire state.json when a slice 
is not found
 Key: SOLR-13429
 URL: https://issues.apache.org/jira/browse/SOLR-13429
 Project: Solr
  Issue Type: Task
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Noble Paul



{code:java}
protected Slice hashToSlice(int hash, DocCollection collection) {
final Slice[] slices = collection.getActiveSlicesArr();
for (Slice slice : slices) {
  Range range = slice.getRange();
  if (range != null && range.includes(hash)) return slice;
}
throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "No active 
slice servicing hash code " + Integer.toHexString(hash) + " in " + collection);
  }
{code}

Just the collection name should be fine.
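A sketch of the proposed change, with minimal stand-ins for Slice/Range so it runs standalone (and IllegalStateException standing in for SolrException): the exception message carries only the collection name rather than the DocCollection toString, which serializes the whole state.json.

```java
/** Self-contained model of hash-range routing, illustrating the proposed
 *  exception message that names the collection instead of dumping its state. */
public class HashRouting {

    static final class Range {
        final int min, max;
        Range(int min, int max) { this.min = min; this.max = max; }
        boolean includes(int hash) { return hash >= min && hash <= max; }
    }

    static final class Slice {
        final String name;
        final Range range;
        Slice(String name, Range range) { this.name = name; this.range = range; }
    }

    public static Slice hashToSlice(int hash, String collectionName, Slice[] slices) {
        for (Slice slice : slices) {
            if (slice.range != null && slice.range.includes(hash)) return slice;
        }
        // Proposed: log only the collection name, not the entire cluster state.
        throw new IllegalStateException("No active slice servicing hash code "
                + Integer.toHexString(hash) + " in collection: " + collectionName);
    }

    public static void main(String[] args) {
        Slice[] slices = {
            new Slice("shard1", new Range(Integer.MIN_VALUE, -1)),
            new Slice("shard2", new Range(0, Integer.MAX_VALUE))
        };
        System.out.println(hashToSlice(42, "collection1", slices).name); // shard2
    }
}
```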







[jira] [Commented] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-04-25 Thread Scott Blum (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826483#comment-16826483
 ] 

Scott Blum commented on SOLR-13320:
---

+1!

> add a param ignoreDuplicates=true to updates to not overwrite existing docs
> ---
>
> Key: SOLR-13320
> URL: https://issues.apache.org/jira/browse/SOLR-13320
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> Updates should have an option to ignore duplicate documents and drop them 
> when {{ignoreDuplicates=true}} is specified.
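The requested semantics amount to add-if-absent at the index level. A minimal in-memory sketch of the intended behavior (a toy map, not Solr's actual update path):

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** In-memory illustration of ignoreDuplicates=true: an incoming document
 *  whose id already exists is dropped instead of overwriting. */
public class IgnoreDuplicates {

    private final Map<String, String> index = new LinkedHashMap<>();

    /** Returns true if the doc was added, false if dropped as a duplicate. */
    public boolean add(String id, String doc, boolean ignoreDuplicates) {
        if (ignoreDuplicates) {
            return index.putIfAbsent(id, doc) == null;
        }
        index.put(id, doc); // default behavior: last write wins
        return true;
    }

    public String get(String id) { return index.get(id); }

    public static void main(String[] args) {
        IgnoreDuplicates idx = new IgnoreDuplicates();
        idx.add("1", "first", true);
        boolean added = idx.add("1", "second", true);
        System.out.println(added + " " + idx.get("1")); // false first
    }
}
```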






Re: Improve performance of FST Arc traversal

2019-04-25 Thread Dawid Weiss
> I'm curious how the hot cache you describe would be maintained and
> accessed.

The only gain I managed to get was with a static precomputed cache,
calculated from a priori data (hot paths) or, in the extreme, by
converting the entire FST to a hash map. :) This allows very fast
lookups of any node-arc pair, but is not usable for enumerating
outgoing arcs (for example)... It's really something we did to
accelerate millions and millions of lookups over the FST (in an
otherwise not even Lucene-related data structure). I don't think it'll
be generally useful.
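The static cache described here is essentially a map from (node, arc label) pairs to target nodes. A sketch of the keying scheme, independent of any FST implementation:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch of a precomputed hot-path cache: (node, arc label) -> target node,
 *  packed into one long key to avoid allocating a pair object per lookup. */
public class HotPathCache {

    private final Map<Long, Integer> cache = new HashMap<>();

    private static long key(int node, int label) {
        return ((long) node << 32) | (label & 0xFFFFFFFFL);
    }

    public void put(int node, int label, int target) {
        cache.put(key(node, label), target);
    }

    /** Returns the cached target, or -1 to signal a fall back to the
     *  normal (binary-search or linear) arc scan. */
    public int lookup(int node, int label) {
        return cache.getOrDefault(key(node, label), -1);
    }

    public static void main(String[] args) {
        HotPathCache c = new HotPathCache();
        c.put(0, 'a', 7);
        System.out.println(c.lookup(0, 'a')); // 7
        System.out.println(c.lookup(0, 'b')); // -1 (miss -> normal traversal)
    }
}
```

As noted in the thread, such a cache speeds up point lookups of node-arc pairs but cannot enumerate a node's outgoing arcs, which limits its general usefulness.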

> I know eg that the current implementation has a cache of the
> first 128 "root" arcs, which I guess are presumed to be
> highly-visited, but I think what you are describing is a more dynamic
> cache based on usage?

I don't think it's an assumption of high-visited ratio... it's just a
limit so that we don't cache very extreme root node fan-outs. The
choice of this limit is very arbitrary I'd say...

> Were you thinking that one would maintain an LRU
> cache say? Or was this some offline analysis you did based on access
> patterns?

On patterns. So not generally useful.

Don't get me wrong -- try to experiment with that constant-expanded
array... This can be useful especially for nodes close to the root (if
they're dense)... So definitely worth looking into.

Dawid




[JENKINS] Lucene-Solr-8.x-Linux (64bit/jdk-12) - Build # 465 - Unstable!

2019-04-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/465/
Java: 64bit/jdk-12 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestCloudRecovery2.test

Error Message:
 Timeout waiting to see state for collection=collection1 
:DocCollection(collection1//collections/collection1/state.json/17)={ 
"pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ 
"range":"80000000-7fffffff", "state":"active", "replicas":{ 
"core_node3":{ "core":"collection1_shard1_replica_n1", 
"base_url":"http://127.0.0.1:43889/solr", 
"node_name":"127.0.0.1:43889_solr", "state":"down", "type":"NRT", 
"force_set_state":"false"}, "core_node4":{ 
"core":"collection1_shard1_replica_n2", 
"base_url":"http://127.0.0.1:43265/solr", 
"node_name":"127.0.0.1:43265_solr", "state":"active", "type":"NRT", 
"force_set_state":"false", "leader":"true"}}}}, 
"router":{"name":"compositeId"}, "maxShardsPerNode":"2", 
"autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"} 
Live Nodes: [127.0.0.1:43265_solr, 127.0.0.1:43889_solr] 
Last available state: 
DocCollection(collection1//collections/collection1/state.json/17)={ 
"pullReplicas":"0", "replicationFactor":"2", "shards":{"shard1":{ 
"range":"80000000-7fffffff", "state":"active", "replicas":{ 
"core_node3":{ "core":"collection1_shard1_replica_n1", 
"base_url":"http://127.0.0.1:43889/solr", 
"node_name":"127.0.0.1:43889_solr", "state":"down", "type":"NRT", 
"force_set_state":"false"}, "core_node4":{ 
"core":"collection1_shard1_replica_n2", 
"base_url":"http://127.0.0.1:43265/solr", 
"node_name":"127.0.0.1:43265_solr", "state":"active", "type":"NRT", 
"force_set_state":"false", "leader":"true"}}}}, 
"router":{"name":"compositeId"}, "maxShardsPerNode":"2", 
"autoAddReplicas":"false", "nrtReplicas":"2", "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: 
Timeout waiting to see state for collection=collection1 
:DocCollection(collection1//collections/collection1/state.json/17)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"collection1_shard1_replica_n1",
          "base_url":"http://127.0.0.1:43889/solr",
          "node_name":"127.0.0.1:43889_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"},
        "core_node4":{
          "core":"collection1_shard1_replica_n2",
          "base_url":"http://127.0.0.1:43265/solr",
          "node_name":"127.0.0.1:43265_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
Live Nodes: [127.0.0.1:43265_solr, 127.0.0.1:43889_solr]
Last available state: 
DocCollection(collection1//collections/collection1/state.json/17)={
  "pullReplicas":"0",
  "replicationFactor":"2",
  "shards":{"shard1":{
      "range":"80000000-7fffffff",
      "state":"active",
      "replicas":{
        "core_node3":{
          "core":"collection1_shard1_replica_n1",
          "base_url":"http://127.0.0.1:43889/solr",
          "node_name":"127.0.0.1:43889_solr",
          "state":"down",
          "type":"NRT",
          "force_set_state":"false"},
        "core_node4":{
          "core":"collection1_shard1_replica_n2",
          "base_url":"http://127.0.0.1:43265/solr",
          "node_name":"127.0.0.1:43265_solr",
          "state":"active",
          "type":"NRT",
          "force_set_state":"false",
          "leader":"true"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"2",
  "autoAddReplicas":"false",
  "nrtReplicas":"2",
  "tlogReplicas":"0"}
at 
__randomizedtesting.SeedInfo.seed([4A30FC9B74CDB1A9:C264C341DA31DC51]:0)
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:310)
at 
org.apache.solr.cloud.SolrCloudTestCase.waitForState(SolrCloudTestCase.java:288)
at 
org.apache.solr.cloud.TestCloudRecovery2.test(TestCloudRecovery2.java:106)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:567)
at 

[jira] [Commented] (SOLR-13240) UTILIZENODE action results in an exception

2019-04-25 Thread matthew medway (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826461#comment-16826461
 ] 

matthew medway commented on SOLR-13240:
---

I'm also having this problem in version 7.5. I'm not exactly sure how I 
triggered it, but I added two new Solr web nodes to my cluster (3 to 5 nodes 
total); one of them auto-scaled correctly and moved replicas to it, while the 
other just sat there and did nothing.

I can run the command for the new node:

*curl 
'http://localhost:8983/solr/admin/collections?action=UTILIZENODE&node=172.27.9.167:8983_solr'*

from any of the servers, and I receive the same error as you:

*"Operation utilizenode caused 
exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
 Comparison method violates its general contract!"*

I tried to use your patch, but I don't think I applied it correctly. Here were 
my steps:



#patching solr 7.5
mkdir solrbuild
cd solrbuild
apt install ant -y
wget http://archive.apache.org/dist/lucene/solr/7.5.0/solr-7.5.0-src.tgz
tar -xvf solr-7.5.0-src.tgz
#cd 
solr-7.5.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/
#apply the patch
#wget 
'https://issues.apache.org/jira/secure/attachment/12963112/SOLR-13240.patch' -O 
- | patch -p1 --dry-run
nano 
solr-7.5.0/solr/solrj/src/java/org/apache/solr/client/solrj/cloud/autoscaling/MoveReplicaSuggester.java
#update the code
cd solr-7.5.0/
ant ivy-bootstrap
ant compile
cd solr
ant server
tar -cvzf /home/ubuntu/solrbuild/solr-7.5.0.tgz 
/home/ubuntu/solrbuild/solr-7.5.0/solr
#download the file to the servers you need to install it to. it should be about 
160mb

#install the patched version over top of the existing
mkdir -p /home/ubuntu/solrbuild/
chmod 666 /home/ubuntu/solrbuild/
/home/ubuntu/solr-7.5.0/bin/install_solr_service.sh 
/home/ubuntu/solrbuild/solr-7.5.0.tgz -f

What do you guys think? I'm out of ideas and don't really want to move 300+ 
replicas manually.

Thanks!

-Matt

> UTILIZENODE action results in an exception
> --
>
> Key: SOLR-13240
> URL: https://issues.apache.org/jira/browse/SOLR-13240
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.6
>Reporter: Hendrik Haddorp
>Priority: Major
> Attachments: SOLR-13240.patch
>
>
> When I invoke the UTILIZENODE action the REST call fails like this after it 
> moved a few replicas:
> {
>   "responseHeader":{
> "status":500,
> "QTime":40220},
>   "Operation utilizenode caused 
> exception:":"java.lang.IllegalArgumentException:java.lang.IllegalArgumentException:
>  Comparison method violates its general contract!",
>   "exception":{
> "msg":"Comparison method violates its general contract!",
> "rspCode":-1},
>   "error":{
> "metadata":[
>   "error-class","org.apache.solr.common.SolrException",
>   "root-error-class","org.apache.solr.common.SolrException"],
> "msg":"Comparison method violates its general contract!",
> "trace":"org.apache.solr.common.SolrException: Comparison method violates its general contract!\n\tat
> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:274)\n\tat
> org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:246)\n\tat
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:734)\n\tat
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:715)\n\tat
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:496)\n\tat
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)\n\tat
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)\n\tat
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)\n\tat
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)\n\tat
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)\n\tat
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)\n\tat

Re: Improve performance of FST Arc traversal

2019-04-25 Thread Michael Sokolov
Hi Dawid,

The heuristic I used was to encode using the direct-array approach
when more than 1/4 of the array indices would be filled (i.e.,
(max-label - min-label) / num-labels < 4), and otherwise to use the
existing packed array encoding. I only applied the direct encoding when we
previously would have used the fixed-size arc arrays *and* this
density heuristic was met. I experimented with other "load factors,"
and it's true that the sweet spot varies, but that seemed to lead to a
good compromise across a variety of test cases.
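
For readers following along, the heuristic above can be sketched roughly like this (all names here are illustrative, not Lucene's actual FST internals):

```java
// Sketch of the density heuristic described above (illustrative names,
// not Lucene's actual FST code): direct addressing is chosen only when
// at least 1/4 of the slots in the label span would be occupied, and only
// where the existing code would already have used a fixed-size arc array.
public class ArcEncodingChoice {

    enum Encoding { LIST, BINARY_SEARCH_ARRAY, DIRECT_ARRAY }

    static Encoding choose(int minLabel, int maxLabel, int numLabels,
                           boolean wouldUseFixedArray) {
        if (!wouldUseFixedArray) {
            return Encoding.LIST; // variable-length list encoding
        }
        int span = maxLabel - minLabel + 1; // slots a direct array would need
        // "more than 1/4 of the array indices filled": span / numLabels < 4
        if (span < numLabels * 4) {
            return Encoding.DIRECT_ARRAY;
        }
        return Encoding.BINARY_SEARCH_ARRAY;
    }
}
```

With this rule, five labels spanning 'a'..'e' would get the direct array, while five labels spread across all 256 byte values would stay with binary search.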

I'm curious how the hot cache you describe would be maintained and
accessed. I know, e.g., that the current implementation has a cache of
the first 128 "root" arcs, which I guess are presumed to be highly
visited, but I think what you are describing is a more dynamic cache
based on usage? Were you thinking that one would maintain, say, an LRU
cache? Or was this some offline analysis you did based on access
patterns?
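
One way such a dynamic cache might look (a minimal sketch under the assumption of an LRU policy; none of these names come from Lucene):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative "hot path" cache: an LRU map from (node, arc label) to the
// target node, consulted before falling back to the normal arc scan or
// binary search. This is a sketch, not Lucene code.
public class HotArcCache {

    // Encode (node, label) as a single long key: node in the high bits,
    // label (assumed to fit in 16 bits) in the low bits.
    private static long key(long node, int label) {
        return (node << 16) | (label & 0xFFFF);
    }

    private final Map<Long, Long> lru;

    public HotArcCache(int maxEntries) {
        // accessOrder=true makes LinkedHashMap behave as an LRU cache.
        this.lru = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, Long> eldest) {
                return size() > maxEntries;
            }
        };
    }

    // Returns the cached target node, or null on a cache miss.
    public Long get(long node, int label) {
        return lru.get(key(node, label));
    }

    public void put(long node, int label, long targetNode) {
        lru.put(key(node, label), targetNode);
    }
}
```

A real implementation would also need a policy for when to populate the cache (e.g. only on paths that miss repeatedly), which is exactly the open question in this thread.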

On Thu, Apr 25, 2019 at 4:32 PM Dawid Weiss  wrote:
>
> Hi Mike,
>
> My experience tells me that in practice it's really difficult to tell
> which nodes should be expanded (where this "cost" of binary lookup
> would significantly outweigh a direct offset jump). I had some luck
> in speeding up (very intensive) lookups by creating a hash of [node,
> arc label] => node for those paths which were frequently accessed...
> perhaps such a "hot path" cache would be better (compared to static
> expansion of all outgoing arcs)?
>
> Dawid
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Improve performance of FST Arc traversal

2019-04-25 Thread Dawid Weiss
Hi Mike,

My experience tells me that in practice it's really difficult to tell
which nodes should be expanded (where this "cost" of binary lookup
would significantly outweigh a direct offset jump). I had some luck
in speeding up (very intensive) lookups by creating a hash of [node,
arc label] => node for those paths which were frequently accessed...
perhaps such a "hot path" cache would be better (compared to static
expansion of all outgoing arcs)?

Dawid

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Improve performance of FST Arc traversal

2019-04-25 Thread Michael Sokolov
I've been experimenting with a new FST encoding, and the performance
gains are exciting on FST-intensive benchmarks like the suggesters
and PKLookup in luceneutil. In our production system we see some
gains in regular search performance as well, although these are modest
since FST lookup is not a major component of our costs there. The
gains vary a lot depending on the workload and the makeup of the terms
encoded in the FST, but for example I've seen as much as a 2x speedup
in Fuzzy Suggester performance. I'd like to open an issue to discuss
further, and share a PR, but here's a quick summary of what the
proposed change is:

FST is basically a graph composed of Arcs. Today, outgoing Arcs of a
given Arc can be encoded in two ways: either as a list of
variable-length-encoded Arcs, or as an array of fixed-length-encoded
Arcs. When seeking in the FST (eg looking up a term), one matches
successive characters against the Arc labels in the graph. If Arcs are
encoded in the list format, we scan each item in the list to find a
matching label, terminating early since they are ordered (by label).
When Arcs are in the array format, we use a binary search to find an
Arc with a matching label. The array format is used when there is a
relatively large number of Arcs, more so nearer the start of the FST
graph. The new "direct array" encoding stores outgoing Arcs in a
full-sized array big enough to cover the complete span of outgoing
labels, so we can address directly by label and avoid the binary
search. Generally such an array will have some gaps, so this
fundamentally offers a space/time tradeoff.
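
The lookup difference between the two array formats can be sketched as follows (illustrative, not the actual Lucene implementation; here `direct` uses -1 to mark gap slots):

```java
// Sketch contrasting the two array lookups described above (illustrative
// names, not Lucene's code). `labels` is sorted for the packed format;
// `direct` has one slot per label in [minLabel, maxLabel], -1 for gaps.
public class ArcLookup {

    // Packed array format: binary search over the sorted labels.
    static int findPacked(int[] labels, int label) {
        int lo = 0, hi = labels.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (labels[mid] < label) lo = mid + 1;
            else if (labels[mid] > label) hi = mid - 1;
            else return mid; // arc index
        }
        return -1; // no arc with this label
    }

    // Direct array format: address by label, no search at all.
    static int findDirect(int[] direct, int minLabel, int label) {
        int slot = label - minLabel;
        if (slot < 0 || slot >= direct.length) return -1;
        return direct[slot]; // -1 marks a gap
    }
}
```

The direct form trades the O(log n) search for a single subtraction and array read, at the cost of storing the gap slots.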

Unless there is some strenuous objection (FST must not be touched!),
I'll open an issue soon and post a patch for comments.

-Mike

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: significant lucene benchmark regression: JDK11?

2019-04-25 Thread Michael Sokolov
Strangely, LatLonShape seems to move in the opposite direction, or was
that due to a known functional change?

On Thu, Apr 25, 2019 at 3:33 PM Robert Muir  wrote:
>
> looks to me like the default garbage collector may play a part in
> this? look at JIT/gc times
>
> https://home.apache.org/~mikemccand/lucenebench/indexing.html
>
> On Thu, Apr 25, 2019 at 1:57 PM Nicholas Knize  wrote:
> >
> > Earlier this week I noticed a significant across the board performance 
> > regression on the nightly geo benchmarks. It appears this regression can 
> > also be seen on other lucene benchmarks and appears to correspond to the 
> > upgrade to JDK 11.
> >
> > Any thoughts?
> >
> > Nicholas Knize, Ph.D., GISP
> > Geospatial Software Guy  |  Elasticsearch
> > Apache Lucene PMC Member and Committer
> > nkn...@apache.org
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: significant lucene benchmark regression: JDK11?

2019-04-25 Thread Robert Muir
looks to me like the default garbage collector may play a part in
this? look at JIT/gc times

https://home.apache.org/~mikemccand/lucenebench/indexing.html

On Thu, Apr 25, 2019 at 1:57 PM Nicholas Knize  wrote:
>
> Earlier this week I noticed a significant across the board performance 
> regression on the nightly geo benchmarks. It appears this regression can also 
> be seen on other lucene benchmarks and appears to correspond to the upgrade 
> to JDK 11.
>
> Any thoughts?
>
> Nicholas Knize, Ph.D., GISP
> Geospatial Software Guy  |  Elasticsearch
> Apache Lucene PMC Member and Committer
> nkn...@apache.org

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] [lucene-solr] risdenk commented on issue #655: SOLR-13414: SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread GitBox
risdenk commented on issue #655: SOLR-13414: SolrSchema - Avoid NPE if Luke 
returns field with no type defined
URL: https://github.com/apache/lucene-solr/pull/655#issuecomment-486786464
 
 
   FYI @joel-bernstein 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Fix license link on project website

2019-04-25 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Thanks Jan for noticing and offering to fix! A long time ago I also noticed the 
not-all-green status but, beyond opening 
https://issues.apache.org/jira/browse/LUCENE-7829, never got to do anything else 
about it ...

Christine

From: dev@lucene.apache.org  At: 04/23/19 16:12:16  To: dev@lucene.apache.org
Subject: Fix license link on project website

Apache project sites are required to include certain content, and here is the 
check result for Lucene:
https://whimsy.apache.org/site/project/lucene with one yellow check.

To make it green I'm going to fix the License link on the site from 
  http://www.apache.org/licenses/LICENSE-2.0 
to 
  https://www.apache.org/licenses/

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Attachment: SOLR-13414.patch

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13414.patch, SOLR-13414.patch, 
> before_starting_solr.png, command_prompt.png, luke_out.xml, managed-schema, 
> new_solr-8983-console.log, new_solr.log, solr-8983-console.log, 
> solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> *Summary*
> If the underlying Lucene index has fields defined but no type, SolrSchema 
> fails with an NPE. The index most likely has issues, and it would be better 
> to delete and recreate it. This ticket adds a null check to prevent the NPE 
> so that Solr won't break on a potentially invalid index.
> *Initial Description*
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> 

[GitHub] [lucene-solr] risdenk opened a new pull request #655: SOLR-13414: SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread GitBox
risdenk opened a new pull request #655: SOLR-13414: SolrSchema - Avoid NPE if 
Luke returns field with no type defined
URL: https://github.com/apache/lucene-solr/pull/655
 
 
   



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



significant lucene benchmark regression: JDK11?

2019-04-25 Thread Nicholas Knize
Earlier this week I noticed a significant across-the-board performance
regression on the nightly geo benchmarks. It appears this regression
can also be seen on other Lucene benchmarks and appears to correspond
to the upgrade to JDK 11.

Any thoughts?

Nicholas Knize, Ph.D., GISP
Geospatial Software Guy  |  Elasticsearch
Apache Lucene PMC Member and Committer
nkn...@apache.org


[jira] [Created] (SOLR-13428) Take the WARN message out of the logs when optimizing.

2019-04-25 Thread Erick Erickson (JIRA)
Erick Erickson created SOLR-13428:
-

 Summary: Take the WARN message out of the logs when optimizing.
 Key: SOLR-13428
 URL: https://issues.apache.org/jira/browse/SOLR-13428
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Erick Erickson
Assignee: Erick Erickson


I think this is both unnecessary and produces unnecessary angst. Users can no 
longer get themselves into a situation where they have oversize segments unless 
they take explicit action. And since the big red "optimize" button is gone, we 
can reasonably expect that they've at least read the ref guide to even know 
there's an optimize option that produces oversize segments.

Also, update the ref guide, particularly the "Index Replication" section where 
it mentions optimization.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13328) HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates connection

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13328?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13328:

Fix Version/s: (was: 8.0.1)
   (was: 8.1)

> HostnameVerifier in HttpClientBuilder is ignored when HttpClientUtil creates 
> connection
> ---
>
> Key: SOLR-13328
> URL: https://issues.apache.org/jira/browse/SOLR-13328
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.0
>Reporter: jefferyyuan
>Priority: Minor
>
> In SolrHttpClientBuilder, we can configure a lot of things including 
> HostnameVerifier.
> We have code like below:
> HttpClientUtil.setHttpClientBuilder(new CommonNameVerifierClientConfigurer());
> CommonNameVerifierClientConfigurer will set our own HostnameVerifier, which 
> checks the subject DN name.
> But this doesn't work: when we create the SSLConnectionSocketFactory in 
> HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), we never 
> check or use the HostnameVerifier from SolrHttpClientBuilder.
> The fix would be very simple: in 
> HttpClientUtil.DefaultSchemaRegistryProvider.getSchemaRegistry(), if the 
> HostnameVerifier in SolrHttpClientBuilder is not null, use it; otherwise keep 
> the same logic as before.
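
A minimal sketch of that fallback (names assumed for illustration; this is not the actual HttpClientUtil code):

```java
import javax.net.ssl.HostnameVerifier;

// Sketch of the proposed fix (illustrative names, not the actual
// HttpClientUtil code): prefer the HostnameVerifier configured on the
// builder; fall back to the previous default behavior when none was set.
public class VerifierChoice {

    static HostnameVerifier choose(HostnameVerifier fromBuilder,
                                   HostnameVerifier previousDefault) {
        // If the SolrHttpClientBuilder supplied a verifier, use it;
        // otherwise keep the same logic as before.
        return fromBuilder != null ? fromBuilder : previousDefault;
    }
}
```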



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Description: 
*Summary*
If the underlying Lucene index has fields defined but no type, SolrSchema fails 
with an NPE. The index most likely has issues, and it would be better to delete 
and recreate it. This ticket adds a null check to prevent the NPE so that Solr 
won't break on a potentially invalid index.
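
A minimal sketch of the null-check idea (the class and method names are assumptions for illustration, not the actual SolrSchema code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch of the null check this ticket describes: skip fields
// for which Luke reported no type, instead of dereferencing a null type
// later and throwing an NPE. Names here are assumptions, not Solr code.
public class LukeFieldFilter {

    // fieldToType: field name -> type name as returned by the Luke handler;
    // a null value models a field with no type defined.
    static Map<String, String> typedFieldsOnly(Map<String, String> fieldToType) {
        Map<String, String> result = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : fieldToType.entrySet()) {
            if (e.getValue() == null) {
                continue; // no type defined: skip rather than NPE later
            }
            result.put(e.getKey(), e.getValue());
        }
        return result;
    }
}
```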

*Initial Description*
When attempting to create a JDBC sql query against a large collection (400m + 
records) we get a null error.

After [initial discussion in 
solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
 I have been asked to open this ticket - The exception thrown does not provide 
sufficient detail to understand the underlying problem. It is thought to be an 
issue with the schema not initialising correctly. 

Attached is the managed-schema after a downconfig.

Stack trace from email thread:

*Solr Admin UI Logging*
{code:java}
java.io.IOException: Failed to execute sqlQuery 'select id from document limit 
10' against JDBC connection 'jdbc:calcitesolr:'.
Error while executing SQL "select id from document limit 10": null
at 
org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
at 
org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
at 
org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
at 
org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
at 
org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
at 
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
at 
org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
at 
org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
at org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
at 
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
at 
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
at 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
at 
org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at 
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
at org.eclipse.jetty.server.Server.handle(Server.java:502)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:364)
at 
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
at 
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)
 

[jira] [Commented] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread David Barnett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826275#comment-16826275
 ] 

David Barnett commented on SOLR-13414:
--

Thank you


> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> 

[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Priority: Minor  (was: Major)

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Minor
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket - The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)
>

[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Fix Version/s: master (9.0)
   8.1
   7.7.2

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC SQL query against a large collection (400m+ 
> records), we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
> at 
> 

[jira] [Updated] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-13414:

Summary: SolrSchema - Avoid NPE if Luke returns field with no type defined  
(was: Sql Schema is not initializing)

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC SQL query against a large collection (400m+ 
> records), we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.

[jira] [Assigned] (SOLR-13414) SolrSchema - Avoid NPE if Luke returns field with no type defined

2019-04-25 Thread Kevin Risden (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden reassigned SOLR-13414:
---

Assignee: Kevin Risden

> SolrSchema - Avoid NPE if Luke returns field with no type defined
> -
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Assignee: Kevin Risden
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC SQL query against a large collection (400m+ 
> records), we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.

[JENKINS] Lucene-Solr-Tests-8.x - Build # 167 - Unstable

2019-04-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-8.x/167/

4 tests failed.
FAILED:  
org.apache.solr.analytics.legacy.facet.LegacyFieldFacetCloudTest.meanTest

Error Message:
Error starting up MiniSolrCloudCluster

Stack Trace:
java.lang.Exception: Error starting up MiniSolrCloudCluster
at 
org.apache.solr.cloud.MiniSolrCloudCluster.checkForExceptions(MiniSolrCloudCluster.java:652)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:306)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.build(SolrCloudTestCase.java:212)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:204)
at 
org.apache.solr.analytics.legacy.LegacyAbstractAnalyticsCloudTest.setupCollection(LegacyAbstractAnalyticsCloudTest.java:49)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:972)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.RuntimeException: Jetty/Solr unresponsive
at 
org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:507)
at 

[jira] [Commented] (LUCENE-8566) Deprecate methods in CustomAnalyzer.Builder which take factory classes

2019-04-25 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826265#comment-16826265
 ] 

Tomoko Uchida commented on LUCENE-8566:
---

Hi [~thetaphi],

I opened a sub-issue: https://issues.apache.org/jira/browse/LUCENE-8778

Could you take a look and give me some feedback on this change?

> Deprecate methods in CustomAnalyzer.Builder which take factory classes
> --
>
> Key: LUCENE-8566
> URL: https://issues.apache.org/jira/browse/LUCENE-8566
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Minor
>
> CustomAnalyzer.Builder has methods which take implementation classes as 
> follows.
>  - withTokenizer(Class<? extends TokenizerFactory> factory, String... params)
>  - withTokenizer(Class<? extends TokenizerFactory> factory, 
> Map<String,String> params)
>  - addTokenFilter(Class<? extends TokenFilterFactory> factory, String... 
> params)
>  - addTokenFilter(Class<? extends TokenFilterFactory> factory, 
> Map<String,String> params)
>  - addCharFilter(Class<? extends CharFilterFactory> factory, String... params)
>  - addCharFilter(Class<? extends CharFilterFactory> factory, 
> Map<String,String> params)
> Since the builder also has methods which take service names, the above 
> methods seem unnecessary and a little misleading. Referring to components by 
> symbolic name is preferable to referencing implementation factory classes, 
> but for now users can still write code that depends on implementation classes.
> What do you think about deprecating those methods (adding {{@Deprecated}} 
> annotations) and deleting them in a future release? They are called only by 
> test cases, so deleting them should have no impact on the current 
> lucene/solr codebase.
> If this proposal gains your consent, I will create a patch. (Let me know if 
> I missed some point, and I'll close it.)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826264#comment-16826264
 ] 

Kevin Risden commented on SOLR-13414:
-

[~davebarnett] - we can use this ticket to add the null check. I'll rename the 
title and put a quick patch together.
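The null check being discussed could look roughly like the sketch below. The `collectTypes` helper and the map-shaped Luke response are hypothetical stand-ins, not the actual SolrSchema code; the point is only that a field whose Luke entry carries no type is skipped instead of dereferenced.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LukeFieldTypes {

    // Hypothetical helper: map field name -> type name from a Luke-style
    // response, skipping fields with no "type" entry instead of throwing
    // a NullPointerException on them.
    static Map<String, String> collectTypes(Map<String, Map<String, Object>> lukeFields) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, Map<String, Object>> e : lukeFields.entrySet()) {
            Object type = e.getValue().get("type");
            if (type == null) {
                continue; // field with no type defined: ignore rather than NPE
            }
            out.put(e.getKey(), type.toString());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Object>> fields = new LinkedHashMap<>();
        fields.put("id", Map.of("type", "string"));
        fields.put("broken", new LinkedHashMap<>()); // no type defined
        System.out.println(collectTypes(fields)); // the "broken" field is skipped
    }
}
```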

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC SQL query against a large collection (400m+ 
> records), we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.

[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-04-25 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826256#comment-16826256
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Thanks [~janhoy], I see.
I will read up on the PKI concept in detail then; it looks like the most viable 
solution for what we are trying to achieve here. I will try to put together a 
patch along those lines.

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, CDCR
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}






[jira] [Commented] (LUCENE-8778) Define analyzer SPI names as static final fields and document the names in Javadocs

2019-04-25 Thread Tomoko Uchida (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8778?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826254#comment-16826254
 ] 

Tomoko Uchida commented on LUCENE-8778:
---

To begin with, I only changed char filters for design review.

[https://github.com/apache/lucene-solr/pull/654]

 

The custom Javadoc tag generates HTML like this.

!Screenshot from 2019-04-26 02-17-48.png!

 

 

> Define analyzer SPI names as static final fields and document the names in 
> Javadocs
> ---
>
> Key: LUCENE-8778
> URL: https://issues.apache.org/jira/browse/LUCENE-8778
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Priority: Minor
> Attachments: Screenshot from 2019-04-26 02-17-48.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Each built-in analysis component (factory of tokenizer / char filter / token 
> filter) has an SPI name, but currently this is not documented anywhere.
> The goals of this issue:
>  - Define the SPI name as a static final field for each analysis component so 
> that users can get the component by name (via the {{NAME}} static field). 
> This also provides compile-time safety.
>  - Officially document the SPI names in Javadocs.
>  - Add proper source validation rules to ant {{validate-source-patterns}} 
> target so that we can make sure that all analysis components have correct 
> field definitions and documentation
> (Just for quick reference) we now have:
>  * *19* Tokenizers ({{TokenizerFactory.availableTokenizers()}})
>  * *6* CharFilters ({{CharFilterFactory.availableCharFilters()}})
>  * *118* TokenFilters ({{TokenFilterFactory.availableTokenFilters()}})
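The first goal above can be sketched as follows. This is a self-contained illustration: the nested factory class and the `forName` registry are stand-ins for the real Lucene factories and for lookup methods like `CharFilterFactory.availableCharFilters()`, so treat all names here as assumptions.

```java
import java.util.Map;

public class SpiNameSketch {

    // Stand-in for a built-in char filter factory: the SPI name becomes a
    // compile-time constant instead of a string literal scattered through
    // user code.
    static class HTMLStripCharFilterFactory {
        /** SPI name (the key used for lookup by name). */
        public static final String NAME = "htmlStrip";
    }

    // Stand-in for a by-name factory lookup: resolve a factory class from
    // its SPI name.
    static Class<?> forName(String name) {
        Map<String, Class<?>> registry =
            Map.of(HTMLStripCharFilterFactory.NAME, HTMLStripCharFilterFactory.class);
        Class<?> clazz = registry.get(name);
        if (clazz == null) {
            throw new IllegalArgumentException("Unknown SPI name: " + name);
        }
        return clazz;
    }

    public static void main(String[] args) {
        // A typo in a string literal fails only at runtime; a reference to
        // Factory.NAME cannot be misspelled without a compile error.
        System.out.println(forName(HTMLStripCharFilterFactory.NAME).getSimpleName());
    }
}
```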






[jira] [Updated] (LUCENE-8778) Define analyzer SPI names as static final fields and document the names in Javadocs

2019-04-25 Thread Tomoko Uchida (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8778?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida updated LUCENE-8778:
--
Attachment: Screenshot from 2019-04-26 02-17-48.png

> Define analyzer SPI names as static final fields and document the names in 
> Javadocs
> ---
>
> Key: LUCENE-8778
> URL: https://issues.apache.org/jira/browse/LUCENE-8778
> Project: Lucene - Core
>  Issue Type: Task
>  Components: modules/analysis
>Reporter: Tomoko Uchida
>Priority: Minor
> Attachments: Screenshot from 2019-04-26 02-17-48.png
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Each built-in analysis component (factory of tokenizer / char filter / token 
> filter) has an SPI name, but currently this is not documented anywhere.
> The goals of this issue:
>  - Define the SPI name as a static final field for each analysis component so 
> that users can get the component by name (via the {{NAME}} static field). 
> This also provides compile-time safety.
>  - Officially document the SPI names in Javadocs.
>  - Add proper source validation rules to ant {{validate-source-patterns}} 
> target so that we can make sure that all analysis components have correct 
> field definitions and documentation
> (Just for quick reference) we now have:
>  * *19* Tokenizers ({{TokenizerFactory.availableTokenizers()}})
>  * *6* CharFilters ({{CharFilterFactory.availableCharFilters()}})
>  * *118* TokenFilters ({{TokenFilterFactory.availableTokenFilters()}})






[GitHub] [lucene-solr] mocobeta commented on issue #654: LUCENE-8778: Define analyzer SPI names as static final fields and document the names

2019-04-25 Thread GitBox
mocobeta commented on issue #654: LUCENE-8778: Define analyzer SPI names as 
static final fields and document the names
URL: https://github.com/apache/lucene-solr/pull/654#issuecomment-486757857
 
 
   To begin with, I only changed char filters for design review.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[GitHub] [lucene-solr] mocobeta opened a new pull request #654: LUCENE-8778: Define analyzer SPI names as static final fields and document the names

2019-04-25 Thread GitBox
mocobeta opened a new pull request #654: LUCENE-8778: Define analyzer SPI names 
as static final fields and document the names
URL: https://github.com/apache/lucene-solr/pull/654
 
 
   See: https://issues.apache.org/jira/browse/LUCENE-8778
   
   Changes in this PR:
   - Define SPI names as static final fields.
   - Document SPI names by custom Javadoc tag `@lucene.spi`.
   - Add validation rules for `NAME` field definition and the Javadoc tag.





[jira] [Created] (LUCENE-8778) Define analyzer SPI names as static final fields and document the names in Javadocs

2019-04-25 Thread Tomoko Uchida (JIRA)
Tomoko Uchida created LUCENE-8778:
-

 Summary: Define analyzer SPI names as static final fields and 
document the names in Javadocs
 Key: LUCENE-8778
 URL: https://issues.apache.org/jira/browse/LUCENE-8778
 Project: Lucene - Core
  Issue Type: Task
  Components: modules/analysis
Reporter: Tomoko Uchida


Each built-in analysis component (tokenizer, char filter, or token filter 
factory) has an SPI name, but currently this is not documented anywhere.

The goals of this issue:
 - Define SPI names as static final fields for each analysis component so that 
users can get the component by name (via the {{NAME}} static field). This also 
provides compile-time safety.
 - Officially document the SPI names in Javadocs.
 - Add proper source validation rules to the ant {{validate-source-patterns}} 
target so that we can make sure that all analysis components have correct field 
definitions and documentation.

(Just for quick reference) we now have:
 * *19* Tokenizers ({{TokenizerFactory.availableTokenizers()}})
 * *6* CharFilters ({{CharFilterFactory.availableCharFilters()}})
 * *118* TokenFilters ({{TokenFilterFactory.availableTokenFilters()}})
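The compile-time-safety argument can be sketched in plain Java. The class, registry, and SPI name below are hypothetical stand-ins for illustration, not Lucene's actual API:

```java
import java.util.Map;

public class SpiNameDemo {
    // Stand-in for a factory such as a char filter factory; the proposal is
    // that each factory exposes its SPI name as a static final constant.
    static class HtmlStripFactory {
        public static final String NAME = "htmlStrip"; // hypothetical SPI name
    }

    // Stand-in registry keyed by SPI name, analogous in spirit to
    // CharFilterFactory.availableCharFilters().
    static final Map<String, Class<?>> REGISTRY =
        Map.of(HtmlStripFactory.NAME, HtmlStripFactory.class);

    public static void main(String[] args) {
        // Looking up via the constant is compile-time safe: a typo in
        // HtmlStripFactory.NAME fails to compile, whereas a typo in a bare
        // string literal like "htmlStrip" fails only at runtime.
        Class<?> factory = REGISTRY.get(HtmlStripFactory.NAME);
        System.out.println(factory.getSimpleName());
    }
}
```

The same lookup with a raw string literal would compile even if misspelled, which is exactly the failure mode the {{NAME}} fields are meant to prevent.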






[jira] [Updated] (SOLR-13421) Intermittent error 401 with JSON Facet query to retrieve count all collections

2019-04-25 Thread Edwin Yeo Zheng Lin (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13421?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Edwin Yeo Zheng Lin updated SOLR-13421:
---
Description: 
I am using the below JSON Facet to retrieve the count of all the different 
collections in one query.
  
 
[https://localhost:8983/solr/collection1/select?q=testing=https://localhost:8983/solr/collection1,https://localhost:8983/solr/collection2,https://localhost:8983/solr/collection3,https://localhost:8983/solr/collection4,https://localhost:8983/solr/collection5,https://localhost:8983/solr/collection6=0={categories|https://localhost:8983/solr/collection1/select?q=testing=https://localhost:8983/solr/collection1,https://localhost:8983/solr/collection2,https://localhost:8983/solr/collection3,https://localhost:8983/solr/collection4,https://localhost:8983/solr/collection5,https://localhost:8983/solr/collection6=0=%7Bcategories]
 : \{type : terms,field : content_type,limit : 100}}
  
  
 Previously, in Solr 7.6 and Solr 7.7, this query worked correctly and produced 
the correct output.
  
 {
   "responseHeader":{"zkConnected":true,"status":0,"QTime":24},
   "response":{"numFound":41200,"start":0,"maxScore":12.993215,"docs":[]},
   "facets":{
     "count":41200,
     "categories":{
       "buckets":[
         {"val":"collection1","count":26213},
         {"val":"collection2","count":12075},
         {"val":"collection3","count":1947},
         {"val":"collection4","count":850},
         {"val":"collection5","count":111},
         {"val":"collection6","count":4}]}}}
  
  
 However, in the new Solr 8.0.0, this query only works if we put a single 
collection in the shards parameter (it can be any collection). With 2 
collections, the 'Error 401 require authentication' occurs only about 10% of 
the time.

However, once we put 3 or more collections (again, any of the collections), the 
'Error 401 require authentication' occurs every time.

 
 {
   "responseHeader":{"zkConnected":true,"status":401,"QTime":11},
   "error":{
     "metadata":[
       "error-class","org.apache.solr.client.solrj.impl.Http2SolrClient$RemoteSolrException",
       "root-error-class","org.apache.solr.client.solrj.impl.Http2SolrClient$RemoteSolrException"],
     "msg":"Error from server at null: Expected mime type application/octet-stream but got text/html. Error 401 require authentication. HTTP ERROR 401. Problem accessing /solr/collection6/select. Reason: require authentication",
     "code":401}}
  
 This issue does not occur in Solr 7.6 and Solr 7.7, even though I have set up 
the same authentication for all the versions.
  
  

Below is the format of my security.json:

 {
  "authentication":{
    "blockUnknown": true,
    "class":"solr.BasicAuthPlugin",
    "credentials":{"user1":"hyHXXuJSqcZdNgdSTGUvrQZRpqrYFUQ2ffmlWQ4GUTk= E0w3/2FD+rlxulbPm2G7i9HZqT+2gMBzcyJCcGcMWwA="}},
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "user-role":{"user1":"admin"},
    "permissions":[{"name":"security-edit","role":"admin"}]}}


[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826214#comment-16826214
 ] 

Erick Erickson commented on SOLR-13414:
---

David:

There is no good way to purge fields completely from an index. At the Lucene 
level, each segment is a complete, mini index which includes all fields, so you 
can still have two separate fields County and COUNTY.

Worse, the actual way the field is defined is _also_ per-segment, so switching 
the field type is not guaranteed to work seamlessly. It depends, IIUC, on what 
order the segments get merged in. Say Lucene is merging two segments, one in 
which the field type for County is text and one in which it is string; IDK 
which definition the merged segment respects. We see this all the time when 
someone changes a docValues multiValued option from true to false or 
vice-versa.

Even when there are no obvious errors reported, the response can be inaccurate. 
Take a case of faceting where the underlying field is changed from text to 
string. For segments where the text-based field is faceted, there'll be a 
bucket for each and every word in the field, but for the segments with string, 
there will be exactly one bucket per doc. That scenario doesn't report an error 
at all; the results are just silently wrong.
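The text-vs-string faceting mismatch can be illustrated with a toy example in plain Java (this is an illustration of the bucketing difference, not Solr code):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class FacetMismatchDemo {
    public static void main(String[] args) {
        // The same stored values, faceted two ways.
        List<String> docs = List.of("Santa Clara", "Santa Cruz");

        // "text"-typed segment: the analyzed tokens are faceted,
        // so each word becomes its own bucket.
        Map<String, Integer> textBuckets = new TreeMap<>();
        for (String d : docs)
            for (String tok : d.toLowerCase().split("\\s+"))
                textBuckets.merge(tok, 1, Integer::sum);

        // "string"-typed segment: the whole value is one term,
        // so there is exactly one bucket per value.
        Map<String, Integer> stringBuckets = new TreeMap<>();
        for (String d : docs)
            stringBuckets.merge(d, 1, Integer::sum);

        System.out.println(textBuckets);   // per-word buckets
        System.out.println(stringBuckets); // per-value buckets
    }
}
```

Merging results from segments that disagree on the type mixes these two bucket shapes in one response, with no error to flag it.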

For almost all changes, I strongly recommend you blow away your index and start 
over. Recreate the collection and re-index in SolrCloud terms. There are a few 
exceptions, but they're so tricky that it's often much easier and faster to 
just use a sledgehammer.

And finally, the fact that you have COUNTY and County feels like you have 
either schemaless mode enabled or are using dynamic fields. I strongly urge you 
to disable schemaless before going to production. It's fine for getting 
started, but for prod situations I'd rather take more control over exactly what 
my index looks like.

If at all possible I like to explicitly define every field so things like this 
are caught at index time, but that's not always practical. Dynamic fields are 
preferable to schemaless in that situation though.

Best,
Erick



> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem; it is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at 

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread David Barnett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826210#comment-16826210
 ] 

David Barnett commented on SOLR-13414:
--

Do you want me to open one, Kevin / Joel?


> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem; it is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.

[jira] [Commented] (SOLR-9769) solr stop on a service already stopped should return exit code 0

2019-04-25 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826209#comment-16826209
 ] 

Erick Erickson commented on SOLR-9769:
--

I come down on the side of reporting an error if Solr is already stopped. 

{quote}
What should be considered a success when calling stop? When I am calling stop I 
want to make sure that solr is not running. That is the state i want to 
transition to. So when solr is not running after calling stop that is a 
success. When after calling stop solr is still running that is a failure.
{quote}

The place this argument falls down is when, say, I want to stop a particular 
Solr on a particular port and don't enter the proper port. Example: 'bin/solr 
stop -p 8984' while Solr is running on 8983. I am _not_ in the state I 
intended, which is that my Solr running on port 8983 is stopped. At least when 
an error is reported because no Solr is running on 8984, I have a clue that the 
result I wanted isn't the result I got.

There is quite a bit of room for regularizing the exit codes generally, but 
this particular change is not one I'd encourage.



> solr stop on a service already stopped should return exit code 0
> 
>
> Key: SOLR-9769
> URL: https://issues.apache.org/jira/browse/SOLR-9769
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.3
>Reporter: Jiří Pejchal
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> According to the LSB specification
> https://refspecs.linuxfoundation.org/LSB_4.0.0/LSB-Core-generic/LSB-Core-generic.html#INISCRPTACT
>  running stop on a service already stopped or not running should be 
> considered successful and return code should be 0 (zero).
> Solr currently returns exit code 1:
> {code}
> $ /etc/init.d/solr stop; echo $?
> Sending stop command to Solr running on port 8983 ... waiting up to 180 
> seconds to allow Jetty process 4277 to stop gracefully.
> 0
> $ /etc/init.d/solr stop; echo $?
> No process found for Solr node running on port 8983
> 1
> {code}
> {code:title="bin/solr"}
> if [ "$SOLR_PID" != "" ]; then
> stop_solr "$SOLR_SERVER_DIR" "$SOLR_PORT" "$STOP_KEY" "$SOLR_PID"
>   else
> if [ "$SCRIPT_CMD" == "stop" ]; then
>   echo -e "No process found for Solr node running on port $SOLR_PORT"
>   exit 1
> fi
>   fi
> {code}






[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826206#comment-16826206
 ] 

Kevin Risden commented on SOLR-13414:
-

I think a reasonable fix would be to add a null check before the switch 
statement:

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L103

This would prevent adding the field as an option in SQL and avoid the issue you 
ran into.
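A minimal sketch of that guard, with illustrative names (the actual code and identifiers in SolrSchema.java differ):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class NullCheckSketch {
    // Map a Lucene/Solr field type name to a SQL type; a null type means the
    // field has no resolvable definition (e.g. a ghost field left in old
    // segments), so we omit it from the SQL schema instead of NPE-ing.
    static String sqlTypeFor(String luceneType) {
        if (luceneType == null) {
            return null; // the proposed null check, before any switch on the type
        }
        switch (luceneType) {
            case "string": return "VARCHAR";
            case "plong":  return "BIGINT";
            default:       return "VARCHAR";
        }
    }

    public static void main(String[] args) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("County", "string");
        fields.put("COUNTY", null); // ghost field with no resolvable type
        fields.forEach((name, type) -> {
            String sqlType = sqlTypeFor(type);
            if (sqlType != null) {
                System.out.println(name + " -> " + sqlType);
            }
        });
    }
}
```

With the guard, the ghost COUNTY field is simply skipped and only County shows up as a SQL column, rather than the whole schema initialization failing.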

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem; it is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread David Barnett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826199#comment-16826199
 ] 

David Barnett commented on SOLR-13414:
--

Thanks again

Would this bug spawn an enhancement request for better handling from the SQL 
interface ?

Cheers


> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem; it is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826193#comment-16826193
 ] 

Kevin Risden commented on SOLR-13414:
-

So Luke is looking at the actual index files. I would guess that somewhere 
along the way in Solr, COUNTY was defined and then deleted or changed to 
County. I think there were documents indexed at some point with the field name 
COUNTY. Those documents were deleted, but segments still have SOME reference to 
COUNTY (i.e., they haven't been merged, so the deleted documents aren't fully 
removed).

Long story short - I don't know of a way to delete that field fully from the 
Lucene index under the hood.

The workaround of adding the field back works, but then you could end up with 
documents in either COUNTY or County. I'd actually be curious how Solr handles 
two fields with the same name but different case when querying.

I think the SQL integration could add a check to make sure that we handle this 
case better, but it does highlight an interesting case with the underlying 
index.

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, luke_out.xml, managed-schema, new_solr-8983-console.log, 
> new_solr.log, solr-8983-console.log, solr-8983-console.log, 
> solr-core-7.8.0-SNAPSHOT.jar, solr.log
>
>
> When attempting to create a JDBC sql query against a large collection (400m + 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem; it is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.

[JENKINS] Lucene-Solr-repro-Java11 - Build # 26 - Unstable

2019-04-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro-Java11/26/

[...truncated 29 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1831/consoleText

[repro] Revision: 48dc020ddaf0b0911012b4d9b77d859b2af3d3ae

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=SpellCheckCollatorTest 
-Dtests.method=testEstimatedHitCounts -Dtests.seed=115D79AC86D20649 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ky -Dtests.timezone=US/Michigan -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.method=testSplitWithChaosMonkey -Dtests.seed=115D79AC86D20649 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=gd-GB -Dtests.timezone=America/Indiana/Vincennes 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] Repro line:  ant test  -Dtestcase=ShardSplitTest 
-Dtests.seed=115D79AC86D20649 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=gd-GB -Dtests.timezone=America/Indiana/Vincennes 
-Dtests.asserts=true -Dtests.file.encoding=US-ASCII

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
ef79dd548d410dde90235b56fe6d7ad5adb351f3
[repro] git fetch
[repro] git checkout 48dc020ddaf0b0911012b4d9b77d859b2af3d3ae

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   SpellCheckCollatorTest
[repro]   ShardSplitTest
[repro] ant compile-test

[...truncated 3309 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.SpellCheckCollatorTest|*.ShardSplitTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=115D79AC86D20649 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ky -Dtests.timezone=US/Michigan -Dtests.asserts=true 
-Dtests.file.encoding=US-ASCII

[...truncated 77226 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.api.collections.ShardSplitTest
[repro]   2/5 failed: org.apache.solr.spelling.SpellCheckCollatorTest
[repro] git checkout ef79dd548d410dde90235b56fe6d7ad5adb351f3

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread David Barnett (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826184#comment-16826184
 ] 

David Barnett commented on SOLR-13414:
--

Hi Kevin

Running the second command with a different output format looks like this:


{code}
"COMMODITY3": {
  "type": "string",
  "schema": "I-SDU-OF-l",
  "index": "ITS---OF--",
  "docs": 369984
},
"COUNTY": {
  "type": null,
  "schema": "--"
},
"CRS_Name": {
  "type": "string",
  "schema": "I-SDU-OF-l",
  "index": "ITS---OF--",
  "docs": 1
},
{code}


Yes, we have many indexer processes that could refer to the same field in 
upper or lower case (if we didn't spot it in the modelling at some point).

So to remedy this, do I just remove the field with the null type? I tried 
through the Admin UI but got a message saying "error processing commands".

As I say, I was not able to delete the field, but I was able to add it again 
with the type set to string, and the SQL now works!

However, in this scenario it would be useful to understand the best way to 
remove this malformed field definition.

Really appreciate all your help with this!

Dave




[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826170#comment-16826170
 ] 

Kevin Risden commented on SOLR-13414:
-

In the output I noticed something interesting:

There is both COUNTY and County - the same field name with different case. The 
managed-schema attached previously only has County in it.

So it looks like docs were indexed with different-case field names? 
Reimporting would force all docs to have the right field name definition, I 
think, which is why you wouldn't see this issue after recreating the index.
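
If it helps, collisions like COUNTY/County can be spotted mechanically. Here is a small standalone helper (hypothetical, not part of Solr; the class and method names are made up) that groups field names differing only by case:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

public class CaseCollisionCheck {

    // Group field names by their lower-cased form and keep only the groups
    // that contain more than one spelling, i.e. case-only collisions.
    static Map<String, List<String>> caseCollisions(Collection<String> fieldNames) {
        Map<String, List<String>> byLower = new LinkedHashMap<>();
        for (String f : fieldNames) {
            byLower.computeIfAbsent(f.toLowerCase(Locale.ROOT), k -> new ArrayList<>()).add(f);
        }
        byLower.values().removeIf(group -> group.size() < 2);
        return byLower;
    }

    public static void main(String[] args) {
        List<String> fields = List.of("COMMODITY3", "COUNTY", "County", "CRS_Name");
        System.out.println(caseCollisions(fields)); // {county=[COUNTY, County]}
    }
}
```

Feeding it the field names from the Luke output would flag COUNTY/County before they ever reach the SQL layer.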


[jira] [Comment Edited] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826154#comment-16826154
 ] 

Kevin Risden edited comment on SOLR-13414 at 4/25/19 3:30 PM:
--

[~davebarnett] - Can you run this query and share the results (it shouldn't 
have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match what the output of the following is (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78


was (Author: risdenk):
[~davebarnett] - Can you run this query in your browser and share the results 
(it shouldn't have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match what the output of the following is (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78


[jira] [Updated] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread David Barnett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Barnett updated SOLR-13414:
-
Attachment: luke_out.xml

Hi Kevin

The command ran just fine. I output it as XML and attached it here as 
luke_out.xml.

Here is a clip from the output showing COUNTY; I've also taken CRS_Name (to 
show the difference). Is COUNTY missing a type definition? It should be 
string.



{code:xml}
<lst name="COUNTY">
  <str name="schema">--</str>
</lst>
<lst name="CRS_Name">
  <str name="type">string</str>
  <str name="schema">I-SDU-OF-l</str>
  <str name="index">ITS---OF--</str>
  <int name="docs">1</int>
</lst>
{code}


[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826158#comment-16826158
 ] 

Kevin Risden commented on SOLR-13414:
-

Assuming the output above is correct, the issue is likely with the field 
logged as "Field:COUNTY", since the debug logging logs that line for each 
field before failing on the NPE.


[jira] [Comment Edited] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826154#comment-16826154
 ] 

Kevin Risden edited comment on SOLR-13414 at 4/25/19 3:21 PM:
--

[~davebarnett] - Can you run this query in your browser and share the results 
(it shouldn't have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match what the output of the following is (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78


was (Author: risdenk):
[~davebarnett] - Can you run this query in your browser and share the results 
(it shouldn't have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match what the output of the following is (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78


[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826154#comment-16826154
 ] 

Kevin Risden commented on SOLR-13414:
-

[~davebarnett] - Can you run this query in your browser and share the results 
(it shouldn't have anything sensitive):


{code:java}
http://SOLR_HOST:SOLR_PORT/solr/COLLECTION/admin/luke?numTerms=0
{code}

This should match the output of the following (in a slightly different 
format):

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L78
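For illustration, the Luke URL above can be assembled with a small standalone helper; the host, port, and collection values below are placeholders, not values from this issue:

```java
// Hypothetical helper that builds the admin/luke URL suggested above;
// host, port, and collection are placeholders to be filled in.
public class LukeUrl {
    static String lukeUrl(String host, int port, String collection) {
        return String.format("http://%s:%d/solr/%s/admin/luke?numTerms=0",
                host, port, collection);
    }

    public static void main(String[] args) {
        System.out.println(lukeUrl("localhost", 8983, "document"));
    }
}
```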

> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, managed-schema, new_solr-8983-console.log, new_solr.log, 
> solr-8983-console.log, solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, 
> solr.log
>
>
> When attempting to create a JDBC SQL query against a large collection (400m+ 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initialising correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> 

[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk-11.0.2) - Build # 5115 - Unstable!

2019-04-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/5115/
Java: 64bit/jdk-11.0.2 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
expected:<154> but was:<152>

Stack Trace:
java.lang.AssertionError: expected:<154> but was:<152>
at 
__randomizedtesting.SeedInfo.seed([151C1D6B9396620A:9D4822B13D6A0FF2]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:834)
at org.junit.Assert.assertEquals(Assert.java:645)
at org.junit.Assert.assertEquals(Assert.java:631)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:154)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826148#comment-16826148
 ] 

Kevin Risden commented on SOLR-13414:
-

Hmmm so does that mean that "luceneFieldInfo.getType()" is returning null and 
breaking the switch on line 103?

https://github.com/apache/lucene-solr/blob/branch_7_7/solr/core/src/java/org/apache/solr/handler/sql/SolrSchema.java#L103

The javadocs for LukeResponse.FieldInfo don't say anything about null 
guarantees. 
* 
https://lucene.apache.org/solr/7_7_0/solr-solrj/org/apache/solr/client/solrj/response/LukeResponse.FieldInfo.html#getType()

Checked the code and there is nothing stopping it from being null there.
* 
https://github.com/apache/lucene-solr/blob/branch_7_7/solr/solrj/src/java/org/apache/solr/client/solrj/response/LukeResponse.java#L118

I think we can come up with a Luke request that would get the same result for 
that collection, so we can see what is getting returned. We should be able to 
do this without adding more logging yet.
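To illustrate the failure mode under discussion (a standalone sketch, not Solr's own code): a Java switch on a null String throws a NullPointerException, which on the JDK versions in this report carries no message and would surface as the bare ": null" in the stack trace. The type names below are illustrative only:

```java
// Standalone sketch (not SolrSchema itself): switching on a null String
// throws a NullPointerException, matching the bare ": null" in the report.
public class NullTypeSwitchDemo {
    // Mirrors the shape of the switch in SolrSchema; type names are illustrative.
    static String classify(String luceneType) {
        switch (luceneType) { // NPE thrown here when luceneType is null
            case "string": return "VARCHAR";
            case "plong":  return "BIGINT";
            default:       return "OTHER";
        }
    }

    // Guarded variant: map a null field type to a default instead of crashing.
    static String classifySafe(String luceneType) {
        return luceneType == null ? "OTHER" : classify(luceneType);
    }

    // Returns true if the unguarded switch throws on null input.
    static boolean throwsNpeOnNull() {
        try {
            classify(null);
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("unguarded switch throws on null: " + throwsNpeOnNull());
        System.out.println("guarded result for null: " + classifySafe(null));
    }
}
```

This is only a demonstration of the mechanics; whether a null guard or a default type mapping is the right fix for SolrSchema is a separate question.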


[JENKINS-MAVEN] Lucene-Solr-Maven-master #2546: POMs out of sync

2019-04-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-master/2546/

No tests ran.

Build Log:
[...truncated 18054 lines...]
BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:673: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/build.xml:209: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/build.xml:408:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:1709:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-master/lucene/common-build.xml:581:
 Error deploying artifact 'org.apache.lucene:lucene-core:jar': Error deploying 
artifact: Error transferring file

Total time: 8 minutes 49 seconds
Build step 'Invoke Ant' marked build as failure
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-11) - Build # 23982 - Unstable!

2019-04-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/23982/
Java: 64bit/jdk-11 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {   "responseHeader":{ 
"status":0, "QTime":0},   "overlay":{ "znodeVersion":0, 
"runtimeLib":{"colltest":{ "name":"colltest", "version":1,  
from server:  null

Stack Trace:
java.lang.AssertionError: Could not get expected value  
'org.apache.solr.core.BlobStoreTestRequestHandler' for path 
'overlay/requestHandler/\/test1/class' full output: {
  "responseHeader":{
"status":0,
"QTime":0},
  "overlay":{
"znodeVersion":0,
"runtimeLib":{"colltest":{
"name":"colltest",
"version":1,  from server:  null
at 
__randomizedtesting.SeedInfo.seed([BAEB98D2740A5DEF:62A6B58583D7F84F]:0)
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:590)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:80)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Comment Edited] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-04-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826146#comment-16826146
 ] 

Jan Høydahl edited comment on SOLR-11959 at 4/25/19 3:10 PM:
-

Remember that the solution must also work with Kerberos, Hadoop, JWT and other 
Auth methods.

So perhaps it would be better to extend the PKI concept to work cross-cluster. 
Right now PKI will only accept requests from nodes found in the local ZK. I could 
imagine a pluggable mechanism here that could allow nodes from another cluster, e.g. 
by providing zk addr(s) for external trusted clusters. This must of course also 
support zk ACLs etc.


was (Author: janhoy):
Remember that the solution must also work with Kerberos, Hadoop, JWT and other 
Auth methods.

So perhaps better would be to extend the PKI concept to work cross collection. 
Right now PKI will only accept requests from nodes found in local ZK. I could 
imagine a pluggable thing here that could allow nodes from another cluster e.g. 
by providing zk addr(s) for external trusted clusters. This must of course also 
support zk ACL etc.

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, CDCR
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-04-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826146#comment-16826146
 ] 

Jan Høydahl commented on SOLR-11959:


Remember that the solution must also work with Kerberos, Hadoop, JWT and other 
Auth methods.

So perhaps better would be to extend the PKI concept to work cross collection. 
Right now PKI will only accept requests from nodes found in local ZK. I could 
imagine a pluggable thing here that could allow nodes from another cluster e.g. 
by providing zk addr(s) for external trusted clusters. This must of course also 
support zk ACL etc.




[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-04-25 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826114#comment-16826114
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Since SOLR-8389 didn't get enough traction, I would like to complete this Jira 
with the existing design.

{{CdcrReplicator}} at the source internally creates a SolrClient for the target 
and issues an UpdateRequest. We can pass Basic Auth details in the classic 
manner, as part of the request header.
For this to work:
1. We can put the Basic Auth username and password for the target at the source. 
This can create more security issues, since a plain-text password would appear 
in solrconfig.xml, which is exposed in multiple places, unlike security.json.
2. Read the target collection's security.json at the source (since the source 
cluster has access to all the files at the target), unhash the password, and 
pass it in the UpdateRequest. In solrconfig.xml at the source, we only need to 
provide the user, whose password will be fetched. This is a better security 
option than the above, as reading the security doc for a cluster is restricted 
to one module, Cdcr.

Looking forward to feedback on this.
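As a self-contained sketch of the mechanics behind option 1 — constructing the Basic Auth value carried in the request header — the credentials below are hypothetical placeholders (in practice SolrJ's own per-request credential support would be the natural carrier):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Standalone sketch of the "classic" Basic Auth request header described
// above; user/password values are hypothetical placeholders.
public class BasicAuthHeaderDemo {
    static String basicAuthHeader(String user, String password) {
        String token = Base64.getEncoder()
                .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // The Authorization header value that would accompany the
        // UpdateRequest sent to the target cluster.
        System.out.println(basicAuthHeader("user", "pass"));
    }
}
```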

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, CDCR
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}
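For context, a minimal sketch of the kind of security.json described in step 2 that produces this 401. The plugin classes are Solr's standard Basic Auth and rule-based authorization plugins; the role names, user name, and permission layout are illustrative, and the credentials hash is elided:

```json
{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "credentials": { "solr": "<base64-sha256-hash> <base64-salt>" }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "permissions": [
      { "name": "read",   "role": "reader" },
      { "name": "update", "role": "writer" }
    ],
    "user-role": { "solr": ["reader", "writer"] }
  }
}
```

With the "update" permission locked to a role, the CDCR replicator's unauthenticated forwarded updates are rejected with 401 as shown in the stack trace above.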



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13425) Wrong color in horizontal definition list

2019-04-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-13425.

Resolution: Fixed

> Wrong color in horizontal definition list
> -
>
> Key: SOLR-13425
> URL: https://issues.apache.org/jira/browse/SOLR-13425
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.1, 8.0
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> See 
> [https://lucene.apache.org/solr/guide/7_7/monitoring-solr-with-prometheus-and-grafana.html#configuration-tags-and-elements]
> The {{[horizontal]}} definition list ends up in an HTML table with keys in a 
> {{foo}} tag. The text here is white on a white 
> background, since it inherits from the {{table th code}} rule in 
> {{customstyles.css}}.
> A possible fix is to set bold black text in ref-guide.css; see the PR.
> [~ctargett]
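For illustration, a hedged sketch of the kind of override the description suggests. The exact selector and rule in the merged PR may differ; this assumes the definition-list keys are rendered as code elements inside table header cells:

```css
/* ref-guide.css (sketch): override the inherited white-on-white style
   from customstyles.css so keys in [horizontal] definition lists are
   rendered as bold black text. */
table th code {
  color: #000;
  font-weight: bold;
}
```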






[jira] [Commented] (SOLR-13425) Wrong color in horizontal definition list

2019-04-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826094#comment-16826094
 ] 

ASF subversion and git services commented on SOLR-13425:


Commit b5a872d3fd55a67be16a9fc30671b29be0ece013 in lucene-solr's branch 
refs/heads/branch_8_0 from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=b5a872d ]

SOLR-13425: Wrong color in horizontal definition list (#653)

(cherry picked from commit ef79dd548d410dde90235b56fe6d7ad5adb351f3)


> Wrong color in horizontal definition list
> -
>
> Key: SOLR-13425
> URL: https://issues.apache.org/jira/browse/SOLR-13425
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.0, 8.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>






[jira] [Commented] (SOLR-13425) Wrong color in horizontal definition list

2019-04-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826090#comment-16826090
 ] 

ASF subversion and git services commented on SOLR-13425:


Commit 1cf0439a24c55ee1c00ad0141cafd01a9550d8f8 in lucene-solr's branch 
refs/heads/branch_8x from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1cf0439 ]

SOLR-13425: Wrong color in horizontal definition list (#653)

(cherry picked from commit ef79dd548d410dde90235b56fe6d7ad5adb351f3)


> Wrong color in horizontal definition list
> -
>
> Key: SOLR-13425
> URL: https://issues.apache.org/jira/browse/SOLR-13425
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.0, 8.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>






[jira] [Commented] (SOLR-13425) Wrong color in horizontal definition list

2019-04-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826087#comment-16826087
 ] 

ASF subversion and git services commented on SOLR-13425:


Commit ef79dd548d410dde90235b56fe6d7ad5adb351f3 in lucene-solr's branch 
refs/heads/master from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=ef79dd5 ]

SOLR-13425: Wrong color in horizontal definition list (#653)




> Wrong color in horizontal definition list
> -
>
> Key: SOLR-13425
> URL: https://issues.apache.org/jira/browse/SOLR-13425
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
> Fix For: 8.0, 8.1
>
>  Time Spent: 1h
>  Remaining Estimate: 0h
>






[GitHub] [lucene-solr] janhoy merged pull request #653: SOLR-13425: Wrong color in horizontal definition list

2019-04-25 Thread GitBox
janhoy merged pull request #653: SOLR-13425: Wrong color in horizontal 
definition list
URL: https://github.com/apache/lucene-solr/pull/653
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[JENKINS] Lucene-Solr-Tests-master - Build # 3309 - Unstable

2019-04-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/3309/

2 tests failed.
FAILED:  org.apache.solr.cloud.SystemCollectionCompatTest.testBackCompat

Error Message:
Error from server at http://127.0.0.1:46582/solr/.system: Error reading input 
String Can't find resource 'schema.xml' in classpath or '/configs/.system', 
cwd=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J2

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:46582/solr/.system: Error reading input String 
Can't find resource 'schema.xml' in classpath or '/configs/.system', 
cwd=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-master/solr/build/solr-core/test/J2
at 
__randomizedtesting.SeedInfo.seed([52AB4834EA31666:75DF172A2E6BBF10]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:649)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.doRequest(LBSolrClient.java:368)
at 
org.apache.solr.client.solrj.impl.LBSolrClient.request(LBSolrClient.java:296)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:1068)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.cloud.SystemCollectionCompatTest.setupSystemCollection(SystemCollectionCompatTest.java:104)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:972)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
   

[GitHub] [lucene-solr] ctargett commented on issue #653: SOLR-13425: Wrong color in horizontal definition list

2019-04-25 Thread GitBox
ctargett commented on issue #653: SOLR-13425: Wrong color in horizontal 
definition list
URL: https://github.com/apache/lucene-solr/pull/653#issuecomment-48228
 
 
   Yes, I'd love it if you would merge to branch_8_0 - if you didn't, I was 
going to. Thanks for fixing this.






RE: svn commit: r1858076 - /lucene/cms/trunk/templates/sidenav.mdtext

2019-04-25 Thread Uwe Schindler
Hi Jan,

 

All fine! I was not aware of this requirement, so I think we keep it the same 
for subprojects as well.

 

Uwe

 

-

Uwe Schindler

Achterdiek 19, D-28357 Bremen

https://www.thetaphi.de

eMail: u...@thetaphi.de

 

From: Jan Høydahl  
Sent: Thursday, April 25, 2019 1:26 PM
To: dev@lucene.apache.org
Subject: Re: svn commit: r1858076 - /lucene/cms/trunk/templates/sidenav.mdtext

 

Uwe,

 

See my previous email two days ago with title "Fix license link on project 
website", see https://www.apache.org/foundation/marks/pmcs#navigation 

 

But I think this requirement is only for the front-page of the project, so my 
edit for core (java) was perhaps not needed.

I'm also adding the five standard ASF links to the footer of the Solr sub 
project, see http://lucene.staging.apache.org/solr/

 

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com  





On 25 Apr 2019 at 10:01, Uwe Schindler wrote:

 

Hi,

why did you change the license link? The old link 
(http://www.apache.org/licenses/LICENSE-2.0) works perfectly fine here (maybe 
it was just a short hiccup on ASF servers?). The new link just shows the 
general ASF licenses page, but as Lucene is using the 2.0 license, the link 
should go there?

Uwe

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: u...@thetaphi.de  




-Original Message-
From: jan...@apache.org   mailto:jan...@apache.org> >
Sent: Wednesday, April 24, 2019 11:55 PM
To: comm...@lucene.apache.org  
Subject: svn commit: r1858076 -
/lucene/cms/trunk/templates/sidenav.mdtext

Author: janhoy
Date: Wed Apr 24 21:55:26 2019
New Revision: 1858076

URL: http://svn.apache.org/viewvc?rev=1858076&view=rev
Log:
Fix license link

Modified:
   lucene/cms/trunk/templates/sidenav.mdtext

Modified: lucene/cms/trunk/templates/sidenav.mdtext
URL:
http://svn.apache.org/viewvc/lucene/cms/trunk/templates/sidenav.mdtext?
rev=1858076&r1=1858075&r2=1858076&view=diff

==
--- lucene/cms/trunk/templates/sidenav.mdtext (original)
+++ lucene/cms/trunk/templates/sidenav.mdtext Wed Apr 24 21:55:26 2019
@@ -1,4 +1,3 @@
-
  
Download
Click to begin
@@ -19,7 +18,7 @@
  - [Open Relevance (Discontinued)](./openrelevance/)

# About
-  - [License](http://www.apache.org/licenses/LICENSE-2.0)
+  - [License](https://www.apache.org/licenses/)
  - [Who We are](./whoweare.html)

# Events





 



[jira] [Commented] (SOLR-13427) Support simulating the execution of autoscaling suggestion

2019-04-25 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16826028#comment-16826028
 ] 

Andrzej Bialecki  commented on SOLR-13427:
--

This patch contains several changes:

* moves the main simulation framework classes to {{solr/core}} (from tests)
* adds several improvements to the simulator so that it can be fully 
initialized from an already running instance of Solr
* adds support in {{SolrCLI autoscaling}} for running a specified number of 
iterations to simulate the effects of applying suggestions, and provides 
detailed status of intermediate states. 

TODO:
* cleanup of {{CloudTestUtils}}, since most of its methods have been moved to 
{{CloudUtil}} in solr/core.
* maybe add support for dumping a snapshot of a (real) cluster state so that it 
can be used in a later simulation run instead of the real cluster, for running 
"what would've happened if" scenarios?

> Support simulating the execution of autoscaling suggestion
> --
>
> Key: SOLR-13427
> URL: https://issues.apache.org/jira/browse/SOLR-13427
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-13427.patch
>
>
> It's not always clear what would be the final state of the cluster after 
> applying the suggested changes (obtained from {{/autoscaling/suggestions}}), 
> especially on a large and busy cluster where several autoscaling rules have 
> to be considered.
> This issue proposes to use the simulation framework for simulating the 
> effects of the suggestions.
> First, the simulator would be initialized from the current state of a real 
> cluster. Then it would run several rounds of simulated execution of 
> suggestions until there were either no more suggestions (the cluster would be 
> perfectly balanced) or the iteration count limit was reached.
> This simulation could be executed using either the deployed autoscaling 
> config or one provided by the user, which would make it easier to test the 
> effects of various configurations on the cluster layout.
> Support for this functionality would be integrated into the existing 
> {{SolrCLI autoscaling}} tool.
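The simulation loop described above can be sketched as follows. This is a hypothetical illustration of the proposed control flow, not the actual patch; the `get_suggestions` and `apply_suggestion` callables stand in for the real simulator and `/autoscaling/suggestions` machinery:

```python
def simulate(state, get_suggestions, apply_suggestion, max_iterations=10):
    """Repeatedly apply autoscaling suggestions to a simulated cluster
    state until none remain (the cluster is balanced) or the iteration
    limit is reached. Returns the final state and rounds executed."""
    for iteration in range(max_iterations):
        suggestions = get_suggestions(state)
        if not suggestions:
            return state, iteration  # balanced before hitting the limit
        for suggestion in suggestions:
            state = apply_suggestion(state, suggestion)
    return state, max_iterations

# Toy model: state counts misplaced replicas; each suggestion moves one.
final, rounds = simulate(
    3,
    lambda s: ["move-replica"] if s > 0 else [],
    lambda s, _suggestion: s - 1,
)
```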






[jira] [Updated] (SOLR-13427) Support simulating the execution of autoscaling suggestion

2019-04-25 Thread Andrzej Bialecki (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki  updated SOLR-13427:
-
Attachment: SOLR-13427.patch

> Support simulating the execution of autoscaling suggestion
> --
>
> Key: SOLR-13427
> URL: https://issues.apache.org/jira/browse/SOLR-13427
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: SOLR-13427.patch
>
>






[jira] [Commented] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-04-25 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825990#comment-16825990
 ] 

Noble Paul commented on SOLR-13320:
---

{{ignoreVersionConflicts=true}} makes more sense

> add a param ignoreDuplicates=true to updates to not overwrite existing docs
> ---
>
> Key: SOLR-13320
> URL: https://issues.apache.org/jira/browse/SOLR-13320
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> Updates should have an option to ignore duplicate documents and drop them 
> when {{ignoreDuplicates=true}} is specified.
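A sketch of the proposed semantics, purely as an illustration (the parameter does not exist yet, and the real implementation would live in Solr's update processor chain, not in client code):

```python
def apply_updates(index, updates, ignore_duplicates=False):
    """With ignore_duplicates=True, an update whose id already exists in
    the index is silently dropped instead of overwriting the stored
    document; the default behaviour overwrites as Solr does today."""
    for doc in updates:
        if ignore_duplicates and doc["id"] in index:
            continue  # drop the duplicate, keep the existing document
        index[doc["id"]] = doc
    return index
```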






[jira] [Updated] (SOLR-13126) Multiplicative boost of isn't applied when one of the summed or multiplied queries doesn't match

2019-04-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13126:
---
Fix Version/s: 7.7.2

> Multiplicative boost of isn't applied when one of the summed or multiplied 
> queries doesn't match 
> -
>
> Key: SOLR-13126
> URL: https://issues.apache.org/jira/browse/SOLR-13126
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 7.3, 7.4, 7.6, 7.7, 7.5.0, 7.7.1
> Environment: Reproduced with macOS 10.14.1, a quick test with Windows 
> 10 showed the same result.
>Reporter: Thomas Aglassinger
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 7.7.2, 8.0
>
> Attachments: 
> 0001-use-deprecated-classes-to-fix-regression-introduced-.patch, 
> 0002-SOLR-13126-Added-test-case.patch, 2019-02-14_1715.png, SOLR-13126.patch, 
> SOLR-13126.patch, debugQuery.json, image-2019-02-13-16-17-56-272.png, 
> screenshot-1.png, solr_match_neither_nextteil_nor_sony.json, 
> solr_match_neither_nextteil_nor_sony.txt, solr_match_netzteil_and_sony.json, 
> solr_match_netzteil_and_sony.txt, solr_match_netzteil_only.json, 
> solr_match_netzteil_only.txt
>
>
> Under certain circumstances, search results from queries with multiple 
> multiplicative boosts using the Solr functions {{product()}} and {{query()}} 
> produce a score that is inconsistent with the debugQuery 
> information. Only the debug score is correct; the actual search 
> results show a wrong score.
> This seems somewhat similar to the behaviour described in 
> https://issues.apache.org/jira/browse/LUCENE-7132, though that issue was 
> resolved a while ago.
> A little background: we are using Solr as a search platform for the 
> e-commerce framework SAP Hybris. There the shop administrator can create 
> multiplicative boost rules (see below for an example) where a value like 2.0 
> means that an item gets boosted to 200%. This works fine in the demo shop 
> distributed by SAP but breaks in our shop. We encountered the issue when 
> upgrading from Solr 7.2.1 / Hybris 6.7 to Solr 7.5 / Hybris 18.8.3 (which 
> would have been named Hybris 6.8, but the version naming scheme changed).
> We reduced the Solr query generated by Hybris to the relevant parts and could 
> reproduce the issue in the Solr admin without any Hybris connection.
> I attached the JSON result of a test query but here's a description of the 
> parts that seemed most relevant to me.
> The {{responseHeader.params}} reads (slightly rearranged):
> {code:java}
> "q":"{!boost b=$ymb}(+{!lucene v=$yq})",
> "ymb":"product(query({!v=\"name_text_de\\:Netzteil\\^=2.0\"},1),query({!v=\"name_text_de\\:Sony\\^=3.0\"},1))",
> "yq":"*:*",
> "sort":"score desc",
> "debugQuery":"true",
> // Added to keep the output small but probably unrelated to the actual issue
> "fl":"score,id,code_string,name_text_de",
> "fq":"catalogId:\"someProducts\"",
> "rows":"10",
> {code}
> This example boosts the German product name (field {{name_text_de}}) when 
> it contains certain terms:
>  * "Netzteil" (power supply) is boosted to 200%
>  * "Sony" is boosted to 300%
> Consequently a product containing both terms should be boosted to 600%.
> The query function also has the value 1 specified as a default in case the 
> name does not contain the respective term, resulting in a pseudo-boost that 
> preserves the score.
> According to the debug information the parser used is the LuceneQParser, 
> which translates this to the following parsed query:
> {quote}FunctionScoreQuery(FunctionScoreQuery(+*:*, scored by 
> boost(product(query((ConstantScore(name_text_de:netzteil))^2.0,def=1.0),query((ConstantScore(name_text_de:sony))^3.0,def=1.0)
> {quote}
> And the translated boost is:
> {quote}org.apache.lucene.queries.function.valuesource.ProductFloatFunction:product(query((ConstantScore(name_text_de:netzteil))^2.0,def=1.0),query((ConstantScore(name_text_de:sony))^3.0,def=1.0))
> {quote}
> When taking a look at the search result, among other the following products 
> are included (see the JSON comments for an analysis of each result):
> {code:javascript}
>  {
> "id":"someProducts/Online/test711",
> "name_text_de":"Original Sony Vaio Netzteil",
> "code_string":"test711",
> // CORRECT, both "Netzteil" and "Sony" are included in the name
> "score":6.0},
>   {
> "id":"someProducts/Online/taxTestingProductThree",
> "name_text_de":"Steuertestprodukt Zwei",
> "code_string":"taxTestingProductThree",
> // CORRECT, neither "Netzteil" nor "Sony" are included in the name
> "score":1.0},
>   {
>
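The expected scoring from the report above is simple arithmetic, and can be modelled as follows. This is a sketch of the intended behaviour of `product(query(..., 1), query(..., 1))`, not of Solr internals; each `query()` falls back to its default of 1 when the term does not match:

```python
def boost(name):
    """Expected multiplicative boost for a product name:
    200% if it contains 'Netzteil', 300% if it contains 'Sony',
    and 1.0 (the query() default) for each non-matching term."""
    netzteil = 2.0 if "netzteil" in name.lower() else 1.0
    sony = 3.0 if "sony" in name.lower() else 1.0
    return netzteil * sony
```

This reproduces the scores called "CORRECT" in the JSON comments: 6.0 when both terms match, 1.0 when neither does.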

[jira] [Commented] (SOLR-13126) Multiplicative boost of isn't applied when one of the summed or multiplied queries doesn't match

2019-04-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825983#comment-16825983
 ] 

ASF subversion and git services commented on SOLR-13126:


Commit 53d48e21b499c321c4ebd2dc55b24565a72f6e0c in lucene-solr's branch 
refs/heads/branch_7_7 from Jan Høydahl
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=53d48e2 ]

SOLR-13126: Correctly combine multiplicative query boosts

(Cherry pick 03945a9f5932ee78bbaa46becb551f348bca509e and backport, fix CHANGES)


> Multiplicative boost of isn't applied when one of the summed or multiplied 
> queries doesn't match 
> -
>
> Key: SOLR-13126
> URL: https://issues.apache.org/jira/browse/SOLR-13126
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 7.3, 7.4, 7.6, 7.7, 7.5.0, 7.7.1
> Environment: Reproduced with macOS 10.14.1, a quick test with Windows 
> 10 showed the same result.
>Reporter: Thomas Aglassinger
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.0
>
> Attachments: 
> 0001-use-deprecated-classes-to-fix-regression-introduced-.patch, 
> 0002-SOLR-13126-Added-test-case.patch, 2019-02-14_1715.png, SOLR-13126.patch, 
> SOLR-13126.patch, debugQuery.json, image-2019-02-13-16-17-56-272.png, 
> screenshot-1.png, solr_match_neither_nextteil_nor_sony.json, 
> solr_match_neither_nextteil_nor_sony.txt, solr_match_netzteil_and_sony.json, 
> solr_match_netzteil_and_sony.txt, solr_match_netzteil_only.json, 
> solr_match_netzteil_only.txt
>
>

Re: [jira] [Commented] (SOLR-11795) Add Solr metrics exporter for Prometheus

2019-04-25 Thread Koji Sekiguchi

Hi Ishan,

I'm sorry for the late reply.

I think it makes sense, but can you open a new ticket, since SOLR-11795 has 
already been closed?

Thank you for letting me know this!

Koji


On 2019/04/11 19:31, Ishan Chattopadhyaya (JIRA) wrote:


 [ 
https://issues.apache.org/jira/browse/SOLR-11795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16815301#comment-16815301
 ]

Ishan Chattopadhyaya commented on SOLR-11795:
-

Does it make sense to add a link about this exporter in the metrics reporting 
page? I've attached a patch for the same. [~ctargett] / [~koji], can you please 
review?


Add Solr metrics exporter for Prometheus


 Key: SOLR-11795
 URL: https://issues.apache.org/jira/browse/SOLR-11795
 Project: Solr
  Issue Type: Improvement
  Security Level: Public(Default Security Level. Issues are Public)
  Components: metrics
Affects Versions: 7.2
Reporter: Minoru Osuka
Assignee: Koji Sekiguchi
Priority: Minor
 Fix For: 7.3, 8.0

 Attachments: SOLR-11795-10.patch, SOLR-11795-11.patch, 
SOLR-11795-2.patch, SOLR-11795-3.patch, SOLR-11795-4.patch, SOLR-11795-5.patch, 
SOLR-11795-6.patch, SOLR-11795-7.patch, SOLR-11795-8.patch, SOLR-11795-9.patch, 
SOLR-11795-dev-tools.patch, SOLR-11795-doc.patch, SOLR-11795-ref-guide.patch, 
SOLR-11795.patch, solr-dashboard.png, solr-exporter-diagram.png

  Time Spent: 20m
  Remaining Estimate: 0h

I'd like to monitor Solr using Prometheus and Grafana.
I've already created a Solr metrics exporter for Prometheus. I'd like to 
contribute it to the contrib directory if you don't mind.
!solr-exporter-diagram.png|thumbnail!
!solr-dashboard.png|thumbnail!




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org







Re: svn commit: r1858076 - /lucene/cms/trunk/templates/sidenav.mdtext

2019-04-25 Thread Jan Høydahl
Uwe,

See my previous email two days ago with title "Fix license link on project 
website", see https://www.apache.org/foundation/marks/pmcs#navigation 

But I think this requirement only applies to the project's front page, so my 
edit for core (java) was perhaps not needed.
I'm also adding the five standard ASF links to the footer of the Solr 
subproject, see http://lucene.staging.apache.org/solr/

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com

> 25. apr. 2019 kl. 10:01 skrev Uwe Schindler :
> 
> Hi,
> 
> why did you change the license link? 
>  works perfectly fine here (maybe 
> it was just a short hiccup on ASF servers?). The new link just shows the 
> general ASF licenses page, but since Lucene uses the 2.0 license, shouldn't 
> the link point there?
> 
> Uwe
> 
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> https://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
>> -Original Message-
>> From: jan...@apache.org 
>> Sent: Wednesday, April 24, 2019 11:55 PM
>> To: comm...@lucene.apache.org
>> Subject: svn commit: r1858076 -
>> /lucene/cms/trunk/templates/sidenav.mdtext
>> 
>> Author: janhoy
>> Date: Wed Apr 24 21:55:26 2019
>> New Revision: 1858076
>> 
>> URL: http://svn.apache.org/viewvc?rev=1858076=rev
>> Log:
>> Fix license link
>> 
>> Modified:
>>lucene/cms/trunk/templates/sidenav.mdtext
>> 
>> Modified: lucene/cms/trunk/templates/sidenav.mdtext
>> URL:
>> http://svn.apache.org/viewvc/lucene/cms/trunk/templates/sidenav.mdtext?
>> rev=1858076=1858075=1858076=diff
>> 
>> ==
>> --- lucene/cms/trunk/templates/sidenav.mdtext (original)
>> +++ lucene/cms/trunk/templates/sidenav.mdtext Wed Apr 24 21:55:26 2019
>> @@ -1,4 +1,3 @@
>> -
>>   
>> Download
>> Click to begin
>> @@ -19,7 +18,7 @@
>>   - [Open Relevance (Discontinued)](./openrelevance/)
>> 
>> # About
>> -  - [License](http://www.apache.org/licenses/LICENSE-2.0)
>> +  - [License](https://www.apache.org/licenses/)
>>   - [Who We are](./whoweare.html)
>> 
>> # Events
> 
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



[jira] [Commented] (SOLR-12584) Add basic auth credentials configuration to the Solr exporter for Prometheus/Grafana

2019-04-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825975#comment-16825975
 ] 

Jan Høydahl commented on SOLR-12584:


[~sbillet], fantastic. Feel free to hack a documentation PR on 
[https://github.com/apache/lucene-solr/blob/master/solr/solr-ref-guide/src/monitoring-solr-with-prometheus-and-grafana.adoc]
 If you mention SOLR-12584 in the title of the PR it will automatically be 
linked with this JIRA.

> Add basic auth credentials configuration to the Solr exporter for 
> Prometheus/Grafana  
> --
>
> Key: SOLR-12584
> URL: https://issues.apache.org/jira/browse/SOLR-12584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, metrics, security
>Affects Versions: 7.3, 7.4
>Reporter: Dwane Hall
>Priority: Minor
>  Labels: authentication, metrics, security
> Attachments: lucene-solr.patch
>
>
> The Solr exporter for Prometheus/Grafana provides a useful visual layer over 
> the Solr metrics API for monitoring the state of a Solr cluster. Currently it 
> cannot be configured and used on a secure Solr cluster with the Basic 
> Authentication plugin enabled: the exporter does not provide a mechanism to 
> configure or pass through basic auth credentials when SolrJ requests 
> information from the metrics API endpoints. Adding this would be useful for 
> Solr users running a secure Solr instance.
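At the HTTP level, the missing piece boils down to attaching a Basic Authorization header to the exporter's requests. A minimal illustration of what such a header looks like (the credentials are placeholders, and this is not the exporter's actual code):

```python
import base64

def basic_auth_header(user, password):
    # RFC 7617: "Basic " followed by base64("user:password")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Placeholder credentials; a real exporter would read these from its
# configuration rather than hard-coding them.
print(basic_auth_header("solr", "SolrRocks"))
```

A real implementation would pass equivalent credentials into the SolrJ client used by the exporter instead of building the header by hand.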






[jira] [Commented] (SOLR-13320) add a param ignoreDuplicates=true to updates to not overwrite existing docs

2019-04-25 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825968#comment-16825968
 ] 

Shalin Shekhar Mangar commented on SOLR-13320:
--

Thanks [~dragonsinth] for explaining the use-case and the problem.

These are conflicts -- a document was not the version we wanted it to be. Here 
{{-1}} is just a special version that means the document should not have 
existed. So I think {{ignoreConflicts}} or {{ignoreVersionConflicts}} is more 
appropriate than {{ignoreDuplicates}}. Regardless of what we call the param, 
returning a list of doc IDs that were skipped would be nice to have, as Gus 
noted. {{haltBatchOnError}} is definitely too broad, and it is not always 
possible to recover from errors e.g. if there is malformed JSON in the middle 
of a batch.
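The semantics under discussion can be sketched in a few lines. This is an illustrative model only (the `apply_batch` helper and its names are hypothetical, not Solr code): a doc sent with version {{-1}} must not already exist, and with the proposed flag such conflicts are skipped and reported instead of failing the batch.

```python
def apply_batch(index, batch, ignore_conflicts=False):
    """Hypothetical model of the proposed flag: _version_=-1 means the
    doc must not already exist; conflicting docs are skipped (and their
    ids returned) instead of aborting the whole batch."""
    skipped = []
    for doc in batch:
        if doc.get("_version_") == -1 and doc["id"] in index:
            if not ignore_conflicts:
                raise ValueError(f"version conflict on doc {doc['id']}")
            skipped.append(doc["id"])  # report, don't fail
            continue
        index[doc["id"]] = doc
    return skipped
```

With the flag on, existing docs survive untouched and the caller gets the list of skipped ids; with it off, the first conflict aborts the batch, matching today's optimistic-concurrency failure.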

> add a param ignoreDuplicates=true to updates to not overwrite existing docs
> ---
>
> Key: SOLR-13320
> URL: https://issues.apache.org/jira/browse/SOLR-13320
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> Updates should have an option to ignore duplicate documents and drop them 
> when {{ignoreDuplicates=true}} is specified.






[jira] [Updated] (SOLR-13414) Sql Schema is not initializing

2019-04-25 Thread David Barnett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Barnett updated SOLR-13414:
-
Attachment: solr-8983-console.log

Morning all

I had left the solr-core-7.7.1 jar in WEB-INF/lib (thank you).

I now get the output with field info.

Of the 83 fields, all say "Field info is not null".

None report "is null".

Attached is the console log file.






-- 
*David Barnett*
O Technology Consulting Ltd
oand...@gmail.com
+44 (0) 7753 235608


> Sql Schema is not initializing
> --
>
> Key: SOLR-13414
> URL: https://issues.apache.org/jira/browse/SOLR-13414
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Parallel SQL
>Affects Versions: 7.3, 7.7.1
>Reporter: David Barnett
>Priority: Major
> Attachments: SOLR-13414.patch, before_starting_solr.png, 
> command_prompt.png, managed-schema, new_solr-8983-console.log, new_solr.log, 
> solr-8983-console.log, solr-8983-console.log, solr-core-7.8.0-SNAPSHOT.jar, 
> solr.log
>
>
> When attempting to create a JDBC SQL query against a large collection (400m+ 
> records) we get a null error.
> After [initial discussion in 
> solr-user|http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201904.mbox/%3C1dd6ac3b-e17b-4c29-872e-c7560504a46c%40Spark%3E]
>  I have been asked to open this ticket. The exception thrown does not 
> provide sufficient detail to understand the underlying problem. It is 
> thought to be an issue with the schema not initializing correctly. 
> Attached is the managed-schema after a downconfig.
> Stack trace from email thread:
> *Solr Admin UI Logging*
> {code:java}
> java.io.IOException: Failed to execute sqlQuery 'select id from document 
> limit 10' against JDBC connection 'jdbc:calcitesolr:'.
> Error while executing SQL "select id from document limit 10": null
> at 
> org.apache.solr.client.solrj.io.stream.JDBCStream.open(JDBCStream.java:271)
> at 
> org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:54)
> at 
> org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:394)
> at 
> org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:78)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeMap(JsonTextWriter.java:164)
> at org.apache.solr.common.util.TextWriter.writeVal(TextWriter.java:69)
> at 
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:152)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedListAsMapWithDups(JsonTextWriter.java:386)
> at 
> org.apache.solr.common.util.JsonTextWriter.writeNamedList(JsonTextWriter.java:292)
> at 
> org.apache.solr.response.JSONWriter.writeResponse(JSONWriter.java:73)
> at 
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:66)
> at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
> at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:788)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:525)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:395)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:341)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1588)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1345)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1557)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1247)
> at 
> 

[jira] [Commented] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825965#comment-16825965
 ] 

ASF subversion and git services commented on SOLR-13081:


Commit 6d94631538afaa85808dcd221da4835aca6b65dc in lucene-solr's branch 
refs/heads/master from Mikhail Khludnev
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6d94631 ]

SOLR-13081: Let in-place update work with route.field


> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch, SOLR-13081.patch
>
>
> As soon as a cloud collection is configured with the route.field property, 
> in-place updates are no longer applied. This happens because 
> AtomicUpdateDocumentMerger skips only the id and version fields and doesn't 
> check the configured route.field.
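The shape of the fix can be illustrated with a small sketch (a hypothetical helper, not the actual AtomicUpdateDocumentMerger code): the set of fields excluded from the in-place check must include the router field, not just the uniqueKey and version fields.

```python
def in_place_updatable_fields(doc_fields, unique_key="id",
                              version_field="_version_", route_field=None):
    """Return the fields an update may change in place: everything except
    the uniqueKey, the version field and -- the fix -- the router field."""
    excluded = {unique_key, version_field}
    if route_field is not None:
        excluded.add(route_field)  # previously not excluded, breaking routing
    return set(doc_fields) - excluded
```

Without the router field in the excluded set, an update carrying it would no longer qualify as in-place, which matches the behavior reported here.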






[jira] [Updated] (SOLR-13126) Multiplicative boost of isn't applied when one of the summed or multiplied queries doesn't match

2019-04-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13126:
---
Affects Version/s: 7.3
   7.4
   7.6
   7.7
   7.7.1

> Multiplicative boost of isn't applied when one of the summed or multiplied 
> queries doesn't match 
> -
>
> Key: SOLR-13126
> URL: https://issues.apache.org/jira/browse/SOLR-13126
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 7.3, 7.4, 7.5.0, 7.6, 7.7, 7.7.1
> Environment: Reproduced with macOS 10.14.1; a quick test with Windows 
> 10 showed the same result.
>Reporter: Thomas Aglassinger
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.0
>
> Attachments: 
> 0001-use-deprecated-classes-to-fix-regression-introduced-.patch, 
> 0002-SOLR-13126-Added-test-case.patch, 2019-02-14_1715.png, SOLR-13126.patch, 
> SOLR-13126.patch, debugQuery.json, image-2019-02-13-16-17-56-272.png, 
> screenshot-1.png, solr_match_neither_nextteil_nor_sony.json, 
> solr_match_neither_nextteil_nor_sony.txt, solr_match_netzteil_and_sony.json, 
> solr_match_netzteil_and_sony.txt, solr_match_netzteil_only.json, 
> solr_match_netzteil_only.txt
>
>
> Under certain circumstances, search results from queries with multiple 
> multiplicative boosts using the Solr functions {{product()}} and {{query()}} 
> have a score that is inconsistent with the one from the debugQuery 
> information. Only the debug score is correct; the actual search results show 
> a wrong score.
> This seems somewhat similar to the behaviour described in 
> https://issues.apache.org/jira/browse/LUCENE-7132, though this issue has been 
> resolved a while ago.
> A little background: we are using Solr as a search platform for the 
> e-commerce framework SAP Hybris. There the shop administrator can create 
> multiplicative boost rules (see below for an example) where a value like 2.0 
> means that an item gets boosted to 200%. This works fine in the demo shop 
> distributed by SAP but breaks in our shop. We encountered the issue when 
> upgrading from Solr 7.2.1 / Hybris 6.7 to Solr 7.5 / Hybris 18.8.3 (which 
> would have been named Hybris 6.8, but the version naming scheme changed).
> We reduced the Solr query generated by Hybris to the relevant parts and could 
> reproduce the issue in the Solr admin without any Hybris connection.
> I attached the JSON result of a test query but here's a description of the 
> parts that seemed most relevant to me.
> The {{responseHeader.params}} reads (slightly rearranged):
> {code:java}
> "q":"{!boost b=$ymb}(+{!lucene v=$yq})",
> "ymb":"product(query({!v=\"name_text_de\\:Netzteil\\^=2.0\"},1),query({!v=\"name_text_de\\:Sony\\^=3.0\"},1))",
> "yq":"*:*",
> "sort":"score desc",
> "debugQuery":"true",
> // Added to keep the output small but probably unrelated to the actual issue
> "fl":"score,id,code_string,name_text_de",
> "fq":"catalogId:\"someProducts\"",
> "rows":"10",
> {code}
> This example boosts the German product name (field {{name_text_de}}) in case 
> it contains certain terms:
>  * "Netzteil" (power supply) is boosted to 200%
>  * "Sony" is boosted to 300%
> Consequently a product containing both terms should be boosted to 600%.
> Also, the query function has the value 1 specified as a default for when the 
> name does not contain the respective term, resulting in a pseudo boost that 
> preserves the score.
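The intended arithmetic can be checked with a short sketch (illustrative Python, not Solr's scoring code; the term matching is simplified to a substring test):

```python
def query_boost(field_value, term, boost, default=1.0):
    # Mimics query({!v="field:term^=boost"}, default): a constant-score
    # term query that yields `boost` on a match and `default` otherwise.
    return boost if term.lower() in field_value.lower() else default

def boosted_score(name, base_score=1.0):
    # product(query(name_text_de:Netzteil^=2.0, 1),
    #         query(name_text_de:Sony^=3.0, 1))
    return (base_score
            * query_boost(name, "Netzteil", 2.0)
            * query_boost(name, "Sony", 3.0))

print(boosted_score("Original Sony Vaio Netzteil"))  # 2.0 * 3.0 = 6.0
print(boosted_score("Steuertestprodukt Zwei"))       # neither term: 1.0
```

These are the scores the debug information reports; the bug is that the actual result list diverges from them.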
> According to the debug information the parser used is the LuceneQParser, 
> which translates this to the following parsed query:
> {quote}FunctionScoreQuery(FunctionScoreQuery(+*:*, scored by 
> boost(product(query((ConstantScore(name_text_de:netzteil))^2.0,def=1.0),query((ConstantScore(name_text_de:sony))^3.0,def=1.0)
> {quote}
> And the translated boost is:
> {quote}org.apache.lucene.queries.function.valuesource.ProductFloatFunction:product(query((ConstantScore(name_text_de:netzteil))^2.0,def=1.0),query((ConstantScore(name_text_de:sony))^3.0,def=1.0))
> {quote}
> Looking at the search result, among others the following products 
> are included (see the JSON comments for an analysis of each result):
> {code:javascript}
>  {
> "id":"someProducts/Online/test711",
> "name_text_de":"Original Sony Vaio Netzteil",
> "code_string":"test711",
> // CORRECT, both "Netzteil" and "Sony" are included in the name
> "score":6.0},
>   {
> "id":"someProducts/Online/taxTestingProductThree",
> "name_text_de":"Steuertestprodukt Zwei",
> "code_string":"taxTestingProductThree",
> 

[jira] [Commented] (SOLR-13409) Remove directory listings in Jetty config

2019-04-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825962#comment-16825962
 ] 

ASF subversion and git services commented on SOLR-13409:


Commit d86d8db316d3520b08a301a46c933f7a8a785569 in lucene-solr's branch 
refs/heads/branch_7_7 from Uwe Schindler
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d86d8db ]

SOLR-13409: Disable HTML directory listings in admin interface to prevent 
possible security issues

(cherry picked from commit df27ccf01d9b89149fbba00e81c3eed078e28a95)


> Remove directory listings in Jetty config
> -
>
> Key: SOLR-13409
> URL: https://issues.apache.org/jira/browse/SOLR-13409
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-13409.patch
>
>
> In the shipped Jetty configuration, directory listings are enabled even 
> though the admin interface does not use them. For security reasons they 
> should be disabled.






[jira] [Updated] (SOLR-13409) Remove directory listings in Jetty config

2019-04-25 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-13409?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-13409:
---
Fix Version/s: 7.7.2

> Remove directory listings in Jetty config
> -
>
> Key: SOLR-13409
> URL: https://issues.apache.org/jira/browse/SOLR-13409
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 8.0
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: 7.7.2, 8.1, master (9.0)
>
> Attachments: SOLR-13409.patch
>
>
> In the shipped Jetty configuration, directory listings are enabled even 
> though the admin interface does not use them. For security reasons they 
> should be disabled.






[jira] [Commented] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-25 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825961#comment-16825961
 ] 

ASF subversion and git services commented on SOLR-13081:


Commit efa9d9571f56a0e013e62b993949a5fb66c2cc6f in lucene-solr's branch 
refs/heads/branch_8x from Mikhail Khludnev
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=efa9d95 ]

SOLR-13081: catching solrj exception as well in the negative test


> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch, SOLR-13081.patch
>
>
> As soon as a cloud collection is configured with the route.field property, 
> in-place updates are no longer applied. This happens because 
> AtomicUpdateDocumentMerger skips only the id and version fields and doesn't 
> check the configured route.field.






[jira] [Commented] (SOLR-13126) Multiplicative boost of isn't applied when one of the summed or multiplied queries doesn't match

2019-04-25 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-13126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825954#comment-16825954
 ] 

Jan Høydahl commented on SOLR-13126:


I plan to back-port this to 7.7.2 (branch_7_7).

> Multiplicative boost of isn't applied when one of the summed or multiplied 
> queries doesn't match 
> -
>
> Key: SOLR-13126
> URL: https://issues.apache.org/jira/browse/SOLR-13126
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 7.5.0
> Environment: Reproduced with macOS 10.14.1; a quick test with Windows 
> 10 showed the same result.
>Reporter: Thomas Aglassinger
>Assignee: Alan Woodward
>Priority: Major
> Fix For: 8.0
>
> Attachments: 
> 0001-use-deprecated-classes-to-fix-regression-introduced-.patch, 
> 0002-SOLR-13126-Added-test-case.patch, 2019-02-14_1715.png, SOLR-13126.patch, 
> SOLR-13126.patch, debugQuery.json, image-2019-02-13-16-17-56-272.png, 
> screenshot-1.png, solr_match_neither_nextteil_nor_sony.json, 
> solr_match_neither_nextteil_nor_sony.txt, solr_match_netzteil_and_sony.json, 
> solr_match_netzteil_and_sony.txt, solr_match_netzteil_only.json, 
> solr_match_netzteil_only.txt
>
>
> Under certain circumstances, search results from queries with multiple 
> multiplicative boosts using the Solr functions {{product()}} and {{query()}} 
> have a score that is inconsistent with the one from the debugQuery 
> information. Only the debug score is correct; the actual search results show 
> a wrong score.
> This seems somewhat similar to the behaviour described in 
> https://issues.apache.org/jira/browse/LUCENE-7132, though this issue has been 
> resolved a while ago.
> A little background: we are using Solr as a search platform for the 
> e-commerce framework SAP Hybris. There the shop administrator can create 
> multiplicative boost rules (see below for an example) where a value like 2.0 
> means that an item gets boosted to 200%. This works fine in the demo shop 
> distributed by SAP but breaks in our shop. We encountered the issue when 
> upgrading from Solr 7.2.1 / Hybris 6.7 to Solr 7.5 / Hybris 18.8.3 (which 
> would have been named Hybris 6.8, but the version naming scheme changed).
> We reduced the Solr query generated by Hybris to the relevant parts and could 
> reproduce the issue in the Solr admin without any Hybris connection.
> I attached the JSON result of a test query but here's a description of the 
> parts that seemed most relevant to me.
> The {{responseHeader.params}} reads (slightly rearranged):
> {code:java}
> "q":"{!boost b=$ymb}(+{!lucene v=$yq})",
> "ymb":"product(query({!v=\"name_text_de\\:Netzteil\\^=2.0\"},1),query({!v=\"name_text_de\\:Sony\\^=3.0\"},1))",
> "yq":"*:*",
> "sort":"score desc",
> "debugQuery":"true",
> // Added to keep the output small but probably unrelated to the actual issue
> "fl":"score,id,code_string,name_text_de",
> "fq":"catalogId:\"someProducts\"",
> "rows":"10",
> {code}
> This example boosts the German product name (field {{name_text_de}}) in case 
> it contains certain terms:
>  * "Netzteil" (power supply) is boosted to 200%
>  * "Sony" is boosted to 300%
> Consequently a product containing both terms should be boosted to 600%.
> Also, the query function has the value 1 specified as a default for when the 
> name does not contain the respective term, resulting in a pseudo boost that 
> preserves the score.
> According to the debug information the parser used is the LuceneQParser, 
> which translates this to the following parsed query:
> {quote}FunctionScoreQuery(FunctionScoreQuery(+*:*, scored by 
> boost(product(query((ConstantScore(name_text_de:netzteil))^2.0,def=1.0),query((ConstantScore(name_text_de:sony))^3.0,def=1.0)
> {quote}
> And the translated boost is:
> {quote}org.apache.lucene.queries.function.valuesource.ProductFloatFunction:product(query((ConstantScore(name_text_de:netzteil))^2.0,def=1.0),query((ConstantScore(name_text_de:sony))^3.0,def=1.0))
> {quote}
> Looking at the search result, among others the following products 
> are included (see the JSON comments for an analysis of each result):
> {code:javascript}
>  {
> "id":"someProducts/Online/test711",
> "name_text_de":"Original Sony Vaio Netzteil",
> "code_string":"test711",
> // CORRECT, both "Netzteil" and "Sony" are included in the name
> "score":6.0},
>   {
> "id":"someProducts/Online/taxTestingProductThree",
> "name_text_de":"Steuertestprodukt Zwei",
> "code_string":"taxTestingProductThree",
> // CORRECT, neither "Netzteil" nor "Sony" are included in the name
>

[jira] [Commented] (SOLR-13081) In-Place Update doesn't work when route.field is defined

2019-04-25 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825952#comment-16825952
 ] 

Mikhail Khludnev commented on SOLR-13081:
-

build broken

 https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/462/testReport/ 

org.apache.solr.update.TestInPlaceUpdateWithRouteField.testUpdatingDocValuesWithRouteField

Failing for the past 1 build (Since Unstable#462 )
Took 2.1 sec.
Error Message
No value for :shardName. Unable to identify shard
Stacktrace
org.apache.solr.common.SolrException: No value for :shardName. Unable to 
identify shard
at 
__randomizedtesting.SeedInfo.seed([B86AA14BB9B7A797:4F3291FFA8FAC8E6]:0)
at 
org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:50)
 



> In-Place Update doesn't work when route.field is defined
> 
>
> Key: SOLR-13081
> URL: https://issues.apache.org/jira/browse/SOLR-13081
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: update
>Reporter: Dr Oleg Savrasov
>Assignee: Mikhail Khludnev
>Priority: Major
> Attachments: SOLR-13081.patch, SOLR-13081.patch, SOLR-13081.patch, 
> SOLR-13081.patch, SOLR-13081.patch
>
>
> As soon as a cloud collection is configured with the route.field property, 
> in-place updates are no longer applied. This happens because 
> AtomicUpdateDocumentMerger skips only the id and version fields and doesn't 
> check the configured route.field.






[JENKINS] Lucene-Solr-8.x-Linux (32bit/jdk1.8.0_201) - Build # 462 - Unstable!

2019-04-25 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-8.x-Linux/462/
Java: 32bit/jdk1.8.0_201 -server -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  
org.apache.solr.update.TestInPlaceUpdateWithRouteField.testUpdatingDocValuesWithRouteField

Error Message:
No value for :shardName. Unable to identify shard

Stack Trace:
org.apache.solr.common.SolrException: No value for :shardName. Unable to 
identify shard
at 
__randomizedtesting.SeedInfo.seed([B86AA14BB9B7A797:4F3291FFA8FAC8E6]:0)
at 
org.apache.solr.common.cloud.CompositeIdRouter.sliceHash(CompositeIdRouter.java:50)
at 
org.apache.solr.common.cloud.HashBasedRouter.getTargetSlice(HashBasedRouter.java:38)
at 
org.apache.solr.client.solrj.request.UpdateRequest.getRoutes(UpdateRequest.java:264)
at 
org.apache.solr.client.solrj.request.UpdateRequest.getRoutes(UpdateRequest.java:372)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.createRoutes(CloudSolrClient.java:121)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.directUpdate(BaseCloudSolrClient.java:439)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.sendRequest(BaseCloudSolrClient.java:977)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.requestWithRetryOnStaleState(BaseCloudSolrClient.java:837)
at 
org.apache.solr.client.solrj.impl.BaseCloudSolrClient.request(BaseCloudSolrClient.java:769)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
at 
org.apache.solr.update.TestInPlaceUpdateWithRouteField.testUpdatingDocValuesWithRouteField(TestInPlaceUpdateWithRouteField.java:120)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-5970) Create collection API always has status 0

2019-04-25 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825933#comment-16825933
 ] 

Ishan Chattopadhyaya commented on SOLR-5970:


Hi [~gerlowskija], are you planning to work on this? The new response and your 
current patch look fine to me. This would solve a major annoyance, and I was 
wondering if we could get it into Solr 8.1. In case you don't have time for 
this, should I try?

We can spin up follow-up issues for the remaining work (e.g. somehow 
consolidating the four places where errors/exceptions/status are reported).

> Create collection API always has status 0
> -----------------------------------------
>
> Key: SOLR-5970
> URL: https://issues.apache.org/jira/browse/SOLR-5970
> Project: Solr
>  Issue Type: Bug
>Reporter: Abraham Elmahrek
>Assignee: Jason Gerlowski
>Priority: Major
> Attachments: SOLR-5970-test.patch, SOLR-5970.patch, bad.jar, 
> schema.xml, solrconfig.xml
>
>
> The responses below are from a successful create collection API 
> (https://cwiki.apache.org/confluence/display/solr/Collections+API#CollectionsAPI-CreateormodifyanAliasforaCollection)
>  call and an unsuccessful create collection API call. It seems the 'status' 
> is always 0.
> Success:
> {u'responseHeader': {u'status': 0, u'QTime': 4421}, u'success': {u'': 
> {u'core': u'test1_shard1_replica1', u'responseHeader': {u'status': 0, 
> u'QTime': 3449}}}}
> Failure:
> {u'failure': 
>   {u'': 
> u"org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException:Error 
> CREATEing SolrCore 'test43_shard1_replica1': Unable to create core: 
> test43_shard1_replica1 Caused by: Could not find configName for collection 
> test43 found:[test1]"},
>  u'responseHeader': {u'status': 0, u'QTime': 17149}
> }
> It seems like the status should be 400 or something similar for an 
> unsuccessful attempt?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12248) Grouping in SolrCloud fails if indexed="false" docValues="true" and stored="false"

2019-04-25 Thread Ishan Chattopadhyaya (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825917#comment-16825917
 ] 

Ishan Chattopadhyaya commented on SOLR-12248:
---------------------------------------------

Thanks for the patch, [~munendrasn].
Do you see any downside to using {{fieldType.toObject(schemaField, 
group.groupValue)}} as compared to 
{{schemaField.createFields(group.groupValue.utf8ToString())}}?

> Grouping in SolrCloud fails if indexed="false" docValues="true" and 
> stored="false"
> --
>
> Key: SOLR-12248
> URL: https://issues.apache.org/jira/browse/SOLR-12248
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public) 
>Affects Versions: 6.6.2
>Reporter: Erick Erickson
>Assignee: Ishan Chattopadhyaya
>Priority: Minor
> Attachments: SOLR-12248.patch, SOLR-12248.patch
>
>
> In SolrCloud _only_ (it works in stand-alone mode), a field defined as:
> <field ... indexed="false"  docValues="true"  stored="false"  />
> will fail with the following error:
> java.lang.NullPointerException
> org.apache.solr.schema.BoolField.toExternal(BoolField.java:131)
> org.apache.solr.schema.BoolField.toObject(BoolField.java:142)
> org.apache.solr.schema.BoolField.toObject(BoolField.java:51)
> org.apache.solr.search.grouping.endresulttransformer.GroupedEndResultTransformer.transform(GroupedEndResultTransformer.java:72)
> org.apache.solr.handler.component.QueryComponent.groupedFinishStage(QueryComponent.java:830)
> org.apache.solr.handler.component.QueryComponent.finishStage(QueryComponent.java:793)
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:435)
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> .
> .
> Curiously enough, it succeeds with a field identically defined except for 
> stored="true".
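For reference, the failing and working definitions differ only in the stored flag. The field name and type below are illustrative (the NPE in BoolField suggests a boolean field), not taken from the reporter's schema:

```xml
<!-- fails with the NPE above when grouping in SolrCloud -->
<field name="b_example" type="boolean" indexed="false" docValues="true" stored="false"/>

<!-- identical except stored="true": works -->
<field name="b_example" type="boolean" indexed="false" docValues="true" stored="true"/>
```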



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1831 - Still Unstable

2019-04-25 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1831/

4 tests failed.
FAILED:  
org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey

Error Message:
Address already in use

Stack Trace:
java.net.BindException: Address already in use
	at __randomizedtesting.SeedInfo.seed([115D79AC86D20649:9A7AAA7DC7D4ADCD]:0)
	at java.base/sun.nio.ch.Net.bind0(Native Method)
	at java.base/sun.nio.ch.Net.bind(Net.java:461)
	at java.base/sun.nio.ch.Net.bind(Net.java:453)
	at java.base/sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:227)
	at java.base/sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:80)
	at org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
	at org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:308)
	at org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
	at org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:236)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.eclipse.jetty.server.Server.doStart(Server.java:394)
	at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
	at org.apache.solr.client.solrj.embedded.JettySolrRunner.retryOnPortBindFailure(JettySolrRunner.java:558)
	at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:497)
	at org.apache.solr.client.solrj.embedded.JettySolrRunner.start(JettySolrRunner.java:465)
	at org.apache.solr.cloud.api.collections.ShardSplitTest.testSplitWithChaosMonkey(ShardSplitTest.java:499)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:1082)
	at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:1054)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
	at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
	at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
	at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
	at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
	at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
	at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
	at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
	at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 

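The BindException above comes from the test fixture racing for a fixed port (JettySolrRunner.retryOnPortBindFailure retries the bind). The usual alternative, sketched here in Python rather than Solr's actual Java code, is to ask the OS for an ephemeral port by binding to port 0:

```python
import socket

def pick_free_port() -> int:
    # Binding to port 0 makes the OS assign an unused ephemeral port,
    # sidestepping "java.net.BindException: Address already in use".
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]
```

Note the port can still be grabbed by another process between picking and reuse, which is why harnesses that must pass the port to a separate process keep a retry loop anyway.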
[jira] [Commented] (SOLR-12584) Add basic auth credentials configuration to the Solr exporter for Prometheus/Grafana

2019-04-25 Thread Stefan Billet (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16825886#comment-16825886
 ] 

Stefan Billet commented on SOLR-12584:
--

Hi [~janhoy]. Yes, I successfully tested it with SolrCloud 7.7.1.

> Add basic auth credentials configuration to the Solr exporter for 
> Prometheus/Grafana  
> --
>
> Key: SOLR-12584
> URL: https://issues.apache.org/jira/browse/SOLR-12584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: Authentication, metrics, security
>Affects Versions: 7.3, 7.4
>Reporter: Dwane Hall
>Priority: Minor
>  Labels: authentication, metrics, security
> Attachments: lucene-solr.patch
>
>
> The Solr exporter for Prometheus/Grafana provides a useful visual layer over 
> the Solr metrics API for monitoring the state of a Solr cluster. Currently it 
> cannot be configured and used on a secure Solr cluster with the Basic 
> Authentication plugin enabled: the exporter does not provide a mechanism to 
> configure/pass through basic auth credentials when SolrJ requests information 
> from the metrics API endpoints. This would be a useful addition for Solr 
> users running a secure Solr instance.
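The missing piece is essentially a Basic `Authorization` header on each metrics request. A sketch of constructing one (the user/password values and the example URL are placeholders, not the exporter's actual configuration):

```python
import base64

def basic_auth_header(user: str, password: str) -> dict:
    # RFC 7617: base64("user:password") after the "Basic " prefix.
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# e.g. requests.get("http://localhost:8983/solr/admin/metrics",
#                   headers=basic_auth_header("solr", "SolrRocks"))
```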



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


