[JENKINS] Lucene-Solr-Tests-7.x - Build # 900 - Unstable

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/900/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testSearchRate

Error Message:


Stack Trace:
java.util.ConcurrentModificationException
at 
__randomizedtesting.SeedInfo.seed([99768FC15580C402:C43E91489A46624D]:0)
at java.util.ArrayList$Itr.checkForComodification(ArrayList.java:909)
at java.util.ArrayList$Itr.next(ArrayList.java:859)
at java.util.AbstractCollection.toString(AbstractCollection.java:461)
at java.lang.String.valueOf(String.java:2994)
at java.lang.StringBuilder.append(StringBuilder.java:131)
at 
java.util.concurrent.ConcurrentHashMap.toString(ConcurrentHashMap.java:1321)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration.testSearchRate(TestSimTriggerIntegration.java:1269)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 13843 lines...]

[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631337#comment-16631337
 ] 

Noble Paul commented on SOLR-12798:
---


An ideal solution would be:
* Be able to construct a SolrInputDocument with a binary payload + metadata 
parameters for that doc
* When this is sent to Solr, SolrJ should send the payload + parameters in the 
body
* This ensures that the query string length is always constant
* This also helps with inter-node communication, where documents are sent 
between replicas

I'm not sure we can achieve this without some changes on the server side 
too. Meanwhile, we may need a custom HttpSolrClient implementation that can do 
a multipart request.
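
To make that shape concrete, a minimal SolrJ sketch (the URL and field names 
are placeholders; the binary-payload-plus-per-doc-params part is the proposed 
extension and has no existing SolrJ API):

{code}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class BodyUpdateSketch {
  public static void main(String[] args) throws Exception {
    // Placeholder URL for illustration.
    try (SolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");
      // Metadata carried as document fields already travels in the POST body;
      // the proposal above would let a binary payload plus per-document
      // parameters ride along in the body too (no such SolrJ method exists yet).
      doc.addField("meta_author_s", "someone");
      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      req.process(client);
      client.commit();
    }
  }
}
{code}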

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, no params in url.png, 
> solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests, because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which generate HTTP 
> error code 500 responses.  That should not happen either, in my opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8493) Stop publishing .sha1 files with releases

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631268#comment-16631268
 ] 

ASF subversion and git services commented on LUCENE-8493:
-

Commit 03c9c04353ce1b5ace33fddd5bd99059e63ed507 in lucene-solr's branch 
refs/heads/jira/http2 from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=03c9c04 ]

LUCENE-8493: Stop publishing insecure .sha1 files with releases


> Stop publishing .sha1 files with releases
> -
>
> Key: LUCENE-8493
> URL: https://issues.apache.org/jira/browse/LUCENE-8493
> Project: Lucene - Core
>  Issue Type: Task
>  Components: -tools
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: build, release, security, sha1sum
> Fix For: 7.5.1, 7.6, master (8.0)
>
> Attachments: LUCENE-8493.patch
>
>
> In LUCENE-7935 we added {{.sha512}} checksums to releases and removed 
> {{.md5}} files.
> According to the Release Distribution Policy 
> ([http://www.apache.org/dev/release-distribution#sigs-and-sums]):
> {quote}For every artifact distributed to the public through Apache channels, 
> the PMC
> MUST supply a valid OpenPGP-compatible ASCII-armored detached signature file
> MUST supply at least one checksum file
> SHOULD supply a SHA-256 and/or SHA-512 checksum file
> *SHOULD NOT supply a MD5 or SHA-1 checksum file* (because these are 
> deprecated)
> {quote}
> So this Jira will stop publishing .sha1 files, leaving only the .sha512 files.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12756) Refactor Assign and extract replica placement strategies out of it

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12756?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631269#comment-16631269
 ] 

ASF subversion and git services commented on SOLR-12756:


Commit c587410f99375005c680ece5e24a4dfd40d8d3eb in lucene-solr's branch 
refs/heads/jira/http2 from [~shalinmangar]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c587410 ]

SOLR-12756: Refactor Assign and extract replica placement strategies out of it.

Now, assignment is done with the help of a builder class instead of calling a 
method with a large number of arguments. The number of special cases that had 
to be handled has been cut down as well.
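
For readers unfamiliar with the pattern, a minimal sketch of the shape of such 
a refactor; the class and method names below are illustrative, not the actual 
Assign API:

{code}
// Illustrative only: names do not match Solr's actual Assign classes.
final class AssignRequest {
  final String collection;
  final int numNrtReplicas;

  private AssignRequest(Builder b) {
    this.collection = b.collection;
    this.numNrtReplicas = b.numNrtReplicas;
  }

  static final class Builder {
    private String collection;
    private int numNrtReplicas = 1;

    Builder forCollection(String collection) { this.collection = collection; return this; }
    Builder assignNrtReplicas(int n) { this.numNrtReplicas = n; return this; }
    AssignRequest build() { return new AssignRequest(this); }
  }
}
// Usage: new AssignRequest.Builder().forCollection("c1").assignNrtReplicas(2).build();
// Each optional input becomes a named builder call instead of one more
// positional argument (or special case) on a giant method signature.
{code}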


> Refactor Assign and extract replica placement strategies out of it
> --
>
> Key: SOLR-12756
> URL: https://issues.apache.org/jira/browse/SOLR-12756
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Minor
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12756.patch, SOLR-12756.patch, SOLR-12756.patch, 
> SOLR-12756.patch
>
>
> While working on SOLR-12648, I found the Assign class to be very complex. Many 
> methods have overlapping functionality, differ in side effects, and have 
> non-intuitive arguments. We should clean this up and extract replica 
> placement strategies out of that class.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631272#comment-16631272
 ] 

ASF subversion and git services commented on SOLR-5163:
---

Commit 9481c1f623b77214a2a14ad18efc59fb406ed765 in lucene-solr's branch 
refs/heads/jira/http2 from [~Charles Sanders]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9481c1f ]

SOLR-5163: edismax now throws an exception when qf refers to a nonexistent field


> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Assignee: David Smiley
>Priority: Major
>  Labels: newdev
> Fix For: master (8.0)
>
> Attachments: SOLR-5163.patch, SOLR-5163.patch, SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't, edismax will treat the AND as a term 
> rather than as an operator.
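
For reference, the same request expressed through SolrJ; the collection URL 
and field names are placeholders:

{code}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class EdismaxQfCheck {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/techproducts").build()) {
      SolrQuery q = new SolrQuery("foo AND bar");
      q.set("defType", "edismax");
      q.set("qf", "field1 field2"); // field2 is missing from the schema
      // Before the fix, edismax silently ignored the unknown field and could
      // parse "AND" as a plain term; after the fix, this query() throws.
      client.query(q);
    }
  }
}
{code}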



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12805) Store previous term (generation) of replica when start recovery process

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631267#comment-16631267
 ] 

ASF subversion and git services commented on SOLR-12805:


Commit 667b8299e69755abfef89b3beb44cacdd292d479 in lucene-solr's branch 
refs/heads/jira/http2 from [~caomanhdat]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=667b829 ]

SOLR-12805: Store previous term (generation) of replica when start recovery 
process


> Store previous term (generation) of replica when start recovery process
> ---
>
> Key: SOLR-12805
> URL: https://issues.apache.org/jira/browse/SOLR-12805
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
> Fix For: master (8.0)
>
>
> Right now the current implementation of ZkShardTerms.startRecovering(core2) 
> goes from \{"core1" : 4, "core2" : 2} to \{"core1" : 4, "core2" : 4, 
> "core2_recovering" : 4}. If we change the behavior a little bit, to \{"core1" 
> : 4, "core2" : 4, "core2_recovering" : 2}, we will keep more information about 
> the current generation of core2's index, which is very useful information 
> for leader election.
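
In plain Java maps (standing in for the ZK-backed terms map; the real 
ZkShardTerms logic is more involved), the two schemes look like this:

{code}
import java.util.HashMap;
import java.util.Map;

public class TermsSketch {
  public static void main(String[] args) {
    // Current behavior: the _recovering entry copies the *new* term.
    Map<String, Integer> current = new HashMap<>();
    current.put("core1", 4);
    current.put("core2", 4);            // bumped to the leader's term
    current.put("core2_recovering", 4); // core2's old term (2) is lost

    // Proposed behavior: remember the term core2 had before recovery started.
    Map<String, Integer> proposed = new HashMap<>();
    proposed.put("core1", 4);
    proposed.put("core2_recovering", 2); // the pre-recovery term is preserved
    proposed.put("core2", 4);
    // Leader election can now see how far behind core2's index was.
    System.out.println(current + " vs " + proposed);
  }
}
{code}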



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12709) Simulate a 1 bln docs scaling-up scenario

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631270#comment-16631270
 ] 

ASF subversion and git services commented on SOLR-12709:


Commit 2369c8963412773592098475bdd8af1da81e3ac5 in lucene-solr's branch 
refs/heads/jira/http2 from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2369c89 ]

SOLR-12709: Add TestSimExtremeIndexing for testing simulated large indexing 
jobs.
Several important improvements to the simulator.


> Simulate a 1 bln docs scaling-up scenario
> -
>
> Key: SOLR-12709
> URL: https://issues.apache.org/jira/browse/SOLR-12709
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12652) SolrMetricManager.overridableRegistryName should be removed; it doesn't work

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631271#comment-16631271
 ] 

ASF subversion and git services commented on SOLR-12652:


Commit 044bc2a48522cb9d1e112aa3be4f2d7e6c2ed498 in lucene-solr's branch 
refs/heads/jira/http2 from [~psomogyi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=044bc2a ]

SOLR-12652: Remove SolrMetricManager.overridableRegistryName()


> SolrMetricManager.overridableRegistryName should be removed; it doesn't work
> 
>
> Key: SOLR-12652
> URL: https://issues.apache.org/jira/browse/SOLR-12652
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.1
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12652.patch, SOLR-12652.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The {{SolrMetricManager.overridableRegistryName()}} method is a great idea, 
> but unfortunately in practice I've found it doesn't really work; it seems 
> fundamentally flawed.  +I wish it could work+.  The main issue, I think, is 
> that the callers of SMM.registerGauge/registerMetric assume they can place a 
> gauge/metric and have it be the only one there (force==true).  But it won't 
> be if the registry is shared.  
> Another problem is in at least one of the reporters: 
> {{JmxMetricsReporter.JmxListener#registerMBean}} races to remove an 
> already-registered MBean, but while it is removing it, some other core 
> working on the same name may have already removed it concurrently.  This 
> results in {{javax.management.InstanceNotFoundException}} logged as a 
> warning; nothing serious.  But I suspect there is a conceptual problem: 
> which MBean should "win"?  Shrug.
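
A paraphrase of the racy remove-then-register pattern described above, using 
the standard JMX API; this is a sketch, not Solr's actual reporter code:

{code}
import javax.management.InstanceNotFoundException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxRaceSketch {
  // Two cores sharing one registry name both try to replace the same MBean.
  static void replace(MBeanServer server, ObjectName name, Object mbean) throws Exception {
    if (server.isRegistered(name)) {
      try {
        server.unregisterMBean(name); // another core may win this race...
      } catch (InstanceNotFoundException alreadyGone) {
        // ...in which case the remove fails and is logged as a warning.
      }
    }
    // And registerMBean itself can still collide with a concurrent register,
    // which is the "which MBean should win?" question above.
    server.registerMBean(mbean, name);
  }
}
{code}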



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12782) UninvertingReader can be avoided if there are no fields to uninvert

2018-09-27 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631223#comment-16631223
 ] 

Lucene/Solr QA commented on SOLR-12782:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 18s{color} 
| {color:red} core in the patch failed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m  9s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | solr.cloud.autoscaling.sim.TestSimExtremeIndexing |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12782 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12941594/SOLR-12782.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 4.4.0-130-generic #156~14.04.1-Ubuntu SMP Thu 
Jun 14 13:51:47 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / 9481c1f |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on July 24 2018 |
| Default Java | 1.8.0_172 |
| unit | 
https://builds.apache.org/job/PreCommit-SOLR-Build/192/artifact/out/patch-unit-solr_core.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/192/testReport/ |
| modules | C: solr/core U: solr/core |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/192/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> UninvertingReader can be avoided if there are no fields to uninvert
> ---
>
> Key: SOLR-12782
> URL: https://issues.apache.org/jira/browse/SOLR-12782
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12782.patch, SOLR-12782.patch, SOLR-12782.patch, 
> SOLR-12782.patch
>
>
> Solr uses UninvertingReader to expose DocValues on fields that don't have 
> them, but do have indexed data that can be uninverted via the FieldCache. It 
> has an internal constructor that takes the input LeafReader and a mapping of 
> field name to UninvertingReader.Type. It builds a new FieldInfos whose fields 
> reflect the DocValues. There are two things I'd like to improve here:
>  # make this constructor private and instead insist you use a new wrap() 
> method that has the opportunity to return the input if there is nothing to 
> do. Effectively the logic today would move into this wrap method; the 
> current constructor would be dead simple and would take the FieldInfos.
>  # Do _not_ create a new {{FieldInfo}} object if the existing field is 
> suitable (its DocValuesType can stay the same).  The savings here can really 
> add up on machines with many indexes & segments.  This is in fact what 
> motivated the patch.
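
A minimal sketch of the first point above; the package follows the 
org.apache.solr.uninverting location of this era, and the simplified signature 
and short-circuit are illustrative, not the actual patch:

{code}
import java.util.Map;
import org.apache.lucene.index.FieldInfo;
import org.apache.lucene.index.LeafReader;
import org.apache.solr.uninverting.UninvertingReader;

public class WrapSketch {
  // Return the input reader untouched when no mapped field needs uninverting,
  // instead of always allocating a wrapper and a new FieldInfos.
  static LeafReader wrap(LeafReader in, Map<String, UninvertingReader.Type> mapping) {
    for (FieldInfo fi : in.getFieldInfos()) {
      if (mapping.containsKey(fi.name)) { // the real check would also consult DocValuesType
        // Something to uninvert: the real patch builds the wrapper here,
        // reusing FieldInfo objects whose DocValuesType is unchanged.
        return in; // placeholder for the wrapper construction
      }
    }
    return in; // nothing to do: no wrapper, no new FieldInfos
  }
}
{code}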



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1550 - Unstable

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1550/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/169/consoleText

[repro] Revision: faad36d24358b283bf99109edbdbf6dfb95adf11

[repro] Repro line:  ant test  -Dtestcase=CollectionsAPISolrJTest 
-Dtests.method=testCreateCollectionWithPropertyParam 
-Dtests.seed=423C7D235A116B31 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.badapples=true -Dtests.locale=ca-ES -Dtests.timezone=Atlantic/St_Helena 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSimLargeCluster 
-Dtests.method=testSearchRate -Dtests.seed=423C7D235A116B31 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=ga 
-Dtests.timezone=America/Araguaina -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testNodeMarkersRegistration -Dtests.seed=423C7D235A116B31 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=da-DK -Dtests.timezone=Poland -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestCollectionsAPIViaSolrCloudCluster 
-Dtests.method=testCollectionCreateSearchDelete -Dtests.seed=423C7D235A116B31 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sq 
-Dtests.timezone=WET -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
9481c1f623b77214a2a14ad18efc59fb406ed765
[repro] git fetch
[repro] git checkout faad36d24358b283bf99109edbdbf6dfb95adf11

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestCollectionsAPIViaSolrCloudCluster
[repro]   CollectionsAPISolrJTest
[repro]   TestSimTriggerIntegration
[repro]   TestSimLargeCluster
[repro] ant compile-test

[...truncated 3437 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.TestCollectionsAPIViaSolrCloudCluster|*.CollectionsAPISolrJTest|*.TestSimTriggerIntegration|*.TestSimLargeCluster"
 -Dtests.showOutput=onerror  -Dtests.seed=423C7D235A116B31 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=sq -Dtests.timezone=WET 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 124312 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.CollectionsAPISolrJTest
[repro]   0/5 failed: 
org.apache.solr.cloud.api.collections.TestCollectionsAPIViaSolrCloudCluster
[repro]   1/5 failed: org.apache.solr.cloud.autoscaling.sim.TestSimLargeCluster
[repro]   3/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro] git checkout 9481c1f623b77214a2a14ad18efc59fb406ed765

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-12502) Unify and reduce the number of SolrClient#add methods

2018-09-27 Thread Anshum Gupta (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631219#comment-16631219
 ] 

Anshum Gupta commented on SOLR-12502:
-

My thoughts echo what [~tomasflobbe] thinks, and I feel we should not keep 
our APIs in flux, something we have done in the past. I am not opposed to 
changes, but if we do make them I would want to be sure that we 1. ensure 
back-compat, and 2. make sure the path we're trying to move onto is a 
long-term thing.

> Unify and reduce the number of SolrClient#add methods
> -
>
> Key: SOLR-12502
> URL: https://issues.apache.org/jira/browse/SOLR-12502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Varun Thacker
>Priority: Major
>
> On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods, which 
> can be very confusing to new users.
> Also, the UpdateRequest class is public, so if a user is looking for a custom 
> combination they can always build one by writing a couple of lines of code.
> For 8.0, which might not be very far away, we can improve this situation.
>  
> Quoting David from SOLR-11654
> {quote}Anyway, I guess we'll leave SolrClient alone.  Thanks for your input, 
> Varun.  Yes, it's a shame there are so many darned overloaded methods... I 
> think it's in large part due to the optional "collection" parameter, which 
> practically doubles the methods!  I've been bitten several times writing 
> SolrJ code that doesn't use the right overloaded version (forgot to specify 
> collection).  I 
> think for 8.0, *either* all SolrClient methods without "collection" can be 
> removed in favor of insisting you use the overloaded variant accepting a 
> collection, *or* SolrClient itself could be locked down to one collection at 
> the time you create it *or* have a CollectionSolrClient interface retrieved 
> from a SolrClient.withCollection(collection) in which all the operations that 
> require a SolrClient are on that interface and not SolrClient proper.  
> Several ideas to consider.
> {quote}
>  
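
A sketch of the third option from the quote; {{CollectionSolrClient}} and 
{{withCollection}} are hypothetical, not existing SolrJ types:

{code}
import java.io.IOException;
import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.common.SolrInputDocument;

// Hypothetical interface: bind the collection once at lookup time, so the
// per-call "collection" overloads on SolrClient disappear.
interface CollectionSolrClient {
  void add(SolrInputDocument doc) throws SolrServerException, IOException;
  void commit() throws SolrServerException, IOException;
}
// Hypothetical usage: CollectionSolrClient films = client.withCollection("films");
{code}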



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-master - Build # 2826 - Still Unstable

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2826/

3 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:41556/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:39772/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:41556/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:39772/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([C91757E36D30553F:63DA8411DAE380EF]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Michael Schumann (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631211#comment-16631211
 ] 

Michael Schumann commented on SOLR-12798:
-

I wanted to chime in here because we have run into the problem of the body of 
large POST requests getting encoded in the URL in a different scenario, and it 
would be nice if there were a solution for this. To work around the problem we 
have had to copy and modify Solr classes.

Our use case is not a common one: we sometimes make query requests to a custom 
handler with a very large number of integer values encoded in a 
RoaringBitMap. On the client side this is not a big problem: we created a 
subclass of {{HttpSolrClient.Builder}} that sets {{UseMultiPartPost}} to true. 
This is passed into the {{LBHttpSolrClient}}, which in turn is passed into 
{{CloudSolrClient}}.

The problem that was harder to solve was in the {{HttpShardHandler}} on the 
Solr nodes, which ends up encoding the parameters in the URL. The workaround 
we came up with was to duplicate and modify {{HttpShardHandler}} so we could 
again set {{UseMultiPartPost}} to true. We also had to subclass 
{{HttpShardHandlerFactory}} and {{HttpSolrClient.Builder}}.

It would be great if there were a way to force multipart requests both on the 
SolrJ client side and in the requests made between the nodes.
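
The general shape of such a workaround might look as follows; whether 
{{HttpSolrClient.Builder}} exposes {{useMultiPartPost}} to subclasses varies 
across SolrJ 7.x releases, so the commented-out setter is an assumption:

{code}
import org.apache.solr.client.solrj.impl.HttpSolrClient;

// Rough shape of the workaround described above, not a drop-in implementation.
public class MultipartBuilder extends HttpSolrClient.Builder {
  public MultipartBuilder(String baseSolrUrl) {
    super(baseSolrUrl);
  }

  @Override
  public HttpSolrClient build() {
    HttpSolrClient client = super.build();
    // ASSUMPTION: a hook to flip multipart exists; older SolrJ releases had
    // a (deprecated) HttpSolrClient.setUseMultiPartPost(true) for this.
    // client.setUseMultiPartPost(true);
    return client;
  }
}
{code}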

 

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, no params in url.png, 
> solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests, because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which generate HTTP 
> error code 500 responses.  That should not happen either, in my opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22935 - Still Unstable!

2018-09-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22935/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments

Error Message:
expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([4AC4241F31AC3CB4:8F0D32EDE9CC6603]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.lucene.index.TestIndexWriter.testSoftUpdateDocuments(TestIndexWriter.java:3148)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 624 lines...]
   [junit4] Suite: org.apache.lucene.index.TestIndexWriter
   [junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestIndexWriter 
-Dtests.method=testSoftUpdateDocuments -Dtests.seed=4AC4241F31AC3CB4 
-Dtests.multiplier=3 -Dtests.slow=true -Dtests.locale=ak-GH 
-Dtests.timezone=Pacific/Guam -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1
   [junit4] FAILURE 0.04s J2 | TestIndexWriter.testSoftUpdateDocuments <<<
   [junit4]> Throwable #1: 

[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 855 - Unstable!

2018-09-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/855/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica

Error Message:
Timeout waiting for collection to become active Live Nodes: 
[127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr] Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/17)={   
"replicationFactor":"1",   "pullReplicas":"0",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"1",   "tlogReplicas":"0",   
"autoCreated":"true",   "policy":"c1",   "shards":{"shard1":{   
"replicas":{"core_node1":{   
"core":"testCreateCollectionAddReplica_shard1_replica_n1",   
"SEARCHER.searcher.maxDoc":0,   "SEARCHER.searcher.deletedDocs":0,  
 "INDEX.sizeInBytes":10240,   "node_name":"127.0.0.1:10003_solr",   
"state":"active",   "type":"NRT",   
"INDEX.sizeInGB":9.5367431640625E-6,   "SEARCHER.searcher.numDocs":0}}, 
  "range":"8000-7fff",   "state":"active"}}}

Stack Trace:
java.lang.AssertionError: Timeout waiting for collection to become active
Live Nodes: [127.0.0.1:10001_solr, 127.0.0.1:10004_solr, 127.0.0.1:1_solr, 
127.0.0.1:10002_solr, 127.0.0.1:10003_solr]
Last available state: 
DocCollection(testCreateCollectionAddReplica//clusterstate.json/17)={
  "replicationFactor":"1",
  "pullReplicas":"0",
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"1",
  "tlogReplicas":"0",
  "autoCreated":"true",
  "policy":"c1",
  "shards":{"shard1":{
  "replicas":{"core_node1":{
  "core":"testCreateCollectionAddReplica_shard1_replica_n1",
  "SEARCHER.searcher.maxDoc":0,
  "SEARCHER.searcher.deletedDocs":0,
  "INDEX.sizeInBytes":10240,
  "node_name":"127.0.0.1:10003_solr",
  "state":"active",
  "type":"NRT",
  "INDEX.sizeInGB":9.5367431640625E-6,
  "SEARCHER.searcher.numDocs":0}},
  "range":"8000-7fff",
  "state":"active"}}}
at 
__randomizedtesting.SeedInfo.seed([BC4075AAAE57EE35:3C601084BF140693]:0)
at 
org.apache.solr.cloud.CloudTestUtils.waitForState(CloudTestUtils.java:70)
at 
org.apache.solr.cloud.autoscaling.sim.TestSimPolicyCloud.testCreateCollectionAddReplica(TestSimPolicyCloud.java:123)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:110)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 

[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631149#comment-16631149
 ] 

Karl Wright commented on SOLR-12798:


[~janhoy]:

{quote}
That would be for case 1), where you don't do Tika stuff on the MCF side but 
want Solr to handle the binary stream. In this case there should be no problem 
with huge metadata request params. And I agree that SolrJ should support this 
case (ContentStreamUpdateRequest?).
{quote}

Ok.  At the moment that sort of request seems to be transmitted with standard 
POST with metadata stuffed into the URL.  So a fix is needed for that.

{quote}
I got confused by your other use case where you parse the file with Tika on the 
MCF side and still send the text to /extract
{quote}

While Julien has a custom Solr handler, that's not what we typically do; we 
recommend that already-Tika-extracted content and metadata be sent to the 
/update handler.  In that case, we build a SolrInputDocument from the content 
stream and add it into an UpdateRequest.  This mode of usage also seems to use 
standard POST (or even PUT), and it puts all the metadata parameters on the 
URL.  Do you want to support the case where the metadata parameters are 
sizable enough that the URL exceeds 8192 bytes?
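
For concreteness, the /update flow described above in minimal SolrJ (URL, 
field, and param names are placeholders); per this issue, request-level params 
travel on the URL in 7.4, which is exactly where the size limit bites:

{code}
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class UpdatePathSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");
      doc.addField("content_txt", "already-extracted text");
      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      // Request-level params like this one end up appended to the URL by the
      // 7.4 RequestWriter path; many or large metadata params can push the
      // URL past the 8192-byte limit mentioned above.
      req.setParam("update.chain", "my-chain");
      req.process(client);
    }
  }
}
{code}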






> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, no params in url.png, 
> solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests, because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which generate HTTP 
> error code 500 responses.  That should not happen either, in my opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Julien Massiera (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631127#comment-16631127
 ] 

Julien Massiera commented on SOLR-12798:


[~janhoy],

Your proposal could resolve the long-URL problem, but how would you create a 
Solr document (XML, JSON, or CSV, because if I am not wrong those are the only 
three formats that the update handler of Solr can manage) based on some 
metadata and a content file (which in my case is pure text) without having to 
read the content file entirely in order to inject it into the Solr document? 
I think it will have a huge performance impact when one has to crawl millions 
of documents, if not billions. 
The Solr output connector of MCF currently just constructs a simple POST 
request with document metadata as parameters and the content file as a stream. 
Your solution would add a significant step before sending each document. Am I 
wrong? 

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, no params in url.png, 
> solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests, because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which generate HTTP 
> error code 500 responses.  That should not happen either, in my opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16631107#comment-16631107
 ] 

Jan Høydahl commented on SOLR-12798:


{quote}How do you suggest we handle binary data that is meant for SolrCell?
{quote}
That would be for case 1), where you don't do Tika stuff on the MCF side but 
want Solr to handle the binary stream. In this case there should be no problem 
with huge metadata request params. And I agree that SolrJ should support this 
case ({{ContentStreamUpdateRequest}}?). I got confused by your other use case, 
where you parse the file with Tika on the MCF side and still send the text to 
/extract.

As I understand it, this Jira issue is really mainly about the classic use 
case where you do NOT invoke Tika on the client side but stream binary content 
to SolrCell while still needing some URL parameters, and doing this in SolrJ 
is somehow broken. In that case there will NOT be huge metadata to pass as URL 
parameters, right?
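
For concreteness, the classic case-1 SolrCell streaming call via 
{{ContentStreamUpdateRequest}} (paths and literals are placeholders):

{code}
import java.io.File;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class ExtractSketch {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
             new HttpSolrClient.Builder("http://localhost:8983/solr/collection1").build()) {
      // Stream the raw binary to SolrCell; only a handful of small URL params.
      ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
      req.addFile(new File("document.pdf"), "application/pdf");
      req.setParam("literal.id", "doc-1");
      req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
      // Per this issue, 7.x routes this through the RequestWriter and the
      // multipart path is bypassed.
      req.process(client);
    }
  }
}
{code}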

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, SOLR-12798-reproducer.patch, no params in url.png, 
> solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests, because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which generate HTTP 
> error code 500 responses.  That should not happen either, in my opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-BadApples-Tests-master - Build # 164 - Still Unstable

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/164/

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.AutoAddReplicasPlanActionTest.testSimple

Error Message:
IOException occured when talking to server at: https://127.0.0.1:34819/solr

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: https://127.0.0.1:34819/solr
at 
__randomizedtesting.SeedInfo.seed([A1CA53D588AA5369:9979772BAF5987B8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:657)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.autoscaling.AutoAddReplicasPlanActionTest.testSimple(AutoAddReplicasPlanActionTest.java:111)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
   

[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631084#comment-16631084
 ] 

Karl Wright commented on SOLR-12798:


[~janhoy], if you didn't mean that the metadata and content should be sent in 
the content body, then I'm completely missing what your suggestion is.

{quote}
My cURL examples were just to discuss what "metadata" might mean in this context.
{quote}

Repositories that are crawled by ManifoldCF have documents that are represented 
as follows:
- A long binary content stream
- N pairs of name/value data, called metadata, which is fielded data associated 
with the document

If the metadata is extracted from the content stream in a ManifoldCF pipeline, 
it's done via Tika, which converts the binary content stream to a simple text 
stream and also supplies more metadata generated as a result of the 
extraction.  In other words, your JSON example is not like anything we do at 
all at this time.

If you want this translated into cURL, you can do it one of two ways:
(1) Put the metadata onto the URL as & parameters, e.g. 
name1=value1&name2=value2 etc., or
(2) Send the metadata as sections in a multipart post.  This too can be set up 
in cURL if you want me to propose an example.  Each section in a multipart post 
has a name, and you can thus transmit a section for every metadata name/value 
pair, as well as one for the content part (which has its own name that is in 
fact used by SolrCell for metadata of its own).

Hope this helps.
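
To make option (2) concrete, here is a hedged sketch of such a multipart post 
using Apache HttpClient's httpmime support; the endpoint, part names, and file 
are invented for illustration. Each metadata name/value pair becomes its own 
named text part, and the binary content is a final named part:
{code:java}
import java.io.File;

import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class MultipartSketch {
  public static void main(String[] args) throws Exception {
    MultipartEntityBuilder builder = MultipartEntityBuilder.create();
    builder.addTextBody("author", "George");   // one part per metadata pair
    builder.addTextBody("title", "Hello");
    builder.addBinaryBody("myfile", new File("document.pdf"),
        ContentType.create("application/pdf"), "document.pdf"); // content part
    HttpPost post = new HttpPost("http://localhost:8983/solr/mycollection/update/extract");
    post.setEntity(builder.build());
    try (CloseableHttpClient client = HttpClients.createDefault()) {
      client.execute(post).close();
    }
  }
}
{code}
ManifoldCF's actual posts go through SolrJ, of course; this only illustrates 
the wire format under discussion.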








[jira] [Comment Edited] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631059#comment-16631059
 ] 

Karl Wright edited comment on SOLR-12798 at 9/27/18 9:31 PM:
-

[~janhoy], so your suggestion is to use JSON format for the body, and put the 
metadata into that.  How do you suggest we handle binary data that is meant for 
SolrCell?  Encoding the binary in a JSON document is possible, but in practice 
this is quite verbose, yielding 3 or 4 bytes of output for every byte of 
input.  Is that nevertheless your official suggestion?

Also, how do you force SolrJ to transmit the right mime type to Solr, as well 
as the document name field (which SolrCell cares about), if you use JSON 
encoding?  I assume you have to signal this somehow.  The code seems to get the 
mime type from the Request, but it's not set anywhere by the user, so I presume 
it is either set by default or there is some way to set it?



was (Author: kwri...@metacarta.com):
[~janhoy], so your suggestion is to use JSON format for the body, and put the 
metadata into that.  How do you suggest we handle binary data that is meant for 
SolrCell?  Encoding the binary in a JSON document is possible, but in practice 
this is quite verbose, yielding 3 or 4 bytes of output for every byte of 
input.  Is that nevertheless your official suggestion?








[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631066#comment-16631066
 ] 

Jan Høydahl commented on SOLR-12798:


{quote}so your suggestion is to use JSON format
{quote}
Not at all. My cURL examples were just to discuss what "metadata" might mean in 
this context. In a pure type-2) case, where Tika runs in MCF, one would 
construct documents with all metadata as fields in those documents. So I still 
don't understand why/how you'd get those long URLs at all in this scenario, 
since all the content goes into the streamed body. But I have not tested this 
streaming style of SolrJ usage myself; I have just built in-memory 
SolrInputDocuments as usual, and I understand that you want to be memory 
efficient here and stream those docs as far as possible.
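
A minimal sketch of that type-2) shape, with invented field names: the Tika 
output and all metadata travel as ordinary document fields in the update body, 
so nothing has to go on the URL:
{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.common.SolrInputDocument;

public class FieldsSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "doc-1");   // metadata as plain fields
      doc.addField("author", "George");
      doc.addField("title", "Hello");
      doc.addField("content", "text extracted by Tika on the MCF side");
      new UpdateRequest().add(doc).process(client, "foo"); // body carries it all
    }
  }
}
{code}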







[jira] [Commented] (SOLR-12761) Be able to configure “maxExpansions” for FuzzyQuery

2018-09-27 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631061#comment-16631061
 ] 

David Smiley commented on SOLR-12761:
-

I can see that point of view.  Note that solrconfig.xml changes have more 
complexity overall, due to how they are changed via managed APIs and perhaps 
other considerations.  And it's yet another thing to be documented if both are 
supported.  Anyway, patches welcome!

> Be able to configure “maxExpansions” for FuzzyQuery
> ---
>
> Key: SOLR-12761
> URL: https://issues.apache.org/jira/browse/SOLR-12761
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: query parsers
>Affects Versions: 7.3
>Reporter: Manuel Gübeli
>Priority: Minor
>
> We had an issue where we reached the expansion limit of the FuzzyQuery.
> Situation:
>  * Query «meier~» found «Meier»
>  * Query «mazer~» found «Meier»
>  * Query «maxer~» found «Meier»
>  * Query «mayer~» did *NOT* find «Meier»
> The parameter “maxBooleanClauses” does not help in this situation, since the 
> FuzzyQuery “maxExpansions” is never set in Solr and therefore the default 
> value of 50 is used. Details: “SolrQueryParserBase” calls the constructor new 
> FuzzyQuery(Term term, int maxEdits, int prefixLength), and therefore 
> FuzzyQuery always runs with the default values (defaultMaxExpansions = 50 and 
> defaultTranspositions = true).
> Suggestion: expose the FuzzyQuery parameters in solrconfig.xml, e.g.:
>  <maxBooleanClauses>1024</maxBooleanClauses>
> Addition would be:
>  <prefixLength>0</prefixLength>
>  <maxExpansions>50</maxExpansions>
>  <transpositions>true</transpositions>
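
For context, Lucene already exposes these knobs on its long-form constructor; 
a Solr config option would only need to forward them. A hedged sketch, with a 
hypothetical field name and the values from the report:
{code:java}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.FuzzyQuery;

public class FuzzySketch {
  public static void main(String[] args) {
    // 2 edits and prefix length 0 mirror the defaults; maxExpansions is
    // raised from the default 50 to 1024, and transpositions stays true.
    FuzzyQuery q = new FuzzyQuery(new Term("name", "mayer"), 2, 0, 1024, true);
    System.out.println(q);
  }
}
{code}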






[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631059#comment-16631059
 ] 

Karl Wright commented on SOLR-12798:


[~janhoy], so your suggestion is to use JSON format for the body, and put the 
metadata into that.  How do you suggest we handle binary data that is meant for 
SolrCell?  Encoding the binary in a JSON document is possible, but in practice 
this is quite verbose, yielding 3 or 4 bytes of output for every byte of 
input.  Is that nevertheless your official suggestion?








[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631057#comment-16631057
 ] 

Karl Wright commented on SOLR-12798:


[~mkhludnev], your walkthrough of the code is fine, but (a) when we use 
ContentStreamUpdateRequest in the manner you describe against the 
update/extract handler, we still wind up going through the contentWriter clause 
above where you stop, and (b) when we use UpdateRequest in the manner you 
describe, we also go through that same path.  In fact, I could find no way to 
send the content through any other path with the code as it exists in master 
right now, because in our usage there's always a contentWriter, and the check 
for its presence excludes everything else that happens after it.  So I don't 
understand where the disconnect is.  Perhaps if you attach the exact code you 
are testing, we can resolve this.
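
A toy model of the dispatch order being described here; this paraphrases the 
reported behavior and is not the HttpSolrClient source. The point is that a 
non-null content writer short-circuits every later branch, including multipart:
{code:java}
import java.util.Collection;

public class DispatchSketch {
  /** Paraphrase of the reported branch order; a non-null contentWriter wins. */
  static String chooseBody(Object contentWriter, Collection<?> streams) {
    if (contentWriter != null) {
      return "single-stream POST from the ContentWriter"; // always taken in MCF's usage
    }
    if (streams != null && !streams.isEmpty()) {
      return "multipart POST with params and streams as parts"; // never reached
    }
    return "URL-encoded POST";
  }

  public static void main(String[] args) {
    System.out.println(chooseBody(new Object(), null)); // MCF: ContentWriter present
  }
}
{code}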









[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631041#comment-16631041
 ] 

Mikhail Khludnev commented on SOLR-12798:
-

Karl, fwiw {{SolrExampleTests.testMultiContentStreamRequest()}} bypasses the 
code path you pointed me at. I still don't fully understand, but why not pass 
everything it needs via {{ContentStreamUpdateRequest.addFile()}} and 
{{.setParam()}} instead of {{ContentWriter}}? I've checked that long 
{{wparams}} are encoded and passed as a separate part, keeping the URL short. 
 !no params in url.png!







[jira] [Updated] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Mikhail Khludnev (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Khludnev updated SOLR-12798:

Attachment: no params in url.png







[JENKINS] Lucene-Solr-repro - Build # 1547 - Unstable

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1547/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1650/consoleText

[repro] Revision: a6d39ba859eb81c9359ff9ae1f1683cfd70169b3

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=CdcrBidirectionalTest 
-Dtests.method=testBiDir -Dtests.seed=BE088EB3B0980532 -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-DZ -Dtests.timezone=America/Jujuy -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testFailedMove -Dtests.seed=BE088EB3B0980532 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=nl -Dtests.timezone=Canada/Atlantic -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
9481c1f623b77214a2a14ad18efc59fb406ed765
[repro] git fetch
[repro] git checkout a6d39ba859eb81c9359ff9ae1f1683cfd70169b3

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   MoveReplicaHDFSTest
[repro]   CdcrBidirectionalTest
[repro] ant compile-test

[...truncated 3424 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.MoveReplicaHDFSTest|*.CdcrBidirectionalTest" 
-Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=BE088EB3B0980532 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=nl -Dtests.timezone=Canada/Atlantic -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 32751 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   1/5 failed: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
[repro]   2/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro] git checkout 9481c1f623b77214a2a14ad18efc59fb406ed765

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Commented] (SOLR-12502) Unify and reduce the number of SolrClient#add methods

2018-09-27 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16631003#comment-16631003
 ] 

David Smiley commented on SOLR-12502:
-

{quote}If we deprecated all the methods that don't take a collection, and 
interpret a "null" value in that parameter in the same way as the removed 
method, that would get rid of half the methods.
{quote}
Perhaps that's the simplest thing, and it mostly addresses the pain point of 
accidentally forgetting to specify the collection.  It's still possible, but at 
least "null" is explicit.

My idea of a SolrClient.updateReq() builder is still valid; perhaps as a 
separate issue.
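
For illustration, the collection-first overloads that would survive such a 
deprecation already exist; the collection name here is hypothetical:
{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ExplicitCollectionSketch {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      client.add("techproducts", doc); // collection is explicit: nothing to forget
      client.commit("techproducts");
    }
  }
}
{code}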

> Unify and reduce the number of SolrClient#add methods
> -
>
> Key: SOLR-12502
> URL: https://issues.apache.org/jira/browse/SOLR-12502
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Reporter: Varun Thacker
>Priority: Major
>
> On SOLR-11654 we noticed that SolrClient#add has 10 overloaded methods which 
> can be very confusing to new users.
> Also, the UpdateRequest class is public, which means that if a user needs a 
> custom combination they can always build it themselves with a couple of 
> lines of code.
> For 8.0, which might not be very far away, we can improve this situation.
>  
> Quoting David from SOLR-11654
> {quote}Anyway, I guess we'll leave SolrClient alone.  Thanks for your input 
> Varun.  Yes it's a shame there are so many darned overloaded methods... I 
> think it's in large part due to the optional "collection" parameter, which 
> practically doubles the methods!  I've been bitten several times writing 
> SolrJ code that doesn't use the right overloaded version (forgot to specify 
> collection).  I think for 8.0, *either* all SolrClient methods without 
> "collection" can be removed in favor of insisting you use the overloaded 
> variant accepting a collection, *or* SolrClient itself could be locked down 
> to one collection at the time you create it, *or* have a CollectionSolrClient 
> interface retrieved from SolrClient.withCollection(collection), in which all 
> the operations that require a SolrClient are on that interface and not 
> SolrClient proper.  Several ideas to consider.
> {quote}
>  






[JENKINS] Lucene-Solr-BadApples-Tests-7.x - Build # 169 - Still Unstable

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-7.x/169/

4 tests failed.
FAILED:  
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateCollectionWithPropertyParam

Error Message:
Could not load collection from ZK: solrj_test_core_props

Stack Trace:
org.apache.solr.common.SolrException: Could not load collection from ZK: 
solrj_test_core_props
at 
__randomizedtesting.SeedInfo.seed([423C7D235A116B31:44A70FD2DECFAB68]:0)
at 
org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1316)
at 
org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:732)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:148)
at 
org.apache.solr.common.cloud.ClusterState.getCollectionOrNull(ClusterState.java:131)
at 
org.apache.solr.common.cloud.ClusterState.getCollection(ClusterState.java:117)
at 
org.apache.solr.cloud.SolrCloudTestCase.getCollectionState(SolrCloudTestCase.java:258)
at 
org.apache.solr.cloud.CollectionsAPISolrJTest.testCreateCollectionWithPropertyParam(CollectionsAPISolrJTest.java:326)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)

[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630999#comment-16630999
 ] 

Jan Høydahl commented on SOLR-12798:


If by "metadata" you mean the {{<field>=value}} http parameters that 
the ExtractingRequestHandler expects, then why would you send those on a normal 
update request containing a SolrInputDocument with all fields embedded?

I.e. instead of this (which does not even make sense, since the JSON update 
handler does not support the literal param)
{code:java}
curl -XPOST 
'http://localhost:8983/solr/foo/update?literal.id=1&literal.author=George&literal.title=Hello'{code}
you post all metadata as fields in the body:
{code:java}
curl -XPOST http://localhost:8983/solr/foo/update -H "Content-type: 
application/json" -d '[{"id":"1", "author":"George", "title":"Hello"}]'{code}







[jira] [Commented] (SOLR-12782) UninvertingReader can be avoided if there are no fields to uninvert

2018-09-27 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630982#comment-16630982
 ] 

David Smiley commented on SOLR-12782:
-

The latest patch simply adds a test to 
{{org.apache.solr.uninverting.TestUninvertingReader#testFieldInfos}} that we 
re-use the same FieldInfo.

I plan to commit soon as I think it's ready.

> UninvertingReader can be avoided if there are no fields to uninvert
> ---
>
> Key: SOLR-12782
> URL: https://issues.apache.org/jira/browse/SOLR-12782
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: SOLR-12782.patch, SOLR-12782.patch, SOLR-12782.patch, 
> SOLR-12782.patch
>
>
> Solr uses UninvertingReader to expose DocValues on fields that don't have 
> them, but do have indexed fields that can be uninverted via the FieldCache. 
> It has an internal constructor that takes the input LeafReader and a mapping 
> of field name to UninvertingReader.Type. It builds a new FieldInfos whose 
> fields reflect the DocValues. There are two things I'd like to improve here:
>  # make this constructor private and instead insist you use a new wrap() 
> method that has the opportunity to return the input if there is nothing to 
> do. Effectively the logic today would move into this wrap method, and the 
> current constructor would be dead simple, and would take the FieldInfos.
>  # Do _not_ create a new {{FieldInfo}} object if the existing field is 
> suitable (its DocValuesType can stay the same).  The savings here can really 
> add up on machines with many indexes & segments.  This is in fact what 
> motivated the patch.
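
To make point 1 concrete, a hedged sketch of the proposed wrap(): the method 
name comes from the description above, but the exact signature and the use of 
the pre-patch public constructor are assumptions:
{code:java}
import java.util.Map;

import org.apache.lucene.index.LeafReader;
import org.apache.solr.uninverting.UninvertingReader;

public class WrapSketch {
  /** Return the input untouched when there is nothing to uninvert. */
  public static LeafReader wrap(LeafReader in, Map<String, UninvertingReader.Type> mapping) {
    if (mapping == null || mapping.isEmpty()) {
      return in; // nothing to do: skip the wrapper and the rebuilt FieldInfos
    }
    return new UninvertingReader(in, mapping); // otherwise build the uninverting view
  }
}
{code}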






[jira] [Updated] (SOLR-12782) UninvertingReader can be avoided if there are no fields to uninvert

2018-09-27 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-12782:

Attachment: SOLR-12782.patch







[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22934 - Unstable!

2018-09-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22934/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:33867/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:45705/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:33867/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:45705/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([F54E23191FA6F233:5F83F0EBA87527E3]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630978#comment-16630978
 ] 

Jan Høydahl commented on SOLR-12814:


Think I found the place where the huge GET request is created:
{code:java}
First MetricsHistoryHandler wants to collect global metrics:

collectGlobalMetrics:481, MetricsHistoryHandler ->
getReplicaInfo:169, SolrClientNodeStateProvider ->
fetchReplicaMetrics:186, SolrClientNodeStateProvider ->
fetchReplicaMetrics:195, SolrClientNodeStateProvider

  params.add("key", metricsKeyVsTag.keySet().toArray(new String[0])); <--- This 
keyset is huge and overruns the limit
...
  SimpleSolrResponse rsp = ctx.invoke(solrNode, CommonParams.METRICS_PATH, 
params); <--- Invokes the request
...
  GenericSolrRequest request = new GenericSolrRequest(SolrRequest.METHOD.GET, 
path, params); <--- Which uses GET{code}
[~ab] and [~noble.paul], you may know this part of the code. Below is a 
screenshot from a debug session where you can see the 
{{SolrClientNodeStateProvider}} class trying to invoke a huge request with 150 
metric keys:

!screenshot-debug.png|width=900!
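
One possible direction, sketched under the assumption that per-batch responses 
can be merged afterwards: chunk the key set so that no single GET overruns the 
8192-byte limit (the batch size here is arbitrary):
{code:java}
import java.util.ArrayList;
import java.util.List;

public class KeyBatcher {
  /** Split a huge metrics key list into URL-safe batches. */
  static List<List<String>> batches(List<String> keys, int batchSize) {
    List<List<String>> out = new ArrayList<>();
    for (int i = 0; i < keys.size(); i += batchSize) {
      out.add(keys.subList(i, Math.min(i + batchSize, keys.size())));
    }
    return out; // issue one /admin/metrics request per batch, then merge
  }

  public static void main(String[] args) {
    List<String> keys = new ArrayList<>();
    for (int i = 0; i < 150; i++) {
      keys.add("solr.core.collection" + i + ":QUERY./select.requests");
    }
    System.out.println(batches(keys, 50).size()); // 3 requests instead of 1 huge one
  }
}
{code}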

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt, screencapture-cloud-graph.png, 
> screencapture-nodes-actual-IP.png, screenshot-debug.png
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I raise the header values from 8192 to 65536, the commands work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  






[jira] [Updated] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-12814:
---
Attachment: screenshot-debug.png

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt, screencapture-cloud-graph.png, 
> screencapture-nodes-actual-IP.png, screenshot-debug.png
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  
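
One way to sidestep the 8 KB request-line limit without editing jetty.xml would be
to split the key list across several smaller requests and merge the results
client-side. A minimal SolrJ sketch of the batching idea asked about above (the
batch size of 20 and the base URL are assumptions):

{code:java}
// Sketch: fetch /admin/metrics keys in batches so each GET request line
// stays under Jetty's default 8192-byte limit. Batch size is arbitrary.
import java.util.Arrays;
import java.util.List;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class BatchedMetricsFetch {
  public static void main(String[] args) throws Exception {
    // In the UI's case this list grows with the number of nodes/collections.
    List<String> keys = Arrays.asList(
        "solr.jvm:os.processCpuLoad",
        "solr.node:CONTAINER.fs.coreRoot.usableSpace",
        "solr.jvm:os.systemLoadAverage",
        "solr.jvm:memory.heap.used");
    int batchSize = 20; // small enough to keep each URI short
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      for (int i = 0; i < keys.size(); i += batchSize) {
        ModifiableSolrParams params = new ModifiableSolrParams();
        for (String key : keys.subList(i, Math.min(i + batchSize, keys.size()))) {
          params.add("key", key);
        }
        GenericSolrRequest req =
            new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/metrics", params);
        NamedList<Object> batch = client.request(req);
        System.out.println(batch); // merge the batches as needed
      }
    }
  }
}
{code}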



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [elevated] document transformer doesn't work with 7.4 version

2018-09-27 Thread David Smiley
Please submit a bug report.  Generally, Solr ought not to throw an NPE.

On Thu, Sep 27, 2018 at 10:13 AM Georgy Khotyan  wrote:

> Hello. I've found the problem:
>
> Request with fl=[elevated] returns a NullPointerException when Solr 7.4/7.5
> is used. It works with all older versions.
>
> Example:
> http://localhost:8983/solr/my-core/select?q=*:*&enableElevation=true&elevateIds=1,2,3&forceElevation=true&fl=[elevated]
>
> Is this a bug in 7.4?
>
> Exception:
>
> { "error":{ "trace":"java.lang.NullPointerException\n\tat
> org.apache.solr.response.transform.BaseEditorialTransformer.getKey(BaseEditorialTransformer.java:72)\n\tat
> org.apache.solr.response.transform.BaseEditorialTransformer.transform(BaseEditorialTransformer.java:52)\n\tat
> org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:120)\n\tat
> org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:57)\n\tat
> org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)\n\tat
> org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)\n\tat
> org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)\n\tat
> org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)\n\tat
> org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)\n\tat
> org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)\n\tat
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)\n\tat
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:806)\n\tat
> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:535)\n\tat
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)\n\tat
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)\n\tat
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)\n\tat
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
> org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat
> org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)\n\tat
> org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)\n\tat
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)\n\tat
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)\n\tat
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)\n\tat
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)\n\tat
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)\n\tat
> java.lang.Thread.run(Thread.java:748)\n", "code":500}}
>
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[GitHub] lucene-solr issue #436: SOLR-12652: Remove SolrMetricManager.overridableRegi...

2018-09-27 Thread petersomogyi
Github user petersomogyi commented on the issue:

https://github.com/apache/lucene-solr/pull/436
  
SOLR-12652 was committed to master. Thanks for reviewing!


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #436: SOLR-12652: Remove SolrMetricManager.overrida...

2018-09-27 Thread petersomogyi
Github user petersomogyi closed the pull request at:

https://github.com/apache/lucene-solr/pull/436


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-5163) edismax should throw exception when qf refers to nonexistent field

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-5163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630966#comment-16630966
 ] 

ASF subversion and git services commented on SOLR-5163:
---

Commit 9481c1f623b77214a2a14ad18efc59fb406ed765 in lucene-solr's branch 
refs/heads/master from [~Charles Sanders]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9481c1f ]

SOLR-5163: edismax now throws an exception when qf refers to a nonexistent field


> edismax should throw exception when qf refers to nonexistent field
> --
>
> Key: SOLR-5163
> URL: https://issues.apache.org/jira/browse/SOLR-5163
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers, search
>Affects Versions: 4.10.4
>Reporter: Steven Bower
>Assignee: David Smiley
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-5163.patch, SOLR-5163.patch, SOLR-5163.patch
>
>
> query:
> q=foo AND bar
> qf=field1
> qf=field2
> defType=edismax
> Where field1 exists and field2 doesn't,
> edismax will treat the AND as a term rather than an operator.
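
For illustration, a minimal SolrJ version of the failing setup (the core name and
the nonexistent field name here are made up); with the commit above, the second
qf now produces an error response instead of the query being silently mis-parsed:

{code:java}
// Sketch: an edismax query whose qf lists a field missing from the schema.
// Before SOLR-5163 this silently parsed "AND" as a term; now it errors out.
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class EdismaxMissingQfField {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/my-core").build()) {
      SolrQuery q = new SolrQuery("foo AND bar");
      q.set("defType", "edismax");
      q.add("qf", "field1");        // exists in the schema
      q.add("qf", "no_such_field"); // does not exist
      client.query(q); // with the fix, throws instead of degrading silently
    }
  }
}
{code}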



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1135 - Failure

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1135/

No tests ran.

Build Log:
[...truncated 23269 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2430 links (1982 relative) to 3172 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

[...truncated: resolve / ivy-availability-check / ivy-configure repeated for the remaining modules...]


[jira] [Assigned] (SOLR-12652) SolrMetricManager.overridableRegistryName should be removed; it doesn't work

2018-09-27 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-12652:
---

   Resolution: Fixed
 Assignee: David Smiley
Fix Version/s: master (8.0)

Thanks Peter!

> SolrMetricManager.overridableRegistryName should be removed; it doesn't work
> 
>
> Key: SOLR-12652
> URL: https://issues.apache.org/jira/browse/SOLR-12652
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.1
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12652.patch, SOLR-12652.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{SolrMetricManager.overridableRegistryName()}} method is a great idea 
> but unfortunately in practice I've found it doesn't really work; it seems 
> fundamentally flawed.  +I wish it could work+.  The main issue, I think, is 
> that the callers of SMM.registerGauge/registerMetric assume they can place a 
> gauge/metric and have it be the only one there (force==true).  But it won't 
> be if it's shared.  
> Another problem is in at least one of the reporters -- 
> {{JmxMetricsReporter.JmxListener#registerMBean}} can get into a race condition 
> when removing an already-registered MBean: in the process of removing it, 
> it may already have been removed concurrently by some other core working on 
> the same name.  This results in {{javax.management.InstanceNotFoundException}} 
> logged as a warning; nothing serious.  But I suspect conceptually there is a 
> problem, since which MBean should "win"?  Shrug.
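
A standalone sketch of the unregister/re-register race described above, using a
plain platform MBeanServer rather than the actual JmxMetricsReporter code (the
ObjectName and the MBean are made up):

{code:java}
// Sketch: two "cores" race to replace the same MBean name. The loser's
// unregisterMBean can throw InstanceNotFoundException, as described above.
import java.lang.management.ManagementFactory;
import javax.management.JMException;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanRace {
  public interface DummyMBean { int getValue(); }
  public static class Dummy implements DummyMBean {
    public int getValue() { return 42; }
  }

  public static void main(String[] args) throws Exception {
    MBeanServer server = ManagementFactory.getPlatformMBeanServer();
    ObjectName name = new ObjectName("solr:type=example");
    Runnable replace = () -> {
      try {
        if (server.isRegistered(name)) {
          server.unregisterMBean(name);          // may lose the race
        }
        server.registerMBean(new Dummy(), name); // or collide here instead
      } catch (JMException e) {
        System.out.println(Thread.currentThread().getName() + ": " + e);
      }
    };
    Thread t1 = new Thread(replace, "core1");
    Thread t2 = new Thread(replace, "core2");
    t1.start(); t2.start();
    t1.join(); t2.join();
  }
}
{code}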



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12652) SolrMetricManager.overridableRegistryName should be removed; it doesn't work

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630958#comment-16630958
 ] 

ASF subversion and git services commented on SOLR-12652:


Commit 044bc2a48522cb9d1e112aa3be4f2d7e6c2ed498 in lucene-solr's branch 
refs/heads/master from [~psomogyi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=044bc2a ]

SOLR-12652: Remove SolrMetricManager.overridableRegistryName()


> SolrMetricManager.overridableRegistryName should be removed; it doesn't work
> 
>
> Key: SOLR-12652
> URL: https://issues.apache.org/jira/browse/SOLR-12652
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Affects Versions: 7.1
>Reporter: David Smiley
>Priority: Minor
> Fix For: master (8.0)
>
> Attachments: SOLR-12652.patch, SOLR-12652.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The {{SolrMetricManager.overridableRegistryName()}} method is a great idea 
> but unfortunately in practice I've found it doesn't really work; it seems 
> fundamentally flawed.  +I wish it could work+.  The main issue, I think, is 
> that the callers of SMM.registerGauge/registerMetric assume they can place a 
> gauge/metric and have it be the only one there (force==true).  But it won't 
> be if it's shared.  
> Another problem is in at least one of the reporters -- 
> {{JmxMetricsReporter.JmxListener#registerMBean}} can get into a race condition 
> when removing an already-registered MBean: in the process of removing it, 
> it may already have been removed concurrently by some other core working on 
> the same name.  This results in {{javax.management.InstanceNotFoundException}} 
> logged as a warning; nothing serious.  But I suspect conceptually there is a 
> problem, since which MBean should "win"?  Shrug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630953#comment-16630953
 ] 

Shawn Heisey commented on SOLR-12814:
-

The developer console is showing all the requests made to create the page as 
GET requests, none as POST.  I stopped and restarted the example ... and it is 
still working.  How's that for frustrating?

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt, screencapture-cloud-graph.png, 
> screencapture-nodes-actual-IP.png
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630945#comment-16630945
 ] 

Shawn Heisey commented on SOLR-12814:
-

OK, now I'm REALLY confused.

After accessing the URL with the IP address, I tried the localhost URL again, 
and it worked.  And then I added another 50 collections, and it's STILL working.

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt, screencapture-cloud-graph.png, 
> screencapture-nodes-actual-IP.png
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630938#comment-16630938
 ] 

Shawn Heisey commented on SOLR-12814:
-

Now here's something interesting.

As I mentioned, I used the cloud example, so everything's on localhost.  
Windows 10 Professional, 64-bit.

If I use "localhost" to access the UI, the Nodes tab doesn't work.  But if I 
use the IP address of my machine, then it DOES work!  That's really strange.

 !screencapture-nodes-actual-IP.png!

Here's something showing the collections I created:

 !screencapture-cloud-graph.png! 

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt, screencapture-cloud-graph.png, 
> screencapture-nodes-actual-IP.png
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Windows (64bit/jdk-12-ea+12) - Build # 806 - Still Unstable!

2018-09-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/806/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseG1GC

12 tests failed.
FAILED:  
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testWrapperModelPersistence

Error Message:
Software caused connection abort: recv failed

Stack Trace:
javax.net.ssl.SSLProtocolException: Software caused connection abort: recv 
failed
at 
__randomizedtesting.SeedInfo.seed([4F6E86DB9741A786:3A04F50528BDDEC3]:0)
at java.base/sun.security.ssl.Alert.createSSLException(Alert.java:126)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:321)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:264)
at 
java.base/sun.security.ssl.TransportContext.fatal(TransportContext.java:259)
at 
java.base/sun.security.ssl.SSLSocketImpl.handleException(SSLSocketImpl.java:1314)
at 
java.base/sun.security.ssl.SSLSocketImpl$AppInputStream.read(SSLSocketImpl.java:839)
at 
org.apache.http.impl.io.SessionInputBufferImpl.streamRead(SessionInputBufferImpl.java:137)
at 
org.apache.http.impl.io.SessionInputBufferImpl.fillBuffer(SessionInputBufferImpl.java:153)
at 
org.apache.http.impl.io.SessionInputBufferImpl.readLine(SessionInputBufferImpl.java:282)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:138)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:56)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:259)
at 
org.apache.http.impl.DefaultBHttpClientConnection.receiveResponseHeader(DefaultBHttpClientConnection.java:163)
at 
org.apache.http.impl.conn.CPoolProxy.receiveResponseHeader(CPoolProxy.java:165)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:273)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:125)
at 
org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at 
org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
at 
org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:111)
at 
org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at 
org.apache.solr.util.RestTestHarness.getResponse(RestTestHarness.java:215)
at org.apache.solr.util.RestTestHarness.query(RestTestHarness.java:107)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:226)
at org.apache.solr.util.RestTestBase.assertJQ(RestTestBase.java:192)
at 
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.doWrapperModelPersistenceChecks(TestModelManagerPersistence.java:202)
at 
org.apache.solr.ltr.store.rest.TestModelManagerPersistence.testWrapperModelPersistence(TestModelManagerPersistence.java:255)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 

[jira] [Updated] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12814:

Attachment: screencapture-cloud-graph.png

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt, screencapture-cloud-graph.png, 
> screencapture-nodes-actual-IP.png
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread Shawn Heisey (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shawn Heisey updated SOLR-12814:

Attachment: screencapture-nodes-actual-IP.png

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt, screencapture-nodes-actual-IP.png
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.

2018-09-27 Thread Ryan Rockenbaugh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630869#comment-16630869
 ] 

Ryan Rockenbaugh commented on SOLR-11556:
-

Thanks Varun!

> Backup/Restore with multiple BackupRepository objects defined results in the 
> wrong repo being used.
> ---
>
> Key: SOLR-11556
> URL: https://issues.apache.org/jira/browse/SOLR-11556
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Major
> Attachments: SOLR-11556.patch
>
>
> I defined two repos for backup/restore, one local and one remote on GCS, e.g.
> {code}
> <backup>
>   <repository name="hdfs" 
> class="org.apache.solr.core.backup.repository.HdfsBackupRepository" 
> default="false">
>     ...
>   </repository>
>   <repository name="local" 
> class="org.apache.solr.core.backup.repository.LocalFileSystemRepository" 
> default="false">
>     <str name="location">/tmp/solr-backups</str>
>   </repository>
> </backup>
> {code}
> Since the CollectionHandler does not pass the "repository" param along, once 
> the BackupCmd gets the ZkNodeProps, it selects the wrong repo! 
> The error I'm seeing is:
> {code}
> 2017-10-26 17:07:27.326 ERROR 
> (OverseerThreadFactory-19-thread-1-processing-n:host:8983_solr) [   ] 
> o.a.s.c.OverseerCollectionMessageHandler Collection: product operation: 
> backup failed:java.nio.file.FileSystemNotFoundException: Provider "gs" not 
> installed
> at java.nio.file.Paths.get(Paths.java:147)
> at 
> org.apache.solr.core.backup.repository.LocalFileSystemRepository.resolve(LocalFileSystemRepository.java:82)
> at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:99)
> at 
> org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:224)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Notice the Local backup repo is being selected in the BackupCmd even though I 
> passed repository=hdfs in my backup command, e.g.
> {code}
> curl 
> "http://localhost:8983/solr/admin/collections?action=BACKUP=foo=foo=gs://tjp-solr-test/backups=hdfs;
> {code} 
> I think the fix here is to include the repository param, see patch. I'll fix 
> for the next 7.x release and those on 6 can just apply the patch here.
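
For reference, the same backup issued through SolrJ's CollectionAdminRequest,
which carries the repository name explicitly (a sketch reusing the names and
location from the curl above):

{code:java}
// Sketch: a collection backup with an explicit repository, so the patched
// CollectionHandler can pass the "repository" param through to BackupCmd.
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;

public class BackupWithRepository {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      CollectionAdminRequest.Backup backup =
          CollectionAdminRequest.backupCollection("foo", "foo") // collection, backup name
              .setLocation("gs://tjp-solr-test/backups")
              .setRepositoryName("hdfs"); // the param that was being dropped
      backup.process(client);
    }
  }
}
{code}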



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12767) Deprecate min_rf parameter and always include the achieved rf in the response

2018-09-27 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-12767:
-
Attachment: SOLR-12767.patch

> Deprecate min_rf parameter and always include the achieved rf in the response
> -
>
> Key: SOLR-12767
> URL: https://issues.apache.org/jira/browse/SOLR-12767
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Assignee: Tomás Fernández Löbbe
>Priority: Major
> Attachments: SOLR-12767.patch, SOLR-12767.patch, SOLR-12767.patch, 
> SOLR-12767.patch, SOLR-12767.patch
>
>
> Currently the {{min_rf}} parameter does two things.
> 1. It tells Solr that the user wants to keep track of the achieved 
> replication factor
> 2. (undocumented AFAICT) It prevents Solr from putting replicas in recovery 
> if the achieved replication factor is lower than the {{min_rf}} specified
> #2 is intentional and I believe the reason behind it is to prevent replicas 
> from going into recovery in cases of short hiccups (since the assumption is 
> that the user is going to retry the request anyway). This is dangerous because 
> if the user doesn’t retry (or retries a number of times but keeps failing) the 
> replicas will be permanently inconsistent. Also, since we now retry updates 
> from leaders to replicas, this behavior has less value, since short temporary 
> blips should be recovered by those retries anyway. 
> I think we should remove the behavior described in #2. #1 is still valuable, 
> but there isn’t much point in making the parameter an integer; the user is 
> just telling Solr that they want the achieved replication factor, so it could 
> be a boolean. But I’m thinking we probably don’t even want to expose the 
> parameter, and just always keep track of it and include it in the response. 
> It’s not costly to calculate, so why keep two separate code paths?
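
A sketch of how a client reads the achieved replication factor today (ZooKeeper
address, collection name and document are made up); under this proposal the "rf"
value would come back even without the min_rf parameter:

{code:java}
// Sketch: send an update with min_rf and read back the achieved rf.
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.UpdateRequest;
import org.apache.solr.client.solrj.response.UpdateResponse;
import org.apache.solr.common.SolrInputDocument;

public class AchievedRf {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      client.setDefaultCollection("mycollection");
      SolrInputDocument doc = new SolrInputDocument();
      doc.addField("id", "1");
      UpdateRequest req = new UpdateRequest();
      req.add(doc);
      req.setParam("min_rf", "2"); // the opt-in this issue deprecates
      UpdateResponse rsp = req.process(client);
      // Minimum achieved replication factor across the shards hit by the update:
      int rf = client.getMinAchievedReplicationFactor("mycollection", rsp.getResponse());
      System.out.println("achieved rf = " + rf);
    }
  }
}
{code}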



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630851#comment-16630851
 ] 

Karl Wright commented on SOLR-12798:


Please examine the following code from master HttpSolrClient.java:

{code}
  if (contentWriter != null) {
    String fullQueryUrl = url + wparams.toQueryString();
    HttpEntityEnclosingRequestBase postOrPut = SolrRequest.METHOD.POST == request.getMethod()
        ? new HttpPost(fullQueryUrl) : new HttpPut(fullQueryUrl);
    postOrPut.addHeader("Content-Type", contentWriter.getContentType());
    postOrPut.setEntity(new BasicHttpEntity() {
      @Override
      public boolean isStreaming() {
        return true;
      }

      @Override
      public void writeTo(OutputStream outstream) throws IOException {
        contentWriter.write(outstream);
      }
    });
    return postOrPut;

  } else if (streams == null || isMultipart) {
{code}

The request is formed by taking all the parameters in wparams (which include 
the metadata fields AFAICT) and putting them into the URL:

{code}
HttpEntityEnclosingRequestBase postOrPut = SolrRequest.METHOD.POST == request.getMethod()
    ? new HttpPost(fullQueryUrl) : new HttpPut(fullQueryUrl);
{code}

There is no other way in the SolrJ request handling code for PUT and POST 
requests to transmit metadata to Solr.  

Indeed, right now, both documents added to an UpdateRequest, as well as 
documents that are specified via ContentStreamUpdateRequest, go by this route.  
We did verify that using the 7.5.0 version of SolrJ and completely removing all 
ManifoldCF custom code led to documents that would exceed the maximum URL 
length if their metadata was long enough.
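
A minimal sketch of that failure mode (core name, handler path and the
literal.big_meta parameter are made up; the size just needs to exceed the
default request-line limit):

{code:java}
// Sketch: metadata set as request params on a ContentStreamUpdateRequest
// travels on the URL with the contentWriter code path, so large metadata
// overflows Jetty's default request-line limit instead of going multipart.
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;
import org.apache.solr.common.util.ContentStreamBase;

public class LargeMetadataRepro {
  public static void main(String[] args) throws Exception {
    ContentStreamUpdateRequest req = new ContentStreamUpdateRequest("/update/extract");
    req.addContentStream(new ContentStreamBase.StringStream("document body"));
    StringBuilder big = new StringBuilder();
    while (big.length() < 10_000) { // well past the 8192-byte limit
      big.append("metadata-value-chunk;");
    }
    req.setParam("literal.big_meta", big.toString()); // lands on the URL, not in the body
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/my-core").build()) {
      client.request(req); // expected to fail with "URI is too large"
    }
  }
}
{code}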


> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which generate 
> HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.

2018-09-27 Thread Varun Thacker (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630835#comment-16630835
 ] 

Varun Thacker commented on SOLR-11556:
--

Hi Ryan,

No promises but I'll plan on tackling this within the next couple of weeks.

> Backup/Restore with multiple BackupRepository objects defined results in the 
> wrong repo being used.
> ---
>
> Key: SOLR-11556
> URL: https://issues.apache.org/jira/browse/SOLR-11556
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Major
> Attachments: SOLR-11556.patch
>
>
> I defined two repos for backup/restore, one local and one remote on GCS, e.g.
> {code}
> <backup>
>   <repository name="hdfs" 
> class="org.apache.solr.core.backup.repository.HdfsBackupRepository" 
> default="false">
>     ...
>   </repository>
>   <repository name="local" 
> class="org.apache.solr.core.backup.repository.LocalFileSystemRepository" 
> default="false">
>     <str name="location">/tmp/solr-backups</str>
>   </repository>
> </backup>
> {code}
> Since the CollectionHandler does not pass the "repository" param along, once 
> the BackupCmd gets the ZkNodeProps, it selects the wrong repo! 
> The error I'm seeing is:
> {code}
> 2017-10-26 17:07:27.326 ERROR 
> (OverseerThreadFactory-19-thread-1-processing-n:host:8983_solr) [   ] 
> o.a.s.c.OverseerCollectionMessageHandler Collection: product operation: 
> backup failed:java.nio.file.FileSystemNotFoundException: Provider "gs" not 
> installed
> at java.nio.file.Paths.get(Paths.java:147)
> at 
> org.apache.solr.core.backup.repository.LocalFileSystemRepository.resolve(LocalFileSystemRepository.java:82)
> at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:99)
> at 
> org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:224)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Notice the Local backup repo is being selected in the BackupCmd even though I 
> passed repository=hdfs in my backup command, e.g.
> {code}
> curl 
> "http://localhost:8983/solr/admin/collections?action=BACKUP=foo=foo=gs://tjp-solr-test/backups=hdfs;
> {code} 
> I think the fix here is to include the repository param, see patch. I'll fix 
> for the next 7.x release and those on 6 can just apply the patch here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11556) Backup/Restore with multiple BackupRepository objects defined results in the wrong repo being used.

2018-09-27 Thread Ryan Rockenbaugh (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630832#comment-16630832
 ] 

Ryan Rockenbaugh commented on SOLR-11556:
-

It looks like this patch has not been integrated into the 7.x branch yet.  Any 
way to submit a request?

> Backup/Restore with multiple BackupRepository objects defined results in the 
> wrong repo being used.
> ---
>
> Key: SOLR-11556
> URL: https://issues.apache.org/jira/browse/SOLR-11556
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 6.3
>Reporter: Timothy Potter
>Assignee: Timothy Potter
>Priority: Major
> Attachments: SOLR-11556.patch
>
>
> I defined two repos for backup/restore, one local and one remote on GCS, e.g.
> {code}
> <backup>
>   <repository name="hdfs" 
> class="org.apache.solr.core.backup.repository.HdfsBackupRepository" 
> default="false">
>     ...
>   </repository>
>   <repository name="local" 
> class="org.apache.solr.core.backup.repository.LocalFileSystemRepository" 
> default="false">
>     <str name="location">/tmp/solr-backups</str>
>   </repository>
> </backup>
> {code}
> Since the CollectionHandler does not pass the "repository" param along, once 
> the BackupCmd gets the ZkNodeProps, it selects the wrong repo! 
> The error I'm seeing is:
> {code}
> 2017-10-26 17:07:27.326 ERROR 
> (OverseerThreadFactory-19-thread-1-processing-n:host:8983_solr) [   ] 
> o.a.s.c.OverseerCollectionMessageHandler Collection: product operation: 
> backup failed:java.nio.file.FileSystemNotFoundException: Provider "gs" not 
> installed
> at java.nio.file.Paths.get(Paths.java:147)
> at 
> org.apache.solr.core.backup.repository.LocalFileSystemRepository.resolve(LocalFileSystemRepository.java:82)
> at org.apache.solr.cloud.BackupCmd.call(BackupCmd.java:99)
> at 
> org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:224)
> at 
> org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:463)
> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:748)
> {code}
> Notice the Local backup repo is being selected in the BackupCmd even though I 
> passed repository=hdfs in my backup command, e.g.
> {code}
> curl 
> "http://localhost:8983/solr/admin/collections?action=BACKUP=foo=foo=gs://tjp-solr-test/backups=hdfs;
> {code} 
> I think the fix here is to include the repository param, see patch. I'll fix 
> for the next 7.x release and those on 6 can just apply the patch here.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread matthew medway (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16630804#comment-16630804
 ] 

matthew medway commented on SOLR-12814:
---

Amazing! Looking forward to your next update, thanks!

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2818 - Still Unstable!

2018-09-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2818/
Java: 64bit/jdk-12-ea+12 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

35 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamDecoratorTest

Error Message:
14 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest: 1) 
Thread[id=1858, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)2) 
Thread[id=1860, 
name=TEST-StreamDecoratorTest.testExecutorStream-seed#[3DFF6DB087B56923]-EventThread,
 state=WAITING, group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)3) 
Thread[id=533, 
name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[3DFF6DB087B56923]-EventThread,
 state=WAITING, group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
app//org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:502)4) 
Thread[id=534, name=zkConnectionManagerCallback-249-thread-1, state=WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)5) 
Thread[id=531, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)6) 
Thread[id=538, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)7) 
Thread[id=1873, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)8) 
Thread[id=532, 
name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[3DFF6DB087B56923]-SendThread(127.0.0.1:36109),
 state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)9) 
Thread[id=1865, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)   10) 
Thread[id=1859, 
name=TEST-StreamDecoratorTest.testExecutorStream-seed#[3DFF6DB087B56923]-SendThread(127.0.0.1:36109),
 state=TIMED_WAITING, 

[jira] [Commented] (SOLR-11644) RealTimeGet not working when router.field is not an uniqeKey

2018-09-27 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630756#comment-16630756
 ] 

Erick Erickson commented on SOLR-11644:
---

Ran across this looking for something else.

[~yo...@apache.org] This seems like a misunderstanding of the interplay between 
<uniqueKey> and routing in general, WDYT? Should this just be closed as invalid?

Routing on one field (with duplicate values) and having a <uniqueKey> be a 
different field, then expecting RTG to find the document, seems "fraught".
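Purely as illustration of the reported scenario, a hedged SolrJ sketch (collection name and field values are taken from the issue description below; the client setup is assumed):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;

class RtgRouteExample {
  // Hypothetical sketch of the reported lookup: the collection is routed on
  // company_id while the uniqueKey is candidate_id, so the _route_ value
  // (a company_id) must be passed explicitly -- and even then the shard
  // computed from it need not hold the document.
  static QueryResponse get(CloudSolrClient client) throws Exception {
    SolrQuery q = new SolrQuery();
    q.setRequestHandler("/get");
    q.set("id", "1044101665");
    q.set("_route_", "77493783");
    return client.query("applicants", q);
  }
}
{code}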

> RealTimeGet not working when router.field is not an uniqeKey
> 
>
> Key: SOLR-11644
> URL: https://issues.apache.org/jira/browse/SOLR-11644
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.2, 7.1
>Reporter: Jarek Mazgaj
>Priority: Major
>
> I have a schema with the following fields:
> {code:java}
> <field name="candidate_id" ... />
> <field name="company_id" ... />
> 
> <uniqueKey>candidate_id</uniqueKey>
> {code}
> A collection was created with following parameters:
> * numShards=4
> * replicationFactor=2
> * *router.field=company_id*
> When I try to do a Real Time Get with no routing information:
> {code:java}
> /get?id=1044101665
> {code}
> I get an empty response.
> When I try to add routing information (search returns document for these 
> values):
> {code:java}
> /get?id=1044101665&_route_=77493783
> {code}
> I get an error:
> {code}
> org.apache.solr.common.SolrException: Can't find shard 'applicants_shard7'
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.sliceToShards(RealTimeGetComponent.java:888)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.createSubRequests(RealTimeGetComponent.java:835)
>   at 
> org.apache.solr.handler.component.RealTimeGetComponent.distributedProcess(RealTimeGetComponent.java:791)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:345)
>   at 
> org.apache.solr.handler.RealTimeGetHandler.handleRequestBody(RealTimeGetHandler.java:46)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2484)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:720)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:526)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> 

[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630746#comment-16630746
 ] 

Jan Høydahl commented on SOLR-12814:


Definitely looks like requests from the nodes page. Will have a look at whether 
we can explicitly do POST for these. The initial request only specifies prefix 
params, but somehow they seem to be translated into a longer list of key params 
somewhere?
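As a minimal sketch of that idea (assuming a SolrJ caller; GenericSolrRequest and the metric keys below are purely illustrative, not the actual fix):

{code:java}
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

class MetricsViaPost {
  // Minimal sketch: POST the metrics request so the (potentially long) list
  // of key params travels in the request body rather than in the URI.
  static NamedList<Object> fetch(SolrClient client) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.add("key", "solr.jvm:os.processCpuLoad");
    params.add("key", "solr.jvm:memory.heap.used");
    GenericSolrRequest req =
        new GenericSolrRequest(SolrRequest.METHOD.POST, "/admin/metrics", params);
    return client.request(req);
  }
}
{code}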

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Skipping BadApple stuff this week

2018-09-27 Thread Erick Erickson
Managed to drop the ball this week. Will try again next week.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12709) Simulate a 1 bln docs scaling-up scenario

2018-09-27 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630717#comment-16630717
 ] 

ASF subversion and git services commented on SOLR-12709:


Commit 2369c8963412773592098475bdd8af1da81e3ac5 in lucene-solr's branch 
refs/heads/master from Andrzej Bialecki
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2369c89 ]

SOLR-12709: Add TestSimExtremeIndexing for testing simulated large indexing 
jobs.
Several important improvements to the simulator.


> Simulate a 1 bln docs scaling-up scenario
> -
>
> Key: SOLR-12709
> URL: https://issues.apache.org/jira/browse/SOLR-12709
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12807) out of memory error due to a lot of zk watchers in solr cloud

2018-09-27 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12807?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630684#comment-16630684
 ] 

Erick Erickson commented on SOLR-12807:
---

I should have stated that I don't know that SOLR-10420 is your problem. But if 
you could test one of the fixed versions and verify and report back (and 
perhaps close this ticket if that Jira does fix this issue) that'd be helpful.

> out of memory error due to a lot of zk watchers in solr cloud 
> --
>
> Key: SOLR-12807
> URL: https://issues.apache.org/jira/browse/SOLR-12807
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 6.1
>Reporter: Mine Orange
>Priority: Major
>
> Analyzing the dump file, we found a lot of watchers in the childWatches of 
> ZKWatchManager, nearly 1.8 GB. The znode of the childWatches is 
> /overseer/collection-queue-work. We confirmed that this is not caused by 
> frequent use of the collection API, and the network is normal. 
> The instance is the overseer leader of a Solr cluster and has not been 
> restarted for more than a year; we suspect that the watchers grow over time.
> Our Solr version is 6.1 and our ZooKeeper version is 3.4.9.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10199) Solr's Kerberos functionality does not work in Java9 due to dependency on hadoop's AuthenticationFilter which attempt access to JVM protected classes

2018-09-27 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630673#comment-16630673
 ] 

Erick Erickson commented on SOLR-10199:
---

What version of the JDK we upgrade to and when is an open question.

> Solr's Kerberos functionality does not work in Java9 due to dependency on 
> hadoop's AuthenticationFilter which attempt access to JVM protected classes
> -
>
> Key: SOLR-10199
> URL: https://issues.apache.org/jira/browse/SOLR-10199
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Priority: Major
>  Labels: Java9
>
> (discovered this while working on test improvements for SOLR-8052)
> Our Kerberos based authn/authz features are all built on top of Hadoop's 
> {{AuthenticationFilter}} which in turn uses Hadoop's {{KerberosUtil}} -- but 
> this does not work on Java9/jigsaw JVMs because that class in turn attempts 
> to access {{sun.security.jgss.GSSUtil}} which is not exported by {{module 
> java.security.jgss}}
> This means that Solr users who depend on Kerberos will not be able to upgrade 
> to Java9, even if they do not use any Hadoop specific features of Solr.
> 
> Example log messages...
> {noformat}
>[junit4]   2> 6833 WARN  (qtp442059499-30) [] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: 
> java.lang.IllegalAccessException: class 
> org.apache.hadoop.security.authentication.util.KerberosUtil cannot access 
> class sun.security.jgss.GSSUtil (in module java.security.jgss) because module 
> java.security.jgss does not export sun.security.jgss to unnamed module 
> @4b38fe8b
>[junit4]   2> 6841 WARN  
> (TEST-TestSolrCloudWithKerberosAlt.testBasics-seed#[95A583AF82D1EBBE]) [] 
> o.a.h.c.p.ResponseProcessCookies Invalid cookie header: "Set-Cookie: 
> hadoop.auth=; Path=/; Domain=127.0.0.1; Expires=Ara, 01-Sa-1970 00:00:00 GMT; 
> HttpOnly". Invalid 'expires' attribute: Ara, 01-Sa-1970 00:00:00 GMT
> {noformat}
> (NOTE: HADOOP-14115 is cause of malformed cookie expiration)
> ultimately the client gets a 403 error (as seen in a testcase with patch from 
> SOLR-8052 applied and java9 assume commented out)...
> {noformat}
>[junit4] ERROR   7.10s | TestSolrCloudWithKerberosAlt.testBasics <<<
>[junit4]> Throwable #1: 
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://127.0.0.1:34687/solr: Expected mime type 
> application/octet-stream but got text/html. 
>[junit4]> <html>
>[junit4]> <head>
>[junit4]> <meta http-equiv="Content-Type" 
> content="text/html;charset=ISO-8859-1"/>
>[junit4]> <title>Error 403 </title>
>[junit4]> </head>
>[junit4]> <body>
>[junit4]> <h2>HTTP ERROR: 403</h2>
>[junit4]> <p>Problem accessing /solr/admin/collections. Reason:
>[junit4]> <pre>java.lang.IllegalAccessException: class 
> org.apache.hadoop.security.authentication.util.KerberosUtil cannot access 
> class sun.security.jgss.GSSUtil (in module java.security.jgss) because module 
> java.security.jgss does not export sun.security.jgss to unnamed module 
> @4b38fe8b</pre></p>
>[junit4]> <hr/><a href="http://eclipse.org/jetty">Powered by Jetty:// 
> 9.3.14.v20161028</a>
>[junit4]> </body>
>[junit4]> </html>
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12759) Disable ExtractingRequestHandlerTest on JDK 11

2018-09-27 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630674#comment-16630674
 ] 

Erick Erickson commented on SOLR-12759:
---

What version of the JDK we upgrade to and when is an open question.

> Disable ExtractingRequestHandlerTest on JDK 11
> --
>
> Key: SOLR-12759
> URL: https://issues.apache.org/jira/browse/SOLR-12759
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: contrib - Solr Cell (Tika extraction)
> Environment: JDK 11 and Tika 1.x
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 7.6
>
>
> ExtractingRequestHandlerTest has failed on a JDK 11 RC due to two conspiring 
> problems: (A) Tika 1.x sometimes calls Date.toString() when extracting 
> metadata (unreleased 2.x will fix this), (B) JDK 11 RC has a bug in some 
> locales like Arabic in which a Date.toString() will have a timezone offset 
> using its locale's characters for the digits instead of using EN_US.  
> I'll add an "assume" check so we don't see failures about this.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread matthew medway (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630665#comment-16630665
 ] 

matthew medway edited comment on SOLR-12814 at 9/27/18 4:05 PM:


Hrmm. The nodes page is still working this time, maybe it's because I had the 
bigger headers set earlier today. I am now getting the error about the URI being 
too large again:

cat /var/solr/logs/solr.log | grep -i WARN -B 10 -A 10
{code:java}
--
2018-09-27 15:58:49.887 INFO (qtp534906248-19) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/info/system params={wt=json&_=1538063896110} status=0 
QTime=7
2018-09-27 15:58:49.902 INFO (qtp534906248-19) [ ] o.a.s.h.a.CollectionsHandler 
Invoked Collection Action :list with params action=LIST&wt=json&_=1538063896110 
and sendToOCPQueue=true
2018-09-27 15:58:49.902 INFO (qtp534906248-19) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/collections 
params={action=LIST&wt=json&_=1538063896110} status=0 QTime=0
2018-09-27 15:58:49.927 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/collections 
params={action=CLUSTERSTATUS&wt=json&_=1538063896110} status=0 QTime=41
2018-09-27 15:58:49.977 INFO (qtp534906248-21) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/info/system 
params={wt=javabin&version=2&_=1538063896110} status=0 QTime=31
2018-09-27 15:58:49.978 INFO (qtp534906248-14) [ ] o.a.s.h.a.AdminHandlersProxy 
Fetched response from 3 nodes: [172.19.1.223:8983_solr, 172.19.7.118:8983_solr, 
172.19.0.107:8983_solr]
2018-09-27 15:58:49.978 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/info/system 
params={nodes=172.19.0.107:8983_solr,172.19.1.223:8983_solr,172.19.7.118:8983_solr&wt=json&_=1538063896110}
 status=0 QTime=36
2018-09-27 15:58:49.996 INFO (qtp534906248-18) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={prefix=CONTAINER.fs,org.eclipse.jetty.server.handler.DefaultHandler.get-requests,INDEX.sizeInBytes,SEARCHER.searcher.numDocs,SEARCHER.searcher.deletedDocs,SEARCHER.searcher.warmupTime&wt=javabin&version=2&_=1538063896206}
 status=0 QTime=38
2018-09-27 15:58:49.998 INFO (qtp534906248-19) [ ] o.a.s.h.a.AdminHandlersProxy 
Fetched response from 3 nodes: [172.19.1.223:8983_solr, 172.19.7.118:8983_solr, 
172.19.0.107:8983_solr]
2018-09-27 15:58:49.998 INFO (qtp534906248-19) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={nodes=172.19.0.107:8983_solr,172.19.1.223:8983_solr,172.19.7.118:8983_solr&prefix=CONTAINER.fs,org.eclipse.jetty.server.handler.DefaultHandler.get-requests,INDEX.sizeInBytes,SEARCHER.searcher.numDocs,SEARCHER.searcher.deletedDocs,SEARCHER.searcher.warmupTime&wt=json&_=1538063896206}
 status=0 QTime=54
2018-09-27 15:59:03.747 WARN (qtp534906248-12) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 15:59:03.753 INFO (qtp534906248-17) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
 status=0 QTime=1
2018-09-27 16:00:02.130 INFO (qtp534906248-12) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics params={} status=0 QTime=311
2018-09-27 16:00:02.206 INFO (qtp534906248-21) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics params={} status=0 QTime=388
2018-09-27 16:00:02.230 INFO (qtp534906248-19) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics params={} status=0 QTime=404
2018-09-27 16:00:03.856 WARN (qtp534906248-12) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 16:00:03.859 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 16:00:19.599 INFO (qtp534906248-12) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/cores params={indexInfo=false&wt=json&_=1538063985822} 
status=0 QTime=0
2018-09-27 16:00:19.601 INFO (qtp534906248-17) [ ] o.a.s.h.a.CollectionsHandler 
Invoked Collection Action :clusterstatus with params 
action=CLUSTERSTATUS&wt=json&_=1538063985823 and sendToOCPQueue=true
2018-09-27 16:00:19.612 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/info/system params={wt=json&_=1538063985823} status=0 
QTime=12
2018-09-27 16:00:19.624 INFO (qtp534906248-18) [ ] o.a.s.h.a.CollectionsHandler 
Invoked Collection Action :list with params action=LIST&wt=json&_=1538063985823 
and sendToOCPQueue=true
2018-09-27 16:00:19.625 INFO (qtp534906248-18) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/collections 
params={action=LIST&wt=json&_=1538063985823} status=0 QTime=0
2018-09-27 16:00:19.659 INFO (qtp534906248-17) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null 

[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread matthew medway (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630665#comment-16630665
 ] 

matthew medway commented on SOLR-12814:
---

Hrmm. The nodes page is still working this time, maybe it's because I had the 
bigger headers set earlier today. I am now getting the error about the URI being 
too large again:



{code:java}
--
2018-09-27 15:58:49.887 INFO (qtp534906248-19) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/info/system params={wt=json&_=1538063896110} status=0 
QTime=7
2018-09-27 15:58:49.902 INFO (qtp534906248-19) [ ] o.a.s.h.a.CollectionsHandler 
Invoked Collection Action :list with params action=LIST&wt=json&_=1538063896110 
and sendToOCPQueue=true
2018-09-27 15:58:49.902 INFO (qtp534906248-19) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/collections 
params={action=LIST&wt=json&_=1538063896110} status=0 QTime=0
2018-09-27 15:58:49.927 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/collections 
params={action=CLUSTERSTATUS&wt=json&_=1538063896110} status=0 QTime=41
2018-09-27 15:58:49.977 INFO (qtp534906248-21) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/info/system 
params={wt=javabin&version=2&_=1538063896110} status=0 QTime=31
2018-09-27 15:58:49.978 INFO (qtp534906248-14) [ ] o.a.s.h.a.AdminHandlersProxy 
Fetched response from 3 nodes: [172.19.1.223:8983_solr, 172.19.7.118:8983_solr, 
172.19.0.107:8983_solr]
2018-09-27 15:58:49.978 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/info/system 
params={nodes=172.19.0.107:8983_solr,172.19.1.223:8983_solr,172.19.7.118:8983_solr&wt=json&_=1538063896110}
 status=0 QTime=36
2018-09-27 15:58:49.996 INFO (qtp534906248-18) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={prefix=CONTAINER.fs,org.eclipse.jetty.server.handler.DefaultHandler.get-requests,INDEX.sizeInBytes,SEARCHER.searcher.numDocs,SEARCHER.searcher.deletedDocs,SEARCHER.searcher.warmupTime&wt=javabin&version=2&_=1538063896206}
 status=0 QTime=38
2018-09-27 15:58:49.998 INFO (qtp534906248-19) [ ] o.a.s.h.a.AdminHandlersProxy 
Fetched response from 3 nodes: [172.19.1.223:8983_solr, 172.19.7.118:8983_solr, 
172.19.0.107:8983_solr]
2018-09-27 15:58:49.998 INFO (qtp534906248-19) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={nodes=172.19.0.107:8983_solr,172.19.1.223:8983_solr,172.19.7.118:8983_solr&prefix=CONTAINER.fs,org.eclipse.jetty.server.handler.DefaultHandler.get-requests,INDEX.sizeInBytes,SEARCHER.searcher.numDocs,SEARCHER.searcher.deletedDocs,SEARCHER.searcher.warmupTime&wt=json&_=1538063896206}
 status=0 QTime=54
2018-09-27 15:59:03.747 WARN (qtp534906248-12) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 15:59:03.753 INFO (qtp534906248-17) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
 status=0 QTime=1
2018-09-27 16:00:02.130 INFO (qtp534906248-12) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics params={} status=0 QTime=311
2018-09-27 16:00:02.206 INFO (qtp534906248-21) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics params={} status=0 QTime=388
2018-09-27 16:00:02.230 INFO (qtp534906248-19) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics params={} status=0 QTime=404
2018-09-27 16:00:03.856 WARN (qtp534906248-12) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 16:00:03.859 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 16:00:19.599 INFO (qtp534906248-12) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/cores params={indexInfo=false&wt=json&_=1538063985822} 
status=0 QTime=0
2018-09-27 16:00:19.601 INFO (qtp534906248-17) [ ] o.a.s.h.a.CollectionsHandler 
Invoked Collection Action :clusterstatus with params 
action=CLUSTERSTATUS&wt=json&_=1538063985823 and sendToOCPQueue=true
2018-09-27 16:00:19.612 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/info/system params={wt=json&_=1538063985823} status=0 
QTime=12
2018-09-27 16:00:19.624 INFO (qtp534906248-18) [ ] o.a.s.h.a.CollectionsHandler 
Invoked Collection Action :list with params action=LIST&wt=json&_=1538063985823 
and sendToOCPQueue=true
2018-09-27 16:00:19.625 INFO (qtp534906248-18) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/collections 
params={action=LIST&wt=json&_=1538063985823} status=0 QTime=0
2018-09-27 16:00:19.659 INFO (qtp534906248-17) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/collections 
params={action=CLUSTERSTATUS&wt=json&_=1538063985823} status=0 QTime=58
2018-09-27 16:00:19.702 INFO 

[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630655#comment-16630655
 ] 

Jan Høydahl commented on SOLR-12814:


Hmm, can you detail what stats are wrong on the nodes page? Screenshot? I did a 
quick test with one node and 50 empty collections and all looks normal. Perhaps 
things are different in a distributed cluster?

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread matthew medway (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630626#comment-16630626
 ] 

matthew medway edited comment on SOLR-12814 at 9/27/18 3:42 PM:


Yeah, without doing anything I can run "tail -f /var/solr/logs/solr.log" and 
just watch for the error to happen. 

I should clarify that the metrics API works fine, but the nodes page is not 
showing stats correctly, just like [~elyograg] said.


was (Author: mmedway):
Yeah, without doing anything I can run "tail -f /var/solr/logs/solr.log" and 
just watch for the error to happen. 

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread matthew medway (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630626#comment-16630626
 ] 

matthew medway commented on SOLR-12814:
---

Yeah, without doing anything I can run "tail -f /var/solr/logs/solr.log" and 
just watch for the error to happen. 

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Mikhail Khludnev (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630625#comment-16630625
 ] 

Mikhail Khludnev commented on SOLR-12798:
-

I'm trying to understand what the problem is. Given that the challenge is to 
send a huge file in the body together with a long param, I took the test: 

[https://github.com/apache/lucene-solr/blob/c587410f99375005c680ece5e24a4dfd40d8d3eb/solr/solrj/src/test/org/apache/solr/client/solrj/SolrExampleTests.java#L675]

and added a long param after:

{code:java}
up.setParam(CommonParams.HEADER_ECHO_PARAMS,
    CommonParams.EchoParamStyle.ALL.toString());
{ // added long param
  StringBuilder sb = new StringBuilder();
  for (int i = 0; i < 1000; i++) {
    sb.append((char) ('a' + ((char) (i % 26))));
  }
  String longparam = sb.toString();
  //System.out.println(longparam.length());
  up.setParam("b", longparam);
}
up.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
{code}

 

Then I ran SolrExampleJettyTest and it passed. Could ManifoldCF issue its 
request the same way? 

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because we get "pfountz 
> Should not get here!" errors on the Solr side when we do, which generate 
> HTTP error code 500 responses.  That should not happen either, in my 
> opinion.
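For context, a minimal hedged sketch of the ContentStreamUpdateRequest pattern described above (the file name and params are illustrative, and the surrounding client setup is assumed):

{code:java}
import java.io.File;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.request.AbstractUpdateRequest;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

class SolrCellPostExample {
  // Illustrative only: the kind of ContentStreamUpdateRequest built for
  // Solr Cell posts. Large metadata has to travel in the multipart body,
  // not in the URL.
  static void post(SolrClient client) throws Exception {
    ContentStreamUpdateRequest req =
        new ContentStreamUpdateRequest("/update/extract");
    req.addFile(new File("document.pdf"), "application/pdf");
    req.setParam("literal.id", "doc-1");
    req.setAction(AbstractUpdateRequest.ACTION.COMMIT, true, true);
    client.request(req);
  }
}
{code}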



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630621#comment-16630621
 ] 

Jan Høydahl commented on SOLR-12814:


I wonder if this is the MetricsHistoryHandler that issues that request? If you 
simply sit waiting for a minute without invoking the Cloud->Nodes tab at all, 
you'll get things like this in the log:
{noformat}
2018-09-27 15:32:40.180 DEBUG (MetricsHistoryHandler-12-thread-1) [   ] 
o.a.h.headers http-outgoing-46 >> GET 

[JENKINS] Lucene-Solr-Tests-master - Build # 2825 - Still Unstable

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2825/

4 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:37001/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:43728/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:37001/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:43728/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([825C31FE9678C1A1:2891E20C21AB1471]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Julien Massiera (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630605#comment-16630605
 ] 

Julien Massiera commented on SOLR-12798:


[~janhoy], considering the discussion thread, I don't think that having us send 
you what we do will convince you that we are doing it the proper way. It would 
be more helpful for us if you showed us the SolrJ code that you envision for 
creating a Solr document with some content and some metadata and streaming it 
to Solr via the POST method. 

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because we get "pfountz 
> Should not get here!" errors on the Solr side when we do, which generate 
> HTTP error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 1544 - Unstable

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/1544/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-master/163/consoleText

[repro] Revision: 03c9c04353ce1b5ace33fddd5bd99059e63ed507

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerTest 
-Dtests.method=testTrigger -Dtests.seed=F0F3342719DA2AD9 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=pt 
-Dtests.timezone=Australia/Perth -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=SearchRateTriggerIntegrationTest 
-Dtests.method=testBelowSearchRate -Dtests.seed=F0F3342719DA2AD9 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=nl-BE -Dtests.timezone=America/Scoresbysund -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=MoveReplicaHDFSTest 
-Dtests.method=testFailedMove -Dtests.seed=F0F3342719DA2AD9 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true -Dtests.locale=lv 
-Dtests.timezone=America/Havana -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestSimTriggerIntegration 
-Dtests.method=testNodeAddedTrigger -Dtests.seed=F0F3342719DA2AD9 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.badapples=true 
-Dtests.locale=de-GR -Dtests.timezone=Europe/Monaco -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
c587410f99375005c680ece5e24a4dfd40d8d3eb
[repro] git fetch
[repro] git checkout 03c9c04353ce1b5ace33fddd5bd99059e63ed507

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestSimTriggerIntegration
[repro]   SearchRateTriggerTest
[repro]   SearchRateTriggerIntegrationTest
[repro]   MoveReplicaHDFSTest
[repro] ant compile-test

[...truncated 3424 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.TestSimTriggerIntegration|*.SearchRateTriggerTest|*.SearchRateTriggerIntegrationTest|*.MoveReplicaHDFSTest"
 -Dtests.showOutput=onerror  -Dtests.seed=F0F3342719DA2AD9 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.badapples=true -Dtests.locale=de-GR 
-Dtests.timezone=Europe/Monaco -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 52156 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.SearchRateTriggerIntegrationTest
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.SearchRateTriggerTest
[repro]   0/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestSimTriggerIntegration
[repro]   4/5 failed: org.apache.solr.cloud.MoveReplicaHDFSTest
[repro] git checkout c587410f99375005c680ece5e24a4dfd40d8d3eb

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7036) Faster method for group.facet

2018-09-27 Thread Erick Erickson (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-7036?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630575#comment-16630575
 ] 

Erick Erickson commented on SOLR-7036:
--

Here's how to read these JIRAs:

If the status is "fixed", then the "fix version" is accurate. So in this case 
this was first fixed in 6.4 and 7.0. Every release subsequent to those will 
contain the fix, so 6.4.x, 6.5.x, 6.6.x, 7.0.1, 7.x, etc. all have this fix, 
including 7.2.1.

> Faster method for group.facet
> -
>
> Key: SOLR-7036
> URL: https://issues.apache.org/jira/browse/SOLR-7036
> Project: Solr
>  Issue Type: Improvement
>  Components: faceting
>Affects Versions: 4.10.3
>Reporter: Jim Musil
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 6.4, 7.0
>
> Attachments: SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, 
> SOLR-7036.patch, SOLR-7036.patch, SOLR-7036.patch, SOLR-7036_zipped.zip, 
> jstack-output.txt, performance.txt, source_for_patch.zip
>
>
> This is a patch that speeds up the performance of requests made with 
> group.facet=true. The original code that collects and counts unique facet 
> values for each group does not use the same improved field cache methods that 
> have been added for normal faceting in recent versions.
> Specifically, this approach leverages the UninvertedField class which 
> provides a much faster way to look up docs that contain a term. I've also 
> added a simple grouping map so that when a term is found for a doc, it can 
> quickly look up the group to which it belongs.
> Group faceting was very slow for our data set and when the number of docs or 
> terms was high, the latency spiked to multiple second requests. This solution 
> provides better overall performance -- from an average of 54ms to 32ms. It 
> also dropped our slowest performing queries way down -- from 6012ms to 991ms.
> I also added a few tests.
> I added an additional parameter so that you can choose to use this method or 
> the original. Add group.facet.method=fc to use the improved method or 
> group.facet.method=original which is the default if not specified.






[jira] [Comment Edited] (LUCENE-7398) Nested Span Queries are buggy

2018-09-27 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630529#comment-16630529
 ] 

Michael Gibney edited comment on LUCENE-7398 at 9/27/18 3:01 PM:
-

I have a branch containing a candidate fix for this issue: 
[LUCENE-7398/master|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/master]

It includes support for complete graph-based matching, configurable to include:
 # all valid top-level {{startPosition}} s
 # all valid match lengths (in the {{startPosition - endPosition}} sense)
 # all valid match {{width}} s (in the slop sense)
 # all redundant matches (different {{Term}} s, same {{startPosition}}, 
{{endPosition}}, and {{width}})
 # all possible valid combinations of subclause positions

Option 1 is appropriate for top-level matching and document matching (and is 
complete for that use case); options 2/3 may be used in subclauses to guarantee 
complete matching of parent {{Spans}}; option 4 results in very thorough 
scoring. Option 5 would be an unusual use case; but I think there are some 
applications for full combinatoric matching, and the option was well supported 
by the implementation, so it is included for the sake of completeness.

The candidate implementation models the match graph as a kind of 2-dimensional 
queue that supports random-access seek and arbitrary node removal. A more 
thorough explanation would be unwieldy in a comment, so I wrote [three 
posts|https://michaelgibney.net/lucene/graph/], which respectively:
 # [Provide some 
background|https://michaelgibney.net/2018/09/lucene-graph-queries-1/] on the 
problem associated with LUCENE-7398 (this post is heavily informed by the 
discussion on this issue)
 # [Describe the candidate 
implementation|https://michaelgibney.net/2018/09/lucene-graph-queries-2/] in 
some detail (also includes information on how to configure/test/evaluate)
 # [Anticipate some possible 
consequences/applications|https://michaelgibney.net/2018/09/lucene-graph-queries-3/]
 of new functionality that would be enabled by this (or other equivalent) fix

Some notes:
 # The branch contains (and passes) all tests proposed so far in association 
with this issue (and also quite a few additional tests)
 # The candidate implementation is made more complete and performant by the 
addition of some extra information in the index (e.g., {{positionLength}}). 
This extra information is currently stored using {{Payload}} s, though for 
{{positionLength}} at least, there has been some discussion of integrating it 
more directly in the index (see LUCENE-4312, LUCENE-3843)
 # Some version of this code has been running in production for several months, 
and has given no indication of instability, even running every user phrase 
query (both explicit and {{pf}}) as a graph query.
 # To facilitate evaluation, the fix is integrated in 
[master|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/master], 
[branch_7x|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/branch_7x], 
[branch_7_5|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/branch_7_5],
 and 
[branch_7_4|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/branch_7_4].



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1650 - Unstable

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1650/

2 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[https://127.0.0.1:34681/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:46423/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[https://127.0.0.1:34681/solr/MoveReplicaHDFSTest_failed_coll_true, 
https://127.0.0.1:46423/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([BE088EB3B0980532:14C55D41074BD0E2]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 

[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630552#comment-16630552
 ] 

Shawn Heisey commented on SOLR-12814:
-

I started a 7.5.0 cloud example, created 50 collections, and tried two things:

1) /solr/admin/metrics
2) the "Cloud->Nodes" tab in the UI.

The metrics page worked just fine -- the URL was very short, and that didn't 
result in additional requests with a long URL.  The Nodes tab did not work, and 
caused an error in the log about the URI being too large.

I would think it should be possible for the UI to use POST for the 
requests on the "nodes" tab instead of GET.  This is not within user control; 
it would have to be done in the code.
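
As a sketch of the same idea from the client side (not the UI fix itself), the 
request can already be sent as a POST with SolrJ so the metric keys travel in 
the request body instead of the request line; the keys below are just examples:

{code:java}
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class MetricsPostExample {
  public static void main(String[] args) throws Exception {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.add("key", "solr.jvm:os.processCpuLoad");
    params.add("key", "solr.jvm:memory.heap.used");
    // ...with many collections there can be hundreds of keys; sending them
    // in a POST body keeps the request line under Jetty's 8192-byte limit
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      GenericSolrRequest req =
          new GenericSolrRequest(SolrRequest.METHOD.POST, "/admin/metrics", params);
      System.out.println(req.process(client).getResponse());
    }
  }
}
{code}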


> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  






[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630547#comment-16630547
 ] 

Jan Høydahl commented on SOLR-12798:


{quote}you will note that all parameters and metadata are folded into the URL 
for the ContentWriter transmission mechanism
{quote}
I don't get it. What parameters and metadata are we talking about here, that 
you wish to send to Solr's standard {{/update}} handler? All the document 
fields and metadata would go in the POST body, no? Please give an example of 
this type 2) request. It does not need to be an example with a large request, 
just any request using MCF's Tika component and how things look when 
attempting to POST that content to Solr's {{/update}} endpoint.

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors when we do, which generate HTTP 
> error code 500 responses. That should not happen either, in my 
> opinion.






[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630531#comment-16630531
 ] 

Karl Wright commented on SOLR-12798:


{quote}
This looks to me like a plain Solr document post to /update handler, in 
whatever format you'd like? If you can take advantage of Noble Paul's 
enhancements to stream the content, this can still be a plain document not 
needing multipart, and no need to send data in http params?
{quote}

The streaming part is great.  But if you look at the current master 
implementation of HttpSolrClient, you will note that all parameters and 
metadata are folded into the URL for the ContentWriter transmission mechanism.  
This fails for us because the URL size can easily exceed 8192 bytes.  That is 
why we need the multipart post handling even for 
UpdateRequest/SolrInputDocument requests.
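
For context, a sketch of the kind of request ManifoldCF depends on; with many 
large literal.* values, a multipart POST is the only way to keep them out of 
the URL (the file name, field names, and sizes below are hypothetical):

{code:java}
import java.io.File;
import java.util.Collections;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.ContentStreamUpdateRequest;

public class MultipartPostExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr/docs").build()) {
      ContentStreamUpdateRequest req =
          new ContentStreamUpdateRequest("/update/extract");
      req.addFile(new File("report.pdf"), "application/pdf");
      req.setParam("literal.id", "doc-1");
      // Metadata values like this routinely exceed 4K and cannot ride in the URL:
      req.setParam("literal.allow_token", String.join(",",
          Collections.nCopies(500, "some-long-acl-token")));
      req.process(client);
    }
  }
}
{code}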


> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors when we do, which generate HTTP 
> error code 500 responses. That should not happen either, in my 
> opinion.






[jira] [Commented] (LUCENE-7398) Nested Span Queries are buggy

2018-09-27 Thread Michael Gibney (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630529#comment-16630529
 ] 

Michael Gibney commented on LUCENE-7398:


I have a branch containing a candidate fix for this issue: 
[LUCENE-7398/master|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/master]

It includes support for complete graph-based matching, configurable to include:
 # all valid top-level {{startPosition}} s
 # all valid match lengths (in the {{startPosition - endPosition}} sense)
 # all valid match {{width}} s (in the slop sense)
 # all redundant matches (different {{Term}} s, same {{startPosition}}, 
{{endPosition}}, and {{width}})
 # all possible valid combinations of subclause positions

Option 1 is appropriate for top-level matching and document matching (and is 
complete for that use case); options 2/3 may be used in subclauses to guarantee 
complete matching of parent {{Spans}}; option 4 results in very thorough 
scoring. Option 5 would be an unusual use case; but I think there are some 
applications for full combinatoric matching, and the option was well supported 
by the implementation, so it is included for the sake of completeness.

The candidate implementation models the match graph as a kind of 2-dimensional 
queue that supports random-access seek and arbitrary node removal. A more 
thorough explanation would be unwieldy in a comment, so I wrote [three 
posts|https://michaelgibney.net/lucene/graph/], which respectively:
 # [Provide some 
background|https://michaelgibney.net/2018/09/lucene-graph-queries-1/] on the 
problem associated with LUCENE-7398 (this post is heavily informed by the 
discussion on this issue)
 # [Describe the candidate 
implementation|https://michaelgibney.net/2018/09/lucene-graph-queries-2/] in 
some detail (also includes information on how to configure/test/evaluate)
 # [Anticipate some possible 
consequences/applications|https://michaelgibney.net/2018/09/lucene-graph-queries-3/]
 of new functionality that would be enabled by this (or other equivalent) fix

Some notes:
 # The branch contains (and passes) all tests proposed so far in association 
with this issue (and also quite a few additional tests)
 # The candidate implementation is made more complete and performant by the 
addition of some extra information in the index (e.g., {{positionLength}}). 
This extra information is currently stored using {{Payload}} s, though for 
{{positionLength}} at least, there has been some discussion of integrating it 
more directly in the index (see LUCENE-4312)
 # Some version of this code has been running in production for several months, 
and has given no indication of instability, even running every user phrase 
query (both explicit and {{pf}}) as a graph query.
 # To facilitate evaluation, the fix is integrated in 
[master|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/master], 
[branch_7x|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/branch_7x], 
[branch_7_5|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/branch_7_5],
 and 
[branch_7_4|https://github.com/magibney/lucene-solr/tree/LUCENE-7398/branch_7_4].

> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398-20160924.patch, 
> LUCENE-7398-20160925.patch, LUCENE-7398.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match  "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1, it did however match with Lucene 4.10.4. It 
> probably stopped working with the changes on SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.
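
For reference, the failing query from the description can be reconstructed 
with the standard span API roughly as follows (a sketch; "body" is the field 
from the example above):

{code:java}
import org.apache.lucene.index.Term;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanOrQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

public class NestedSpanExample {
  public static SpanQuery build() {
    // spanNear([body:gene, body:mapping], 0, true)
    SpanQuery geneMapping = new SpanNearQuery(new SpanQuery[] {
        new SpanTermQuery(new Term("body", "gene")),
        new SpanTermQuery(new Term("body", "mapping"))
    }, 0, true);
    // spanOr([spanNear([body:gene, body:mapping], 0, true), body:gene])
    SpanQuery geneOrPhrase = new SpanOrQuery(geneMapping,
        new SpanTermQuery(new Term("body", "gene")));
    // spanNear([body:coordinate, <spanOr>, body:research], 0, true)
    return new SpanNearQuery(new SpanQuery[] {
        new SpanTermQuery(new Term("body", "coordinate")),
        geneOrPhrase,
        new SpanTermQuery(new Term("body", "research"))
    }, 0, true);
  }
}
{code}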






[jira] [Commented] (SOLR-12813) SolrCloud + Basic Authentication + subquery = 401 Exception

2018-09-27 Thread Igor Fedoryn (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630520#comment-16630520
 ] 

Igor Fedoryn commented on SOLR-12813:
-

Thanks :)

> SolrCloud + Basic Authentication + subquery = 401 Exception
> ---
>
> Key: SOLR-12813
> URL: https://issues.apache.org/jira/browse/SOLR-12813
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security, SolrCloud
>Affects Versions: 6.4.1, 7.5
>Reporter: Igor Fedoryn
>Priority: Major
> Attachments: screen1.png, screen2.png
>
>
> Environment: * Solr 6.4.1
>  * Zookeeper 3.4.6
>  * Java 1.8
> Run Zookeeper
> Upload simple configuration wherein the Solr schema has fields for a 
> relationship between parent/child
> Run two Solr instances (2 nodes)
> Create the collection with 1 shard on each Solr node
>  
> Add a parent document to one shard and a child document to another shard.
> The response for 
> /select?q=ChildIdField:VALUE&fl=*,parents:[subquery]&parents.q=\{!term f=id 
> v=$row.ParentIdsField}
> is correct.
>  
> After that, add Basic Authentication with some user for the collection.
> Restart Solr or reload the Solr collection.
> If the simple request /select?q=*:* with authorization on the Solr server 
> succeeds, then run the previous request
> with authorization on the Solr server and you get the exception: "Solr HTTP 
> error: Unauthorized (401)"
>  
> Screens in the attachment.
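
To make the reproduction concrete, the failing request could be issued from 
SolrJ roughly like this (a sketch; the credentials and collection name are 
hypothetical):

{code:java}
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.QueryRequest;

public class SubqueryAuthExample {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      SolrQuery q = new SolrQuery("ChildIdField:VALUE");
      q.set("fl", "*,parents:[subquery]");
      q.set("parents.q", "{!term f=id v=$row.ParentIdsField}");
      QueryRequest req = new QueryRequest(q);
      // Credentials are attached to the top-level request; per this issue the
      // internal request made for the subquery is what gets the 401.
      req.setBasicAuthCredentials("solr", "SolrRocks");
      System.out.println(req.process(client, "collection1"));
    }
  }
}
{code}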






[jira] [Commented] (SOLR-12813) SolrCloud + Basic Authentication + subquery = 401 Exception

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630511#comment-16630511
 ] 

Jan Høydahl commented on SOLR-12813:


The issue is SOLR-12583 and it is linked to this Jira if you look under the 
"Issue Links" section further up :) 

> SolrCloud + Basic Authentication + subquery = 401 Exception
> ---
>
> Key: SOLR-12813
> URL: https://issues.apache.org/jira/browse/SOLR-12813
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security, SolrCloud
>Affects Versions: 6.4.1, 7.5
>Reporter: Igor Fedoryn
>Priority: Major
> Attachments: screen1.png, screen2.png
>
>
> Environment: * Solr 6.4.1
>  * Zookeeper 3.4.6
>  * Java 1.8
> Run Zookeeper
> Upload simple configuration wherein the Solr schema has fields for a 
> relationship between parent/child
> Run two Solr instances (2 nodes)
> Create the collection with 1 shard on each Solr node
>  
> Add a parent document to one shard and a child document to another shard.
> The response for 
> /select?q=ChildIdField:VALUE&fl=*,parents:[subquery]&parents.q=\{!term f=id 
> v=$row.ParentIdsField}
> is correct.
>  
> After that, add Basic Authentication with some user for the collection.
> Restart Solr or reload the Solr collection.
> If the simple request /select?q=*:* with authorization on the Solr server 
> succeeds, then run the previous request
> with authorization on the Solr server and you get the exception: "Solr HTTP 
> error: Unauthorized (401)"
>  
> Screens in the attachment.






[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630508#comment-16630508
 ] 

Jan Høydahl commented on SOLR-12798:


Ok, let's keep the discussion about standard handlers. So MCF is not going to 
stream a huge binary file to Solr, but rather send one Solr document with one 
potentially huge plain-text content field and several other metadata fields. 
This looks to me like a plain Solr document post to the /update handler, in 
whatever format you'd like? If you can take advantage of Noble Paul's 
enhancements to stream the content, this can still be a plain document not 
needing multipart, and no need to send data in http params?

However, if you have a use case where you both need to post some binary blob to 
Solr Cell and also need to pass huge metadata in literal params, then things 
would be different. But I have not seen such a use case yet?

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors when we do, which generate HTTP 
> error code 500 responses. That should not happen either, in my 
> opinion.






Fwd: [elevated] document transformer doesn't work with 7.4 version

2018-09-27 Thread Georgy Khotyan
Hello. I've found a problem:

A request with fl=[elevated] returns a NullPointerException when Solr 7.4/7.5
is used. It works with all older versions.

Example:
http://localhost:8983/solr/my-core/select?q=*:*&enableElevation=true&elevateIds=1,2,3&forceElevation=true&fl=[elevated]

Is this a bug in 7.4?

Exception:

{ "error":{ "trace":"java.lang.NullPointerException\n\tat
org.apache.solr.response.transform.BaseEditorialTransformer.getKey(BaseEditorialTransformer.java:72)\n\tat
org.apache.solr.response.transform.BaseEditorialTransformer.transform(BaseEditorialTransformer.java:52)\n\tat
org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:120)\n\tat
org.apache.solr.response.DocsStreamer.next(DocsStreamer.java:57)\n\tat
org.apache.solr.response.TextResponseWriter.writeDocuments(TextResponseWriter.java:275)\n\tat
org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:161)\n\tat
org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)\n\tat
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)\n\tat
org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)\n\tat
org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)\n\tat
org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)\n\tat
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:806)\n\tat
org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:535)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)\n\tat
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)\n\tat
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)\n\tat
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)\n\tat
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)\n\tat
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)\n\tat
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)\n\tat
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)\n\tat
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)\n\tat
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)\n\tat
org.eclipse.jetty.server.Server.handle(Server.java:534)\n\tat
org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)\n\tat
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)\n\tat
org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)\n\tat
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)\n\tat
org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)\n\tat
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)\n\tat
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)\n\tat
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)\n\tat
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)\n\tat
java.lang.Thread.run(Thread.java:748)\n", "code":500}}


[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630499#comment-16630499
 ] 

Jan Høydahl commented on SOLR-12814:


Looks like you are talking about the metrics HTTP API. Have you tried to simply 
POST your request instead of GET? That is a common way to allow huge requests. 
See e.g. 
[https://superuser.com/questions/149329/what-is-the-curl-command-line-syntax-to-do-a-post-request]
 

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  






[jira] [Commented] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630482#comment-16630482
 ] 

Jan Høydahl commented on SOLR-12814:


{quote}...the new metrics page in version 7.5 can't run its queries...
{quote}
Please be more specific on which metrics page you are referring to, and 
provide steps to reproduce. Is it the "Cloud->Nodes" page?

> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, httpparser, large, metrics, solr, 
> solrcloud, too, uri
> Attachments: longmetricsquery.txt
>
>
> If you have a lot of collections, like 50 or more, the new metrics page in 
> version 7.5 can't run its queries because the default 
> solr.jetty.request.header.size and solr.jetty.response.header.size values are 
> too small.
> If I up the header values from 8192 to 65536 the commands will work.
> command to change the defaults:
> {code:java}
> sed -i 's/\"solr.jetty.request.header.size\" 
> default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml 
> sed -i 's/\"solr.jetty.response.header.size\" 
> default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
> /opt/solr/server/etc/jetty.xml
> {code}
> before changing the header size log: 
> {code:java}
> 2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> 2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall 
> [admin] webapp=null path=/admin/metrics 
> params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
>  status=0 QTime=0
> 2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
> too large >8192
> {code}
> After changing the header size log:
> {code:java}
> attached as a file because it's very long and ugly{code}
> Is it possible to break up this command into batches so that it can run 
> without modifying the header sizes? 
> Thanks!
>  -Matt
>  






[jira] [Commented] (SOLR-12813) SolrCloud + Basic Authentication + subquery = 401 Exception

2018-09-27 Thread Igor Fedoryn (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630477#comment-16630477
 ] 

Igor Fedoryn commented on SOLR-12813:
-

I can't find an issue like "subquery doesn't work with Basic Authentication in 
SolrCloud".
Please give me a link to this issue so that I can track it.

> SolrCloud + Basic Authentication + subquery = 401 Exception
> ---
>
> Key: SOLR-12813
> URL: https://issues.apache.org/jira/browse/SOLR-12813
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security, SolrCloud
>Affects Versions: 6.4.1, 7.5
>Reporter: Igor Fedoryn
>Priority: Major
> Attachments: screen1.png, screen2.png
>
>
> Environment: * Solr 6.4.1
>  * Zookeeper 3.4.6
>  * Java 1.8
> Run Zookeeper
> Upload simple configuration wherein the Solr schema has fields for a 
> relationship between parent/child
> Run two Solr instances (2 nodes)
> Create the collection with 1 shard on each Solr node
>  
> Add a parent document to one shard and a child document to another shard.
> The response for 
> /select?q=ChildIdField:VALUE&fl=*,parents:[subquery]&parents.q=\{!term f=id 
> v=$row.ParentIdsField}
> is correct.
>  
> After that, add Basic Authentication with some user for the collection.
> Restart Solr or reload the Solr collection.
> If the simple request /select?q=*:* with authorization on the Solr server 
> succeeds, then run the previous request
> with authorization on the Solr server and you get the exception: "Solr HTTP 
> error: Unauthorized (401)"
>  
> Screens in the attachment.






[jira] [Resolved] (SOLR-12813) SolrCloud + Basic Authentication + subquery = 401 Exception

2018-09-27 Thread JIRA


 [ 
https://issues.apache.org/jira/browse/SOLR-12813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-12813.

Resolution: Duplicate

Closing as duplicate. Please search Jira for existing issues before creating a 
new one.

> SolrCloud + Basic Authentication + subquery = 401 Exception
> ---
>
> Key: SOLR-12813
> URL: https://issues.apache.org/jira/browse/SOLR-12813
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security, SolrCloud
>Affects Versions: 6.4.1, 7.5
>Reporter: Igor Fedoryn
>Priority: Major
> Attachments: screen1.png, screen2.png
>
>
> Environment: * Solr 6.4.1
>  * Zookeeper 3.4.6
>  * Java 1.8
> Run Zookeeper
> Upload simple configuration wherein the Solr schema has fields for a 
> relationship between parent/child
> Run two Solr instances (2 nodes)
> Create the collection with 1 shard on each Solr node
>  
> Add a parent document to one shard and a child document to another shard.
> The response for 
> /select?q=ChildIdField:VALUE&fl=*,parents:[subquery]&parents.q=\{!term f=id 
> v=$row.ParentIdsField}
> is correct.
>  
> After that, add Basic Authentication with some user for the collection.
> Restart Solr or reload the Solr collection.
> If the simple request /select?q=*:* with authorization on the Solr server 
> succeeds, then run the previous request
> with authorization on the Solr server and you get the exception: "Solr HTTP 
> error: Unauthorized (401)"
>  
> Screens in the attachment.






[JENKINS-EA] Lucene-Solr-7.x-Linux (64bit/jdk-12-ea+12) - Build # 2817 - Unstable!

2018-09-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2817/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Collection not found: cdcr-cluster2

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: cdcr-cluster2
at 
__randomizedtesting.SeedInfo.seed([ABC61D236C929072:EE1DEDC174BCDC30]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:484)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:463)
at 
org.apache.solr.cloud.cdcr.CdcrTestsUtil.waitForClusterToSync(CdcrTestsUtil.java:125)
at 
org.apache.solr.cloud.cdcr.CdcrTestsUtil.waitForClusterToSync(CdcrTestsUtil.java:118)
at 
org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir(CdcrBidirectionalTest.java:100)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Julien Massiera (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630406#comment-16630406
 ] 

Julien Massiera commented on SOLR-12798:


[~kwri...@metacarta.com], [~janhoy],

Actually, the provided example IS of type 2). As I mentioned, the handler used 
on the Solr side is a modified /update handler, not an /extract one; the name 
is misleading, I would have renamed it /update/no-tika. Here is its 
declaration in the solrconfig.xml file:
{code:java}

  
true
ignored_
ignored_
ignored_
datafari
  

{code}
It does not use Tika and understands literal.xxx parameters, so, from my point 
of view, there is no need to discuss this further...
 

> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K that therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Noble Paul on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests because on the Solr side we 
> get "pfountz Should not get here!" errors when we do, which generate HTTP 
> error code 500 responses. That should not happen either, in my 
> opinion.






[jira] [Updated] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread matthew medway (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matthew medway updated SOLR-12814:
--
Description: 
If you have a lot of collections, like 50 or more, the new metrics page in 
version 7.5 can't run its queries because the default 
solr.jetty.request.header.size and solr.jetty.response.header.size values are 
too small.

If I up the header values from 8192 to 65536 the commands will work.

command to change the defaults:
{code:java}
sed -i 's/\"solr.jetty.request.header.size\" 
default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml 
sed -i 's/\"solr.jetty.response.header.size\" 
default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml
{code}
before changing the header size log: 
{code:java}
2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin&version=2&key=solr.jvm:os.processCpuLoad&key=solr.node:CONTAINER.fs.coreRoot.usableSpace&key=solr.jvm:os.systemLoadAverage&key=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
{code}
Log after changing the header size:
{code:java}
attached as a file because it's very long and ugly{code}
Is it possible to break up this command into batches so that it can run without 
modifying the header sizes? 

Thanks!
 -Matt

 

  was:
If you have a lot of collections, like 50 or more, the new metrics page in 
version 7.5 can't run its queries because the default 
solr.jetty.request.header.size and solr.jetty.response.header.size values are 
too small.

If I up the header values from 8192 to 65536, the commands will work.

Command to change the defaults:
{code:java}
sed -i 's/\"solr.jetty.request.header.size\" 
default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml 
sed -i 's/\"solr.jetty.response.header.size\" 
default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml
{code}
Log before changing the header size:
{code:java}
2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
{code}
Log after changing the header size:
{code:java}
attached as a file because it's very long and ugly{code}
Is it possible to break up this command into batches so that it can run without 
modifying the header sizes? 

Thanks!
 -Matt

 


> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>Reporter: matthew medway
>Priority: Major
>  Labels: URI, header, http, 

[jira] [Updated] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread matthew medway (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matthew medway updated SOLR-12814:
--
Description: 
If you have a lot of collections, like 50 or more, the new metrics page in 
version 7.5 can't run its queries because the default 
solr.jetty.request.header.size and solr.jetty.response.header.size values are 
too small.

If I up the header values from 8192 to 65536, the commands will work.

Command to change the defaults:
{code:java}
sed -i 's/\"solr.jetty.request.header.size\" 
default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml 
sed -i 's/\"solr.jetty.response.header.size\" 
default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml
{code}
Log before changing the header size:
{code:java}
2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
{code}
Log after changing the header size:
{code:java}
attached as a file because it's very long and ugly{code}
Is it possible to break up this command into batches so that it can run without 
modifying the header sizes? 

Thanks!
 -Matt

 

  was:
If you have a lot of collections, like 50 or more, the new metrics page in 
version 7.5 can't run its queries because the default 
solr.jetty.request.header.size and solr.jetty.response.header.size values are 
too small.

If I up the header values from 8192 to 65536, the commands will work.

Command to change the defaults:
{code:java}
sed -i 's/\"solr.jetty.request.header.size\" 
default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml 
sed -i 's/\"solr.jetty.response.header.size\" 
default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml
{code}
Log before changing the header size:
{code:java}
2018-09-27 13:06:45.430 WARN (qtp534906248-16) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
{code}
Log after changing the header size:
{code:java}
attached as a file because it's very long and ugly{code}
Is it possible to break up this command into batches so that it can run without 
modifying the header sizes? 

Thanks!
 -Matt

 


> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper and 3x solr cloud servers,
> 50 test collections with 0 data in them
>

[jira] [Updated] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread matthew medway (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

matthew medway updated SOLR-12814:
--
Description: 
If you have a lot of collections, like 50 or more, the new metrics page in 
version 7.5 can't run its queries because the default 
solr.jetty.request.header.size and solr.jetty.response.header.size values are 
too small.

If I up the header values from 8192 to 65536, the commands will work.

Command to change the defaults:
{code:java}
sed -i 's/\"solr.jetty.request.header.size\" 
default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml 
sed -i 's/\"solr.jetty.response.header.size\" 
default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml
{code}
Log before changing the header size:
{code:java}
2018-09-27 13:06:45.430 WARN (qtp534906248-16) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
{code}
Log after changing the header size:
{code:java}
attached as a file because it's very long and ugly{code}
Is it possible to break up this command into batches so that it can run without 
modifying the header sizes? 

Thanks!
 -Matt

 

  was:
If you have a lot of collections, like 50 or more, the new metrics page in 
version 7.5 can't run its queries because the default 
solr.jetty.request.header.size and solr.jetty.response.header.size values are 
too small.

If I up the header values from 8192 to 65536, the commands will work.

Command to change the defaults:
{code:java}
sed -i 's/\"solr.jetty.request.header.size\" 
default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml
sed -i 's/\"solr.jetty.response.header.size\" 
default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml
{code}
Log before changing the header size:
{code:java}
2018-09-27 13:06:45.430 WARN (qtp534906248-16) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
{code}
Log after changing the header size:
{code:java}
attached as a file because it's very long and ugly{code}
Is it possible to break up this command into batches so that it can run without 
modifying the header sizes? 

Thanks!
-Matt

 


> Metrics page causing "HttpParser URI is too large >8192"
> 
>
> Key: SOLR-12814
> URL: https://issues.apache.org/jira/browse/SOLR-12814
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics, SolrCloud
>Affects Versions: 7.5
> Environment: 3x zookeeper 

[jira] [Created] (SOLR-12814) Metrics page causing "HttpParser URI is too large >8192"

2018-09-27 Thread matthew medway (JIRA)
matthew medway created SOLR-12814:
-

 Summary: Metrics page causing "HttpParser URI is too large >8192"
 Key: SOLR-12814
 URL: https://issues.apache.org/jira/browse/SOLR-12814
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics, SolrCloud
Affects Versions: 7.5
 Environment: 3x zookeeper and 3x solr cloud servers,

50 test collections with 0 data in them
Reporter: matthew medway
 Attachments: longmetricsquery.txt

If you have a lot of collections, like 50 or more, the new metrics page in 
version 7.5 can't run its queries because the default 
solr.jetty.request.header.size and solr.jetty.response.header.size values are 
too small.

If I up the header values from 8192 to 65536, the commands will work.

Command to change the defaults:
{code:java}
sed -i 's/\"solr.jetty.request.header.size\" 
default=\"8192\"/\"solr.jetty.request.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml
sed -i 's/\"solr.jetty.response.header.size\" 
default=\"8192\"/\"solr.jetty.response.header.size\" default=\"65536\"/g' 
/opt/solr/server/etc/jetty.xml
{code}
Log before changing the header size:
{code:java}
2018-09-27 13:06:45.430 WARN (qtp534906248-16) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:06:45.434 INFO (qtp534906248-14) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:07:45.527 WARN (qtp534906248-17) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:07:45.530 INFO (qtp534906248-16) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:08:45.621 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
2018-09-27 13:08:45.625 INFO (qtp534906248-15) [ ] o.a.s.s.HttpSolrCall [admin] 
webapp=null path=/admin/metrics 
params={wt=javabin=2=solr.jvm:os.processCpuLoad=solr.node:CONTAINER.fs.coreRoot.usableSpace=solr.jvm:os.systemLoadAverage=solr.jvm:memory.heap.used}
 status=0 QTime=0
2018-09-27 13:09:45.725 WARN (qtp534906248-20) [ ] o.e.j.h.HttpParser URI is 
too large >8192
{code}
Log after changing the header size:
{code:java}
attached as a file because it's very long and ugly{code}
Is it possible to break up this command into batches so that it can run without 
modifying the header sizes? 

Thanks!
-Matt
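
Until the page itself batches its requests, a client-side workaround is to 
split the metric keys across several smaller /admin/metrics calls. A rough 
SolrJ sketch (the base URL, key list, and batch size are illustrative):
{code:java}
import java.util.Arrays;
import java.util.List;

import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;
import org.apache.solr.common.util.NamedList;

public class BatchedMetricsQuery {
  public static void main(String[] args) throws Exception {
    // Illustrative keys; a 50-collection cluster would have far more.
    List<String> keys = Arrays.asList(
        "solr.jvm:os.processCpuLoad",
        "solr.node:CONTAINER.fs.coreRoot.usableSpace",
        "solr.jvm:os.systemLoadAverage",
        "solr.jvm:memory.heap.used");
    int batchSize = 2; // keep each request URI comfortably under 8192 bytes

    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      for (int i = 0; i < keys.size(); i += batchSize) {
        ModifiableSolrParams params = new ModifiableSolrParams();
        for (String key : keys.subList(i, Math.min(i + batchSize, keys.size()))) {
          params.add("key", key);
        }
        // One small GET per batch instead of a single oversized URI.
        NamedList<Object> rsp = client.request(
            new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/metrics", params));
        System.out.println(rsp);
      }
    }
  }
}
{code}
Raising solr.jetty.request.header.size, as above, remains the simpler fix when 
you control jetty.xml; batching just avoids depending on that override.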

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 2078 - Still Unstable!

2018-09-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/2078/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.cloud.MoveReplicaHDFSTest.testFailedMove

Error Message:
No live SolrServers available to handle this 
request:[http://127.0.0.1:48059/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:48410/solr/MoveReplicaHDFSTest_failed_coll_true]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this 
request:[http://127.0.0.1:48059/solr/MoveReplicaHDFSTest_failed_coll_true, 
http://127.0.0.1:48410/solr/MoveReplicaHDFSTest_failed_coll_true]
at 
__randomizedtesting.SeedInfo.seed([170C1ACFF2FD93F3:BDC1C93D452E4623]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:462)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1107)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:994)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at 
org.apache.solr.cloud.MoveReplicaTest.testFailedMove(MoveReplicaTest.java:301)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1742)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:935)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:971)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:985)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:944)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:830)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:880)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:891)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS-EA] Lucene-Solr-master-Linux (64bit/jdk-12-ea+12) - Build # 22932 - Still Unstable!

2018-09-27 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22932/
Java: 64bit/jdk-12-ea+12 -XX:+UseCompressedOops -XX:+UseParallelGC

34 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.client.solrj.io.stream.StreamDecoratorTest

Error Message:
14 threads leaked from SUITE scope at 
org.apache.solr.client.solrj.io.stream.StreamDecoratorTest: 1) 
Thread[id=334, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)2) 
Thread[id=1411, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)3) 
Thread[id=341, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)4) 
Thread[id=1405, 
name=TEST-StreamDecoratorTest.testExecutorStream-seed#[D50D2FC9C1C37453]-SendThread(127.0.0.1:35223),
 state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)5) 
Thread[id=1407, name=zkConnectionManagerCallback-648-thread-1, state=WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1054)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1114)
 at 
java.base@12-ea/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)6) 
Thread[id=335, 
name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[D50D2FC9C1C37453]-SendThread(127.0.0.1:35223),
 state=TIMED_WAITING, group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.zookeeper.client.StaticHostProvider.next(StaticHostProvider.java:105)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1000)
 at 
app//org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1063)7) 
Thread[id=1419, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)8) 
Thread[id=340, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)9) 
Thread[id=1410, name=Connection evictor, state=TIMED_WAITING, 
group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/java.lang.Thread.sleep(Native Method) at 
app//org.apache.http.impl.client.IdleConnectionEvictor$1.run(IdleConnectionEvictor.java:66)
 at java.base@12-ea/java.lang.Thread.run(Thread.java:835)   10) 
Thread[id=336, 
name=TEST-StreamDecoratorTest.testParallelExecutorStream-seed#[D50D2FC9C1C37453]-EventThread,
 state=WAITING, group=TGRP-StreamDecoratorTest] at 
java.base@12-ea/jdk.internal.misc.Unsafe.park(Native Method) at 
java.base@12-ea/java.util.concurrent.locks.LockSupport.park(LockSupport.java:194)
 at 
java.base@12-ea/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2081)
 at 
java.base@12-ea/java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:433)
 at 

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 326 - Still Failing

2018-09-27 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/326/

No tests ran.

Build Log:
[...truncated 23296 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2430 links (1982 relative) to 3170 anchors in 245 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.6.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:

[jira] [Commented] (SOLR-12648) Autoscaling framework based replica placement is not used unless a policy is specified or non-empty cluster policy exists

2018-09-27 Thread Shalin Shekhar Mangar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630220#comment-16630220
 ] 

Shalin Shekhar Mangar commented on SOLR-12648:
--

Patch that builds on master after the SOLR-12756 commit.

> Autoscaling framework based replica placement is not used unless a policy is 
> specified or non-empty cluster policy exists
> -
>
> Key: SOLR-12648
> URL: https://issues.apache.org/jira/browse/SOLR-12648
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Affects Versions: 7.4
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12648.patch, SOLR-12648.patch, SOLR-12648.patch
>
>
> Assign.java has a piece of code to decide which placement framework to use 
> (we have three today):
> {code}
> if (rulesMap == null && policyName == null && 
> autoScalingConfig.getPolicy().getClusterPolicy().isEmpty())
> {code}
> Note that the presence of cluster preferences is not a criterion.  So even if 
> a user adds cluster preferences, they will not be respected unless a policy 
> also exists.
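
In other words, the guard would also have to consider cluster preferences 
before falling back to legacy placement. A sketch of the adjusted condition, 
with the getClusterPreferences() accessor assumed here for illustration:
{code:java}
// Fall back to legacy replica placement only when there are no rules, no
// named policy, no cluster policy, and no cluster preferences either.
if (rulesMap == null && policyName == null
    && autoScalingConfig.getPolicy().getClusterPolicy().isEmpty()
    && autoScalingConfig.getPolicy().getClusterPreferences().isEmpty()) {
  // legacy (non-policy) placement path
}
{code}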



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-12648) Autoscaling framework based replica placement is not used unless a policy is specified or non-empty cluster policy exists

2018-09-27 Thread Shalin Shekhar Mangar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-12648:
-
Attachment: SOLR-12648.patch

> Autoscaling framework based replica placement is not used unless a policy is 
> specified or non-empty cluster policy exists
> -
>
> Key: SOLR-12648
> URL: https://issues.apache.org/jira/browse/SOLR-12648
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, SolrCloud
>Affects Versions: 7.4
>Reporter: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 7.6, master (8.0)
>
> Attachments: SOLR-12648.patch, SOLR-12648.patch, SOLR-12648.patch
>
>
> Assign.java has a piece of code to decide which placement framework to use 
> (we have three today):
> {code}
> if (rulesMap == null && policyName == null && 
> autoScalingConfig.getPolicy().getClusterPolicy().isEmpty())
> {code}
> Note that the presence of cluster preferences is not a criterion.  So even if 
> a user adds cluster preferences, they will not be respected unless a policy 
> also exists.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-12798) Structural changes in SolrJ since version 7.0.0 have effectively disabled multipart post

2018-09-27 Thread Karl Wright (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16630203#comment-16630203
 ] 

Karl Wright edited comment on SOLR-12798 at 9/27/18 11:04 AM:
--

[~janhoy], the example we provided is using type (1) output configuration, as 
Julien noted.  Do you want a type (2) example?  It will not change the need for 
multipart post.



was (Author: kwri...@metacarta.com):
[~janhoy], the example we provided is using type (1), as Julien noted.  Do you 
want a type (2) example?


> Structural changes in SolrJ since version 7.0.0 have effectively disabled 
> multipart post
> 
>
> Key: SOLR-12798
> URL: https://issues.apache.org/jira/browse/SOLR-12798
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ
>Affects Versions: 7.4
>Reporter: Karl Wright
>Assignee: Karl Wright
>Priority: Major
> Attachments: HOT Balloon Trip_Ultra HD.jpg, 
> SOLR-12798-approach.patch, solr-update-request.txt
>
>
> Project ManifoldCF uses SolrJ to post documents to Solr.  When upgrading from 
> SolrJ 7.0.x to SolrJ 7.4, we encountered significant structural changes to 
> SolrJ's HttpSolrClient class that seemingly disable any use of multipart 
> post.  This is critical because ManifoldCF's documents often contain metadata 
> in excess of 4K, which therefore cannot be stuffed into a URL.
> The changes in question seem to have been performed by Paul Noble on 
> 10/31/2017, with the introduction of the RequestWriter mechanism.  Basically, 
> if a request has a RequestWriter, it is used exclusively to write the 
> request, and that overrides the stream mechanism completely.  I haven't 
> chased it back to a specific ticket.
> ManifoldCF's usage of SolrJ involves the creation of 
> ContentStreamUpdateRequests for all posts meant for Solr Cell, and the 
> creation of UpdateRequests for posts not meant for Solr Cell (as well as for 
> delete and commit requests).  For our release cycle that is taking place 
> right now, we're shipping a modified version of HttpSolrClient that ignores 
> the RequestWriter when dealing with ContentStreamUpdateRequests.  We 
> apparently cannot use multipart for all requests, because when we do, we get 
> "pfountz Should not get here!" errors on the Solr side, which generate HTTP 
> error code 500 responses.  That should not happen either, in my 
> opinion.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


