[
https://issues.apache.org/jira/browse/SOLR-7681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14588894#comment-14588894
]
Hoss Man commented on SOLR-7681:
--------------------------------
HTTP proxies can cache GET requests more easily, reducing load on the Solr
servers when clients frequently execute the same queries. It's also a lot
easier to make sense of what's going on in log files.
I think the best "solution" is:
* figure out how to increase this limit in the jetty stack
* figure out how to improve the HTTP Status error message returned when the
limit is exceeded anyway to make it clear what the problem is (and suggest
using POST).
* make sure all of the various HTTP based SolrClient implementations make it
easy to use POST on any/all requests so that people who don't care about
caching can use it (and make sure we have a test that POST is in fact used in
this case).
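To make the limit concrete: Jetty's default request header size is 8192 bytes, and a GET request line longer than that is rejected with "413 FULL head" before Solr ever parses the query. A rough, self-contained sketch (the collection path and the 1500-ID filter are purely illustrative, not from this issue) of how easily a realistic query blows past that limit:

```python
from urllib.parse import urlencode

# Build a query with a large OR clause, e.g. filtering on ~1500 document IDs.
ids = ["doc%05d" % i for i in range(1500)]
params = {"q": "id:(" + " OR ".join(ids) + ")", "wt": "json"}

query_string = urlencode(params)
request_line = "GET /solr/collection1/select?" + query_string + " HTTP/1.1"

# Jetty's default requestHeaderSize is 8192 bytes; a request line this long
# triggers the opaque "413 FULL head" response described below.
print(len(request_line), len(request_line) > 8192)
```

Sending the same parameters in the request body sidesteps the limit entirely; in SolrJ that is a matter of passing {{SolrRequest.METHOD.POST}} to the {{query}} call rather than relying on the default GET.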
> HttpSolrClient fails with a confusing error when a GET request is too big
> -------------------------------------------------------------------------
>
> Key: SOLR-7681
> URL: https://issues.apache.org/jira/browse/SOLR-7681
> Project: Solr
> Issue Type: Bug
> Reporter: Ramkumar Aiyengar
> Priority: Minor
>
> If a request is sent with too long an URL for GET, the Solr server responds
> as follows:
> {code}
> HTTP/1.1 413 FULL head
> Content-Length: 0
> Connection: close
> Server: Jetty(8.1.10.v20130312)
> {code}
> {{oascsi.HttpSolrServer.executeMethod}} currently goes ahead and tries to
> parse a {{Content-Type}} header in such a case and ends up with a
> {{SolrServerException: Error executing query}} caused by
> {{org.apache.http.ParseException: Invalid content type}}.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)