[ https://issues.apache.org/jira/browse/SOLR-3504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16246444#comment-16246444 ]
Varun Thacker commented on SOLR-3504:
-------------------------------------
I haven't looked closely, but if we have over 2B docs spread across multiple
shards in a single collection and we run a match-all query, does Solr deal
with it correctly?
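
For reference, a minimal SolrJ sketch of that scenario: a rows=0 match-all
query against a SolrCloud collection, reading back the aggregated hit count.
The ZooKeeper address ("zk1:2181") and collection name ("bigcollection") are
hypothetical. Note that getNumFound() already returns a long, so a cross-shard
total above Integer.MAX_VALUE is at least representable in the response;
whether every code path handles it correctly is exactly the open question.

    import java.util.Collections;
    import java.util.Optional;

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrClient;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class MatchAllOverTwoBillion {
        public static void main(String[] args) throws Exception {
            // Hypothetical ZooKeeper host and collection name.
            try (CloudSolrClient client = new CloudSolrClient.Builder(
                    Collections.singletonList("zk1:2181"), Optional.empty()).build()) {
                client.setDefaultCollection("bigcollection");

                SolrQuery query = new SolrQuery("*:*");
                query.setRows(0); // only the aggregated hit count matters here

                QueryResponse rsp = client.query(query);
                // numFound is a long, so a total above Integer.MAX_VALUE
                // is representable in the response itself.
                System.out.println("numFound = " + rsp.getResults().getNumFound());
            }
        }
    }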
> Clearly document the limit for the maximum number of documents in a single
> index
> --------------------------------------------------------------------------------
>
> Key: SOLR-3504
> URL: https://issues.apache.org/jira/browse/SOLR-3504
> Project: Solr
> Issue Type: Improvement
> Components: documentation, update
> Affects Versions: 3.6
> Reporter: Jack Krupansky
> Assignee: Cassandra Targett
> Priority: Minor
> Fix For: 7.2, master (8.0)
>
>
> Although the actual limit to the number of documents supported by a Solr
> implementation depends on the number of shards, unless the user is intimately
> familiar with the implementation of Lucene, they may not realize that a
> single Solr index (single shard, single core) is limited to approximately
> 2.14 billion documents regardless of their processing power or memory. This
> limit should be clearly documented for the Solr user.
> Granted, users should be strongly discouraged from attempting to create a
> single, unsharded index of that size, but they certainly should not have to
> find out about the Lucene limit by accident.
> A subsequent issue will recommend that Solr detect and appropriately report
> to the user when and if this limit is hit.
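
As a concrete illustration of the ceiling described above: Lucene caps a
single index (single shard, single core) slightly below Integer.MAX_VALUE
documents, and since Lucene 5 the exact value is exposed as
IndexWriter.MAX_DOCS (Integer.MAX_VALUE - 128). A minimal sketch, assuming
lucene-core on the classpath:

    import org.apache.lucene.index.IndexWriter;

    public class PerIndexDocLimit {
        public static void main(String[] args) {
            // The hard per-index (per-shard, per-core) document ceiling.
            System.out.println("Integer.MAX_VALUE    = " + Integer.MAX_VALUE);    // 2147483647
            System.out.println("IndexWriter.MAX_DOCS = " + IndexWriter.MAX_DOCS); // 2147483519
        }
    }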