Probably more than you want to know about commits, hard and soft:
https://lucidworks.com/blog/2013/08/23/understanding-transaction-logs-softcommit-and-commit-in-sorlcloud/
Best,
Erick
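For reference, here is a minimal SolrJ sketch of the two commit flavors the article describes; the HttpSolrClient class and the core URL are assumptions for illustration, not something taken from this thread:

    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;

    public class CommitKinds {
        public static void main(String[] args) throws Exception {
            // Assumed core URL; adjust for your install.
            SolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1");

            // Hard commit: flushes index segments to disk so the updates no longer
            // depend on the transaction log, and (by default) opens a new searcher.
            client.commit();

            // Soft commit: makes recent documents searchable without flushing
            // segments to disk. Signature is commit(waitFlush, waitSearcher, softCommit).
            client.commit(true, true, true);

            client.close();
        }
    }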
On 3/10/2016 4:06 PM, Steven White wrote:
> Last question on this topic (maybe): wouldn't a commit at the very end take
> too long on 1 billion items? Wouldn't a commit every, let's say, 10,000
> items be more efficient?
The behavior that I have witnessed suggests that commit speed on a
well-tuned system does not depend much on how many documents have been
indexed since the last commit.
Got it.
Last question on this topic (maybe): wouldn't a commit at the very end take
too long on 1 billion items? Wouldn't a commit every, let's say, 10,000
items be more efficient?
Steve
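For comparison, the periodic-commit variant being asked about would look roughly like the helper below; the 10,000 threshold and the counter are purely illustrative, assuming a SolrJ client:

    import java.util.List;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class PeriodicCommit {
        private static long docsSinceCommit = 0;

        // Sends one batch and issues an explicit hard commit every ~10,000 documents.
        static void addBatch(SolrClient client, List<SolrInputDocument> batch) throws Exception {
            client.add(batch);
            docsSinceCommit += batch.size();
            if (docsSinceCommit >= 10000) {
                client.commit();
                docsSinceCommit = 0;
            }
        }
    }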
Thank you for your insight, Shawn; it is always valuable.
Question: if I wait until the very end to issue a commit, wouldn't that mean I
could lose everything if there was an OOM or some other server issue? I
don't have any commit settings in my solrconfig.xml.
Steve
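One way to hedge against that kind of loss without hand-managing commits is commitWithin on the add call, which asks Solr to commit on its own within a time window; a minimal sketch, assuming a SolrJ client and a 60-second window chosen only as an example:

    import java.util.List;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class AddWithCommitWithin {
        // Asks Solr to commit these documents itself within 60 seconds of receiving them.
        static void send(SolrClient client, List<SolrInputDocument> docs) throws Exception {
            client.add(docs, 60000);
        }
    }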
Hi folks,
I'm indexing about 1 billion records (each is a small Solr doc, no more than
20 bytes). The logic is basically as follows:
while (data-of-1-billion) {
    read 1000 items from the DB
    at every 100 items, send those 100 items to Solr, i.e.:
        solrConnection.add(docs);
}
solrConnection.commit();
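A concrete SolrJ version of that loop might look like the sketch below, with the single hard commit kept at the very end; the core URL, field names, and the fetchNextBatch() database helper are all assumptions made for illustration:

    import java.util.ArrayList;
    import java.util.List;
    import org.apache.solr.client.solrj.SolrClient;
    import org.apache.solr.client.solrj.impl.HttpSolrClient;
    import org.apache.solr.common.SolrInputDocument;

    public class BulkIndexer {
        public static void main(String[] args) throws Exception {
            // Assumed core URL; adjust for your install.
            SolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1");
            List<SolrInputDocument> batch = new ArrayList<>();
            List<Item> rows;
            // fetchNextBatch(1000) stands in for "read 1000 items from the DB".
            while (!(rows = fetchNextBatch(1000)).isEmpty()) {
                for (Item row : rows) {
                    SolrInputDocument doc = new SolrInputDocument();
                    doc.addField("id", row.id);        // assumed field names
                    doc.addField("value", row.value);
                    batch.add(doc);
                    if (batch.size() == 100) {         // send every 100 docs
                        client.add(batch);
                        batch.clear();
                    }
                }
            }
            if (!batch.isEmpty()) {
                client.add(batch);                     // flush the remainder
            }
            client.commit();                           // single hard commit at the very end
            client.close();
        }

        // Hypothetical DB row and reader; replace with your own data access code.
        static class Item { String id; String value; }
        static List<Item> fetchNextBatch(int size) { return new ArrayList<>(); }
    }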