haha, can't have that now!
--
*John Blythe*
Product Manager & Lead Developer
251.605.3071 | j...@curvolabs.com
www.curvolabs.com
58 Adams Ave
Evansville, IN 47713
On Fri, Aug 11, 2017 at 2:44 PM, Erick Erickson wrote:
Thanks for closing this out, I was breaking out in hives ;)
Erick
On Fri, Aug 11, 2017 at 11:31 AM, John Blythe wrote:
Looks like part of our nightly processing was restarting the Solr server
before all indexing was done, because of a blunt-object approach of doing
so at designated times, doh!
On Tue, Aug 8, 2017 at 9:35 PM John Blythe wrote:
Thanks Erick. I don't think all of those ifs are in place. Must be
something in our nightly process that is conflicting. Will dive in tomorrow
to figure out and report back.
On Tue, Aug 8, 2017 at 1:27 PM Erick Erickson wrote:
First, are you absolutely sure you're committing before shutting down?
A hard commit in this case; openSearcher shouldn't matter.
SolrCloud? And if not SolrCloud, how are you shutting Solr down?
"kill -9" is evil.
If you have transaction logs enabled then you shouldn't be losing
docs, any
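A minimal sketch of the kind of safe shutdown sequence Erick is describing: issue an explicit hard commit through Solr's update handler, then stop Solr gracefully with the control script rather than killing the process. The core name `mycore` and port `8983` here are assumptions for illustration; adjust for your install.

```shell
#!/bin/sh
# Assumed core name and port -- substitute your own.
CORE=mycore
PORT=8983

# Explicit hard commit: flushes in-memory indexed docs to stable
# storage so they survive a restart.
curl "http://localhost:${PORT}/solr/${CORE}/update?commit=true"

# Graceful shutdown via the Solr control script -- never "kill -9",
# which can lose uncommitted documents.
bin/solr stop -p "${PORT}"

# Restart only after the stop has fully completed.
bin/solr start -p "${PORT}"
```

With `updateLog` enabled in `solrconfig.xml`, uncommitted documents should also be replayed from the transaction log on startup, but an explicit commit before a planned restart is still the safer habit.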
hi all.
i have a core that contains about 22 million documents. when the Solr
server is restarted it drops to 200-400k. the dashboard says that it's both
optimized and current.
are there config issues i need to address in Solr or the server? not really
sure where to begin in hunting this down.