Re: Dataimport handler showing idle status with multiple shards

2017-12-05 Thread Sarah Weissman


From: Shawn Heisey <elyog...@elyograg.org>
Reply-To: "solr-user@lucene.apache.org" <solr-user@lucene.apache.org>
Date: Tuesday, December 5, 2017 at 1:31 PM
To: "solr-user@lucene.apache.org" <solr-user@lucene.apache.org>
Subject: Re: Dataimport handler showing idle status with multiple shards

On 12/5/2017 10:47 AM, Sarah Weissman wrote:
I’ve recently been using the dataimport handler to import records from a 
database into a Solr cloud collection with multiple shards. I have 6 dataimport 
handlers configured on 6 different paths all running simultaneously against the 
same DB. I’ve noticed that when I do this I often get “idle” status from the 
DIH even when the import is still running. The percentage of the time I get an 
“idle” response seems proportional to the number of shards. I.e., with 1 shard 
it always shows me non-idle status, with 2 shards I see idle about half the 
time I check the status, with 96 shards it seems to be showing idle almost all 
the time. I can see the size of each shard increasing, so I’m sure the import 
is still going.

I recently switched from 6.1 to 7.1 and I don’t remember this happening in 6.1. 
Does anyone know why the DIH would report idle when it’s running?

e.g.:
curl http://myserver:8983/solr/collection/dataimport6



To use DIH with SolrCloud, you should be sending your request directly
to a shard replica core, not the collection, so that you can be
absolutely certain that the import command and the status command are
going to the same place.  You MIGHT need to also have a distrib=false
parameter on the request, but I do not know whether that is required to
prevent the load balancing on the dataimport handler.
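
For illustration, a direct-to-core request might look roughly like this (the core name below is hypothetical and depends on how the collection was created; distrib=false is included only per the caveat above):

curl "http://myserver:8983/solr/collection_shard1_replica_n1/dataimport6?command=full-import&distrib=false"
curl "http://myserver:8983/solr/collection_shard1_replica_n1/dataimport6?command=status&distrib=false"

The first request starts the import on that specific core, and the second checks status on the same core, so the status request cannot be routed to a different replica.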



Thanks for the information, Shawn. I am relatively new to SolrCloud, and I am used to running the dataimport from the admin dashboard, where it happens at the collection level, so I find it surprising that the right way to do this is at the core level. So, if I want to be able to check the status of my data import for N cores, would I need to create N different data import configs that manually partition the collection and start each config on a different core? That seems like it could get confusing. And then if I wanted to grow or shrink my shards, I'd have to rejigger my data import configs every time. I kind of expect a distributed index to hide these details from me.

I only have one node at the moment, and I don't understand SolrCloud's internals well enough to know what it means for the data import to be running on a shard vs. a node. It would be nice if a status query at least told you something, like the number of documents last indexed on that core, even if nothing is currently running. That way I could at least extrapolate how much longer the operation will take.
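
For reference, a minimal sketch of polling the status handler on each replica core individually, assuming core names of the form collection_shardN_replica_nM (hypothetical; the actual names are listed in the admin UI or the Collections API) and the same dataimport6 handler path, with distrib=false per Shawn's suggestion:

for core in collection_shard1_replica_n1 collection_shard2_replica_n1; do
  echo "== $core =="
  curl -s "http://myserver:8983/solr/$core/dataimport6?command=status&distrib=false"
done

Each response is scoped to one core, so an "idle" answer from one replica does not hide an import that is still running on another.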



Dataimport handler showing idle status with multiple shards

2017-12-05 Thread Sarah Weissman
Hi,

I’ve recently been using the dataimport handler to import records from a 
database into a Solr cloud collection with multiple shards. I have 6 dataimport 
handlers configured on 6 different paths all running simultaneously against the 
same DB. I’ve noticed that when I do this I often get “idle” status from the 
DIH even when the import is still running. The percentage of the time I get an 
“idle” response seems proportional to the number of shards. I.e., with 1 shard 
it always shows me non-idle status, with 2 shards I see idle about half the 
time I check the status, with 96 shards it seems to be showing idle almost all 
the time. I can see the size of each shard increasing, so I’m sure the import 
is still going.

I recently switched from 6.1 to 7.1 and I don’t remember this happening in 6.1. 
Does anyone know why the DIH would report idle when it’s running?

e.g.:
curl http://myserver:8983/solr/collection/dataimport6
{
  "responseHeader":{
    "status":0,
    "QTime":0},
  "initArgs":[
    "defaults",[
      "config","data-config6.xml"]],
  "status":"idle",
  "importResponse":"",
  "statusMessages":{}}

Thanks,
Sarah