This issue was finally resolved. Adding an explicit host-to-IP address mapping in the /etc/hosts file did the trick. The one strange thing is that, before the hosts file entry was made, we were unable to reproduce the 5-second delay from the Linux shell by running a simple nslookup <host name>. In any case, the issue now stands resolved. Thanks to all.
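In case it helps someone else, the fix amounted to a one-line /etc/hosts entry of roughly this form, followed by an nslookup to confirm that resolution was immediate. The IP address and host name below are placeholders, not our actual values:

    # /etc/hosts  (placeholder IP and host name)
    10.0.0.15   solr-node1.example.com   solr-node1

    # quick sanity check that the name now resolves instantly
    $ nslookup solr-node1.example.com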
On the other discussion item, about the QTime in the SolrQueryResponse NOT matching the QTime in the solr.log, here is what I found:

1. If the query from CloudSolrServer hits the right node (i.e. the node containing the shard with the desired dataset), the QTimes match.

2. If the query from CloudSolrServer hits a node (NodeX) that does NOT contain our data, Solr routes the request to the right node (NodeY) to fetch the data. In such situations a QTime is logged on both nodes that the query passes through, albeit with different values. The QTime logged on NodeX matches what we see in the SolrQueryResponse, and this time includes the inter-node communication between NodeX and NodeY.

In essence this means that the QTime in the SolrQueryResponse is NOT always a pure measure of query execution time; it can also include time spent on inter-node communication. (A small SolrJ sketch of how we compared the server-reported QTime with the client-observed elapsed time is at the end of this mail.)

P.S. All of the above statements were made in the context of a sharding strategy that co-locates a single customer's documents in a single shard.

Here is a short wishlist based on the experience of debugging this issue:

1. I wish the SolrQueryResponse could contain a list of the node names / shard-replica names that a request passed through while processing the query (when debug is turned ON).

2. I wish the SolrQueryResponse could provide a breakdown of QTime on each of the individual nodes / shard-replicas, instead of returning a single value for QTime.
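For reference, here is a minimal SolrJ sketch of the kind of check we ran. Treat it as an illustration rather than our exact code: the ZooKeeper ensemble, collection name and query are placeholders, and the class name is made up.

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.CloudSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;

    public class QTimeCheck {
        public static void main(String[] args) throws Exception {
            // ZooKeeper ensemble and collection name are placeholders
            CloudSolrServer server = new CloudSolrServer("zk1:2181,zk2:2181,zk3:2181");
            server.setDefaultCollection("customer_collection");

            // Example query; in our setup all docs for one customer live in one shard
            SolrQuery q = new SolrQuery("customer_id:12345");

            long start = System.currentTimeMillis();
            QueryResponse rsp = server.query(q);
            long wallClock = System.currentTimeMillis() - start;

            // QTime reported by the node that handled the request; as noted above,
            // this can include time spent forwarding to the node that owns the shard
            System.out.println("QTime (responseHeader) : " + rsp.getQTime() + " ms");
            // Time observed by the SolrJ client, including the network round trip
            System.out.println("Elapsed (client side)  : " + rsp.getElapsedTime() + " ms");
            System.out.println("Wall clock             : " + wallClock + " ms");

            server.shutdown();
        }
    }

Comparing these numbers with the QTime lines in solr.log on NodeX and NodeY is what showed us which node actually owned the shard for a given request.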