Re: Flush activity and dropped messages

2016-08-26 Thread Patrick McFadin
It's not that your disks are getting full. I suspect you don't have enough
throughput to handle the load that compaction and memtable flushing
produce. Blocked flush writers are almost always a disk problem.

Any storage with the words SAN, NAS, NFS, or SATA in its description is going
to make your life miserable. Avoid it at all costs.

Take a look at this doc and scan down to the section on disks:
https://tobert.github.io/pages/als-cassandra-21-tuning-guide.html
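
As a rough sanity check, something like the sketch below can poll nodetool
tpstats and flag when the FlushWriter pool is blocked or its all-time-blocked
counter keeps climbing. It is only illustrative and assumes the usual 2.x
tpstats column order (Active, Pending, Completed, Blocked, All time blocked):

import subprocess
import time

def flushwriter_stats():
    # Run `nodetool tpstats` and pull out the FlushWriter row.
    out = subprocess.check_output(["nodetool", "tpstats"]).decode()
    for line in out.splitlines():
        parts = line.split()
        if parts and parts[0] == "FlushWriter":
            # Columns assumed: name, active, pending, completed, blocked, all-time blocked
            return int(parts[4]), int(parts[5])
    return None

if __name__ == "__main__":
    last_all_time = None
    while True:
        stats = flushwriter_stats()
        if stats is not None:
            blocked, all_time = stats
            if blocked or (last_all_time is not None and all_time > last_all_time):
                print("FlushWriter blocked=%d, all-time blocked=%d -- flushes are backing up"
                      % (blocked, all_time))
            last_all_time = all_time
        time.sleep(10)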

Patrick


On Fri, Aug 26, 2016 at 9:07 AM, Vasileios Vlachos <
vasileiosvlac...@gmail.com> wrote:

> Hi Patrick and thanks for your reply,
>
> We are monitoring disk usage (among other things) and we don't seem to be
> running out of space at the moment. We have separate partitions/disks for
> the commit log and data. Which one do you suspect, and why?
>
> Regards,
> Vasilis
>
> On 25 Aug 2016 4:01 pm, "Patrick McFadin"  wrote:
>
> This looks like you've run out of disk. What are your hardware specs?
>
> Patrick
>
>
> On Thursday, August 25, 2016, Benedict Elliott Smith 
> wrote:
>
>> The simple answer is that you should upgrade from 2.0 to avoid this
>> behaviour.  You are correct that when the commit log gets full the memtables
>> are flushed to make room.  2.0 has several interrelated problems here
>> though:
>>
>> There is a maximum flush queue length property (I cannot recall its
>> name), and once there are this many memtables flushing, no more writes can
>> take place on the box, whatsoever.  You cannot simply increase this length,
>> though, because that shrinks the maximum size of any single memtable (it
>> is, iirc, total_memtable_space / (1 + flush_writers + max_queue_length)),
>> which worsens write-amplification from compaction.
>>
>> This is because the memory management for memtables in 2.0 was really
>> terrible, and this queue length was used to try to ensure the space
>> allocated was not exceeded.
>>
>> Compounding this, when clearing the commit log 2.0 will flush all
>> memtables with data in them regardless of whether it is useful to do so, meaning
>> having more tables (that are actively written to) than your max queue
>> length will necessarily cause stalls every time you run out of commit log
>> space.
>>
>> In 2.1, none of these concerns apply.
>>
>>
>> On 24 August 2016 at 23:40, Vasileios Vlachos > > wrote:
>>
>>> Hello,
>>>
>>>
>>>
>>>
>>>
>>> We have an 8-node cluster spread out in 2 DCs, 4 nodes in each one. We
>>> run C* 2.0.17 on Ubuntu 12.04 at the moment.
>>>
>>>
>>>
>>>
>>> Our C# application often logs errors, which correlate with dropped
>>> messages (usually counter mutations) in the Cassandra logs. We think that
>>> if a specific mutation stays in the queue for more than 5 seconds,
>>> Cassandra drops it. This is also suggested by these lines in system.log:
>>>
>>> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) MUTATION messages were dropped in last 5000 ms: 317 for internal timeout and 0 for cross node timeout
>>> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) COUNTER_MUTATION messages were dropped in last 5000 ms: 6 for internal timeout and 0 for cross node timeout
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 55) Pool Name                 Active   Pending      Completed   Blocked  All Time Blocked
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) ReadStage                      0         0      245177190         0                 0
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) RequestResponseStage           0         0     3530334509         0                 0
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReadRepairStage                0         0        1549567         0                 0
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) MutationStage                 48      1380     2540965500         0                 0
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReplicateOnWriteStage          0         0      189615571         0                 0
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) GossipStage                    0         0       20586077         0                 0
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) CacheCleanupExecutor           0         0              0         0                 0
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MigrationStage                 0         0            106         0                 0
>>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MemoryMeter                    0         0         303029         0

Re: Flush activity and dropped messages

2016-08-26 Thread Vasileios Vlachos
Hi Benedict,

This makes sense now. Thank you very much for your input.

Regards,
Vasilis

On 25 Aug 2016 10:30 am, "Benedict Elliott Smith" 
wrote:

> The simple answer is that you should upgrade from 2.0 to avoid this behaviour.
> You are correct that when the commit log gets full the memtables are
> flushed to make room.  2.0 has several interrelated problems here though:
>
> There is a maximum flush queue length property (I cannot recall its name),
> and once there are this many memtables flushing, no more writes can take
> place on the box, whatsoever.  You cannot simply increase this length,
> though, because that shrinks the maximum size of any single memtable (it
> is, iirc, total_memtable_space / (1 + flush_writers + max_queue_length)),
> which worsens write-amplification from compaction.
>
> This is because the memory management for memtables in 2.0 was really
> terrible, and this queue length was used to try to ensure the space
> allocated was not exceeded.
>
> Compounding this, when clearing the commit log 2.0 will flush all
> memtables with data in them regardless of whether it is useful to do so, meaning
> having more tables (that are actively written to) than your max queue
> length will necessarily cause stalls every time you run out of commit log
> space.
>
> In 2.1, none of these concerns apply.
>
>
> On 24 August 2016 at 23:40, Vasileios Vlachos 
> wrote:
>
>> Hello,
>>
>>
>>
>>
>>
>> We have an 8-node cluster spread out in 2 DCs, 4 nodes in each one. We
>> run C* 2.0.17 on Ubuntu 12.04 at the moment.
>>
>>
>>
>>
>> Our C# application often logs errors, which correlate with dropped
>> messages (usually counter mutations) in the Cassandra logs. We think that
>> if a specific mutation stays in the queue for more than 5 seconds,
>> Cassandra drops it. This is also suggested by these lines in system.log:
>>
>> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) MUTATION messages were dropped in last 5000 ms: 317 for internal timeout and 0 for cross node timeout
>> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) COUNTER_MUTATION messages were dropped in last 5000 ms: 6 for internal timeout and 0 for cross node timeout
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 55) Pool Name                 Active   Pending      Completed   Blocked  All Time Blocked
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) ReadStage                      0         0      245177190         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) RequestResponseStage           0         0     3530334509         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReadRepairStage                0         0        1549567         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) MutationStage                 48      1380     2540965500         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReplicateOnWriteStage          0         0      189615571         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) GossipStage                    0         0       20586077         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) CacheCleanupExecutor           0         0              0         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MigrationStage                 0         0            106         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MemoryMeter                    0         0         303029         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) ValidationExecutor             0         0              0         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) FlushWriter                    1         5         322604         1              8227
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) InternalResponseStage          0         0             35         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,459 StatusLogger.java (line 70) AntiEntropyStage               0         0              0         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,459 StatusLogger.java (line 70) MemtablePostFlusher            1         5         424104         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,459 StatusLogger.java (line 70) MiscStage

Re: Flush activity and dropped messages

2016-08-26 Thread Vasileios Vlachos
Hi Patrick and thanks for your reply,

We are monitoring disk usage (among other things) and we don't seem to be
running out of space at the moment. We have separate partitions/disks for the
commit log and data. Which one do you suspect, and why?

Regards,
Vasilis

On 25 Aug 2016 4:01 pm, "Patrick McFadin"  wrote:

This looks like you've run out of disk. What are your hardware specs?

Patrick


On Thursday, August 25, 2016, Benedict Elliott Smith 
wrote:

> The simple answer is that you should upgrade from 2.0 to avoid this behaviour.
> You are correct that when the commit log gets full the memtables are
> flushed to make room.  2.0 has several interrelated problems here though:
>
> There is a maximum flush queue length property (I cannot recall its name),
> and once there are this many memtables flushing, no more writes can take
> place on the box, whatsoever.  You cannot simply increase this length,
> though, because that shrinks the maximum size of any single memtable (it
> is, iirc, total_memtable_space / (1 + flush_writers + max_queue_length)),
> which worsens write-amplification from compaction.
>
> This is because the memory management for memtables in 2.0 was really
> terrible, and this queue length was used to try to ensure the space
> allocated was not exceeded.
>
> Compounding this, when clearing the commit log 2.0 will flush all
> memtables with data in them regardless of whether it is useful to do so, meaning
> having more tables (that are actively written to) than your max queue
> length will necessarily cause stalls every time you run out of commit log
> space.
>
> In 2.1, none of these concerns apply.
>
>
> On 24 August 2016 at 23:40, Vasileios Vlachos 
> wrote:
>
>> Hello,
>>
>>
>>
>>
>>
>> We have an 8-node cluster spread out in 2 DCs, 4 nodes in each one. We
>> run C* 2.0.17 on Ubuntu 12.04 at the moment.
>>
>>
>>
>>
>> Our C# application often logs errors, which correlate with dropped
>> messages (usually counter mutations) in the Cassandra logs. We think that
>> if a specific mutation stays in the queue for more than 5 seconds,
>> Cassandra drops it. This is also suggested by these lines in system.log:
>>
>> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) MUTATION messages were dropped in last 5000 ms: 317 for internal timeout and 0 for cross node timeout
>> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) COUNTER_MUTATION messages were dropped in last 5000 ms: 6 for internal timeout and 0 for cross node timeout
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 55) Pool Name                 Active   Pending      Completed   Blocked  All Time Blocked
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) ReadStage                      0         0      245177190         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) RequestResponseStage           0         0     3530334509         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReadRepairStage                0         0        1549567         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) MutationStage                 48      1380     2540965500         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReplicateOnWriteStage          0         0      189615571         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) GossipStage                    0         0       20586077         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) CacheCleanupExecutor           0         0              0         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MigrationStage                 0         0            106         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MemoryMeter                    0         0         303029         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) ValidationExecutor             0         0              0         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) FlushWriter                    1         5         322604         1              8227
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) InternalResponseStage          0         0             35         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,459 StatusLogger.java (line 70) AntiEntropyStage               0         0

Re: Flush activity and dropped messages

2016-08-25 Thread Patrick McFadin
This looks like you've run out of disk. What are your hardware specs?

Patrick

On Thursday, August 25, 2016, Benedict Elliott Smith 
wrote:

> The simple answer is that you should upgrade from 2.0 to avoid this behaviour.
> You are correct that when the commit log gets full the memtables are
> flushed to make room.  2.0 has several interrelated problems here though:
>
> There is a maximum flush queue length property (I cannot recall its name),
> and once there are this many memtables flushing, no more writes can take
> place on the box, whatsoever.  You cannot simply increase this length,
> though, because that shrinks the maximum size of any single memtable (it
> is, iirc, total_memtable_space / (1 + flush_writers + max_queue_length)),
> which worsens write-amplification from compaction.
>
> This is because the memory management for memtables in 2.0 was really
> terrible, and this queue length was used to try to ensure the space
> allocated was not exceeded.
>
> Compounding this, when clearing the commit log 2.0 will flush all
> memtables with data in them regardless of whether it is useful to do so, meaning
> having more tables (that are actively written to) than your max queue
> length will necessarily cause stalls every time you run out of commit log
> space.
>
> In 2.1, none of these concerns apply.
>
>
> On 24 August 2016 at 23:40, Vasileios Vlachos  > wrote:
>
>> Hello,
>>
>>
>>
>>
>>
>> We have an 8-node cluster spread out in 2 DCs, 4 nodes in each one. We
>> run C* 2.0.17 on Ubuntu 12.04 at the moment.
>>
>>
>>
>>
>> Our C# application often logs errors, which correlate with dropped
>> messages (usually counter mutations) in the Cassandra logs. We think that
>> if a specific mutation stays in the queue for more than 5 seconds,
>> Cassandra drops it. This is also suggested by these lines in system.log:
>>
>> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) MUTATION messages were dropped in last 5000 ms: 317 for internal timeout and 0 for cross node timeout
>> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) COUNTER_MUTATION messages were dropped in last 5000 ms: 6 for internal timeout and 0 for cross node timeout
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 55) Pool Name                 Active   Pending      Completed   Blocked  All Time Blocked
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) ReadStage                      0         0      245177190         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) RequestResponseStage           0         0     3530334509         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReadRepairStage                0         0        1549567         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) MutationStage                 48      1380     2540965500         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReplicateOnWriteStage          0         0      189615571         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) GossipStage                    0         0       20586077         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) CacheCleanupExecutor           0         0              0         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MigrationStage                 0         0            106         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MemoryMeter                    0         0         303029         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) ValidationExecutor             0         0              0         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) FlushWriter                    1         5         322604         1              8227
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) InternalResponseStage          0         0             35         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,459 StatusLogger.java (line 70) AntiEntropyStage               0         0              0         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,459 StatusLogger.java (line 70) MemtablePostFlusher            1         5         424104         0                 0
>>  INFO [ScheduledTasks:1] 2016-08-23

Re: Flush activity and dropped messages

2016-08-25 Thread Benedict Elliott Smith
The simple answer is that you should upgrade from 2.0 to avoid this behaviour.
You are correct that when the commit log gets full the memtables are
flushed to make room.  2.0 has several interrelated problems here though:

There is a maximum flush queue length property (I cannot recall its name),
and once there are this many memtables flushing, no more writes can take
place on the box, whatsoever.  You cannot simply increase this length,
though, because that shrinks the maximum size of any single memtable (it
is, iirc, total_memtable_space / (1 + flush_writers + max_queue_length)),
which worsens write-amplification from compaction.
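
To make the trade-off concrete, here is a rough sketch of that formula in
Python. The setting names in the comments (memtable_total_space_in_mb,
memtable_flush_writers, memtable_flush_queue_size) are only my guess at the
cassandra.yaml knobs involved, and the numbers are purely illustrative:

def max_memtable_size_mb(total_space_mb, flush_writers, queue_length):
    # Per-memtable ceiling in 2.0, per the formula quoted above:
    # total_memtable_space / (1 + flush_writers + max_queue_length)
    return total_space_mb / (1.0 + flush_writers + queue_length)

total = 2048  # pretend memtable_total_space_in_mb
for queue_length in (4, 8, 16):
    size = max_memtable_size_mb(total, flush_writers=1, queue_length=queue_length)
    print("queue length %2d -> each memtable capped at roughly %d MB" % (queue_length, size))
# Raising the queue length shrinks every memtable, which means smaller SSTables
# and more rewriting during compaction (write amplification).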

This is because the memory management for memtables in 2.0 was really
terrible, and this queue length was used to try to ensure the space
allocated was not exceeded.

Compounding this, when clearing the commit log 2.0 will flush all memtables
with data in them regardless of whether it is useful to do so, meaning having more
tables (that are actively written to) than your max queue length will
necessarily cause stalls every time you run out of commit log space.
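
Read literally, that condition amounts to something like the following sketch
(an illustration of the paragraph above, not Cassandra code):

def stalls_when_commitlog_fills(actively_written_tables, queue_length):
    # In 2.0, freeing commit log segments flushes every memtable holding data;
    # if there are more such memtables than the flush queue can hold, writes block.
    return actively_written_tables > queue_length

print(stalls_when_commitlog_fills(actively_written_tables=12, queue_length=4))  # True: expect stalls
print(stalls_when_commitlog_fills(actively_written_tables=3, queue_length=4))   # False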

In 2.1, none of these concerns apply.


On 24 August 2016 at 23:40, Vasileios Vlachos 
wrote:

> Hello,
>
>
>
>
>
> We have an 8-node cluster spread out in 2 DCs, 4 nodes in each one. We run
> C* 2.0.17 on Ubuntu 12.04 at the moment.
>
>
>
>
> Our C# application often logs errors, which correlate with dropped
> messages (usually counter mutations) in the Cassandra logs. We think that if
> a specific mutation stays in the queue for more than 5 seconds, Cassandra
> drops it. This is also suggested by these lines in system.log:
>
> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) MUTATION messages were dropped in last 5000 ms: 317 for internal timeout and 0 for cross node timeout
> ERROR [ScheduledTasks:1] 2016-08-23 13:29:51,454 MessagingService.java (line 912) COUNTER_MUTATION messages were dropped in last 5000 ms: 6 for internal timeout and 0 for cross node timeout
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 55) Pool Name                 Active   Pending      Completed   Blocked  All Time Blocked
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) ReadStage                      0         0      245177190         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,455 StatusLogger.java (line 70) RequestResponseStage           0         0     3530334509         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReadRepairStage                0         0        1549567         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) MutationStage                 48      1380     2540965500         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,456 StatusLogger.java (line 70) ReplicateOnWriteStage          0         0      189615571         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) GossipStage                    0         0       20586077         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) CacheCleanupExecutor           0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MigrationStage                 0         0            106         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,457 StatusLogger.java (line 70) MemoryMeter                    0         0         303029         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) ValidationExecutor             0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) FlushWriter                    1         5         322604         1              8227
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,458 StatusLogger.java (line 70) InternalResponseStage          0         0             35         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,459 StatusLogger.java (line 70) AntiEntropyStage               0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,459 StatusLogger.java (line 70) MemtablePostFlusher            1         5         424104         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,459 StatusLogger.java (line 70) MiscStage                      0         0              0         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23 13:29:51,460 StatusLogger.java (line 70) PendingRangeCalculator         0         0             37         0                 0
>  INFO [ScheduledTasks:1] 2016-08-23