[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-19 Thread jiaqiyang (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725603#comment-16725603 ]

jiaqiyang commented on KUDU-2638:
---------------------------------

Yeah! Thank you very much!

Now I will build a new Kudu cluster whose tservers each have 12 SSD disks.

> kudu cluster restart very long time to reused
> ---------------------------------------------
>
> Key: KUDU-2638
> URL: https://issues.apache.org/jira/browse/KUDU-2638
> Project: Kudu
> Issue Type: Improvement
> Reporter: jiaqiyang
> Priority: Major
> Fix For: n/a
> Attachments: kudu16.tc.tablet.png, tserverLog.tar.gz
>
> When I restart my Kudu cluster, all tablets are unavailable.
> Running kudu cluster ksck shows:
> Table Summary
> Name | Status      | Total Tablets | Healthy | Under-replicated | Unavailable
> -----+-------------+---------------+---------+------------------+------------
> t1   | HEALTHY     | 1             | 1       | 0                | 0
> t2   | UNAVAILABLE | 5             | 0       | 1                | 4
> t3   | UNAVAILABLE | 6             | 2       | 0                | 4
> t3   | UNAVAILABLE | 3             | 0       | 0                | 3



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-19 Thread Adar Dembo (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725588#comment-16725588 ]

Adar Dembo commented on KUDU-2638:
----------------------------------

bq. 1. Is the time a tablet's lifecycle spends going from INITIALIZED to RUNNING affected by major compaction?

No. A tablet may only be major delta compacted (or minor delta compacted, or 
flushed, or any other MM operation) after it has finished bootstrapping and is 
in the RUNNING state. So, on a per-tablet basis the MM operations are not 
slowing down tablet initialization (though certainly an MM operation on a 
RUNNING tablet can cause an uninitialized tablet to bootstrap more slowly).

bq. 2. Manually triggering major compaction can reduce small blocks, but compaction/flush ops are managed by the MaintenanceManager.

There's no way to manually trigger any MM operation. If you upgrade to Kudu 
1.8, you can take advantage of KUDU-2324 which will let you disable entire 
categories of MM operations. This enabling/disabling can happen at runtime 
(i.e. via the {{kudu tserver set_flag}} CLI utility), so you can start a 
tserver with e.g. all operations disabled, then when your tablets have all 
bootstrapped, you can re-enable them. Note that this is considered unsafe and 
isn't guaranteed not to do something weird. For example, bootstrapping a tablet 
means replaying all of the operations in its write-ahead log, which can result 
in large MemRowSets and DeltaMemStores, which could put the server under memory 
pressure.
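For illustration, here is a sketch of that workflow. The tserver address is hypothetical, and the per-category enable flags named below are the ones I believe KUDU-2324 added in 1.8; verify the exact names on your build with {{kudu tserver get_flags}}:

```shell
# Sketch only: tserver-01:8050 is a hypothetical tserver RPC address, and the
# enable_* flags are the per-category maintenance-op flags believed to have
# been introduced by KUDU-2324 in Kudu 1.8. --force is needed because these
# flags are not tagged as safe to change at runtime.

# Disable compaction-related MM ops while tablets bootstrap:
kudu tserver set_flag tserver-01:8050 enable_rowset_compaction false --force
kudu tserver set_flag tserver-01:8050 enable_major_delta_compaction false --force
kudu tserver set_flag tserver-01:8050 enable_minor_delta_compaction false --force

# ...once ksck reports the tablets healthy, re-enable them:
kudu tserver set_flag tserver-01:8050 enable_rowset_compaction true --force
kudu tserver set_flag tserver-01:8050 enable_major_delta_compaction true --force
kudu tserver set_flag tserver-01:8050 enable_minor_delta_compaction true --force
```

As noted above, leaving these disabled for long is unsafe; they exist so you can defer heavy ops until after bootstrap, not skip them permanently.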




[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-19 Thread jiaqiyang (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16725550#comment-16725550 ]

jiaqiyang commented on KUDU-2638:
---------------------------------

Yes, I think so: there are many UPDATEs in my case.

MySQL binlogs are synchronized to Kudu in real time, so there are many UPDATE events.

We want to use Kudu as a real-time OLAP engine.


[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-19 Thread jiaqiyang (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724859#comment-16724859 ]

jiaqiyang commented on KUDU-2638:
---------------------------------

In this code:
{code:java}
MaintenanceOp* MaintenanceManager::FindBestOp() {
  TRACE_EVENT0("maintenance", "MaintenanceManager::FindBestOp");

  size_t free_threads = num_threads_ - running_ops_;
  if (free_threads == 0) {
    VLOG_AND_TRACE("maintenance", 1) << LogPrefix()
        << "There are no free threads, so we can't run anything.";
    return nullptr;
  }

  int64_t low_io_most_logs_retained_bytes = 0;
  MaintenanceOp* low_io_most_logs_retained_bytes_op = nullptr;

  uint64_t most_mem_anchored = 0;
  MaintenanceOp* most_mem_anchored_op = nullptr;

  int64_t most_logs_retained_bytes = 0;
  int64_t most_logs_retained_bytes_ram_anchored = 0;
  MaintenanceOp* most_logs_retained_bytes_op = nullptr;

  int64_t most_data_retained_bytes = 0;
  MaintenanceOp* most_data_retained_bytes_op = nullptr;

  double best_perf_improvement = 0;
  MaintenanceOp* best_perf_improvement_op = nullptr;
  for (OpMapTy::value_type& val : ops_) {
    MaintenanceOp* op(val.first);
    MaintenanceOpStats& stats(val.second);
    VLOG_WITH_PREFIX(3) << "Considering MM op " << op->name();
    // Update op stats.
    stats.Clear();
    op->UpdateStats(&stats);
    if (op->cancelled() || !stats.valid() || !stats.runnable()) {
      continue;
    }
    if (stats.logs_retained_bytes() > low_io_most_logs_retained_bytes &&
        op->io_usage() == MaintenanceOp::LOW_IO_USAGE) {
      low_io_most_logs_retained_bytes_op = op;
      low_io_most_logs_retained_bytes = stats.logs_retained_bytes();
      VLOG_AND_TRACE("maintenance", 2) << LogPrefix() << "Op " << op->name() << " can free "
                                       << stats.logs_retained_bytes() << " bytes of logs";
    }

    if (stats.ram_anchored() > most_mem_anchored) {
      most_mem_anchored_op = op;
      most_mem_anchored = stats.ram_anchored();
    }
    // We prioritize ops that can free more logs, but when it's the same we pick the one that
    // also frees up the most memory.
    if (stats.logs_retained_bytes() > 0 &&
        (stats.logs_retained_bytes() > most_logs_retained_bytes ||
         (stats.logs_retained_bytes() == most_logs_retained_bytes &&
          stats.ram_anchored() > most_logs_retained_bytes_ram_anchored))) {
      most_logs_retained_bytes_op = op;
      most_logs_retained_bytes = stats.logs_retained_bytes();
      most_logs_retained_bytes_ram_anchored = stats.ram_anchored();
    }

    if (stats.data_retained_bytes() > most_data_retained_bytes) {
      most_data_retained_bytes_op = op;
      most_data_retained_bytes = stats.data_retained_bytes();
      VLOG_AND_TRACE("maintenance", 2) << LogPrefix() << "Op " << op->name() << " can free "
                                       << stats.data_retained_bytes() << " bytes of data";
    }

    if ((!best_perf_improvement_op) ||
        (stats.perf_improvement() > best_perf_improvement)) {
      best_perf_improvement_op = op;
      best_perf_improvement = stats.perf_improvement();
    }
  }

  // Look at ops that we can run quickly that free up log retention.
  if (low_io_most_logs_retained_bytes_op) {
    if (low_io_most_logs_retained_bytes > 0) {
      VLOG_AND_TRACE("maintenance", 1) << LogPrefix()
          << "Performing " << low_io_most_logs_retained_bytes_op->name() << ", "
          << "because it can free up more logs "
          << "at " << low_io_most_logs_retained_bytes
          << " bytes with a low IO cost";
      return low_io_most_logs_retained_bytes_op;
    }
  }

  // Look at free memory. If it is dangerously low, we must select something
  // that frees memory-- the op with the most anchored memory.
  double capacity_pct;
  if (memory_pressure_func_(&capacity_pct)) {
    if (!most_mem_anchored_op) {
      std::string msg = StringPrintf("we have exceeded our soft memory limit "
          "(current capacity is %.2f%%).  However, there are no ops currently "
          "runnable which would free memory.", capacity_pct);
      LOG_WITH_PREFIX(INFO) << msg;
      return nullptr;
    }
    VLOG_AND_TRACE("maintenance", 1) << LogPrefix() << "We have exceeded our soft memory limit "
        << "(current capacity is " << capacity_pct << "%).  Running the op "
        << "which anchors the most memory: " << most_mem_anchored_op->name();
    return most_mem_anchored_op;
  }

  if (most_logs_retained_bytes_op &&
      most_logs_retained_bytes / 1024 / 1024 >= FLAGS_log_target_replay_size_mb) {
    VLOG_AND_TRACE("maintenance", 1) << LogPrefix()
        << "Performing " << most_logs_retained_bytes_op->name() << ", "
        << "because it can free up more logs (" << most_logs_retained_bytes
        << " bytes)";
    return most_logs_retained_bytes_op;
  }

  // Look at ...
{code}

[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-18 Thread jiaqiyang (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724766#comment-16724766 ]

jiaqiyang commented on KUDU-2638:
---------------------------------

Now I'm analyzing what blocks the tablet lifecycle.

I think the problem is in the MaintenanceManager module.



[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-18 Thread jiaqiyang (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724757#comment-16724757 ]

jiaqiyang commented on KUDU-2638:
---------------------------------

This is the log from one tserver of the Kudu cluster:

[^tserverLog.tar.gz]

The cluster has 19 tservers and 3 masters in total; every server has 12 SATA disks, with 200+ tablets on one tserver.

There is one question:

Why does the cluster take such a long time after a restart before tables become available? In the log I see that bootstrap finishes very quickly, but a very long time is spent on major compaction. How can we stop the compaction and then trigger compaction manually during idle time, like HBase compaction?



[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-13 Thread jiaqiyang (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720962#comment-16720962 ]

jiaqiyang commented on KUDU-2638:
---------------------------------

Yes; first, thank you very much.

I know that the log I provided is not enough.

Thank you for your attention!

I will provide the full log for the tserver.

I am very interested in Kudu!



[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-12 Thread jiaqiyang (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719839#comment-16719839 ]

jiaqiyang commented on KUDU-2638:
---------------------------------

From the source code I see this comment on [MaintenanceManager::FindBestOp()]:
// - If there's an Op that we can run quickly that frees log retention, we run it.
// - If we've hit the overall process memory limit (note: this includes memory that the Ops cannot
//   free), we run the Op with the highest RAM usage.
// - If there are Ops that are retaining logs past our target replay size, we run the one that has
//   the highest retention (and if many qualify, then we run the one that also frees up the
//   most RAM).
// - Finally, if there's nothing else that we really need to do, we run the Op that will improve
//   performance the most.

I think the op is chosen by the last rule: "Finally, if there's nothing else that we really need to do, we run the Op that will improve performance the most."

If this is true, then restarting the cluster when there are many deleted data files will take a very long time before the tables become available.
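The selection order above can be sketched as a simplified model (the names and structure here are illustrative only, not Kudu's actual C++ implementation):

```python
# Simplified model of MaintenanceManager::FindBestOp()'s priority order.
# All names are illustrative; the real logic lives in maintenance_manager.cc.
from dataclasses import dataclass

@dataclass
class OpStats:
    name: str
    low_io: bool              # is this a LOW_IO_USAGE op?
    logs_retained_bytes: int  # WAL bytes this op would allow to be GC'd
    ram_anchored: int         # memory this op would free
    perf_improvement: float   # estimated performance gain

def find_best_op(ops, under_memory_pressure, log_target_replay_bytes):
    # Rule 1: a low-IO op that frees any log retention wins outright.
    low_io = [o for o in ops if o.low_io and o.logs_retained_bytes > 0]
    if low_io:
        return max(low_io, key=lambda o: o.logs_retained_bytes)
    # Rule 2: under memory pressure, run the op anchoring the most memory.
    if under_memory_pressure:
        candidates = [o for o in ops if o.ram_anchored > 0]
        return max(candidates, key=lambda o: o.ram_anchored) if candidates else None
    # Rule 3: ops retaining logs past the replay target, highest retention
    # first, breaking ties by RAM freed.
    past = [o for o in ops if o.logs_retained_bytes >= log_target_replay_bytes]
    if past:
        return max(past, key=lambda o: (o.logs_retained_bytes, o.ram_anchored))
    # Rule 4: otherwise, the op with the best estimated perf improvement.
    return max(ops, key=lambda o: o.perf_improvement, default=None)
```

Under rule 4, right after a restart when nothing urgent is pending, an op with a large estimated perf improvement (such as a major delta compaction) gets picked, which matches the behavior described in this issue.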



[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-12 Thread jiaqiyang (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719818#comment-16719818 ]

jiaqiyang commented on KUDU-2638:
---------------------------------

Like this: for the log, I chose one tablet, 5aae5dc9e6f4468aaf00c060152d4fed, on one tserver.

From all the logs I find that it takes 7 hours for all the tablets on the tserver to become available.

If I stop the compaction, will the time for all tablets to become available be shorter?



[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-12 Thread Adar Dembo (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719812#comment-16719812 ]

Adar Dembo commented on KUDU-2638:
----------------------------------

Do you mean to say that, on bootstrap, your tablets undergo major delta 
compaction?

Can you attach a tserver log exhibiting the slow startup?




[jira] [Commented] (KUDU-2638) kudu cluster restart very long time to reused

2018-12-12 Thread jiaqiyang (JIRA)


[ https://issues.apache.org/jira/browse/KUDU-2638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719815#comment-16719815 ]

jiaqiyang commented on KUDU-2638:
---------------------------------

{code}
I1121 17:04:53.100796 165214 ts_tablet_manager.cc:909] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Loading tablet metadata
I1121 17:07:06.116400 165214 ts_tablet_manager.cc:1082] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Registered tablet (data state: TABLET_DATA_READY)
I1121 17:15:29.870625 168167 ts_tablet_manager.cc:932] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Bootstrapping tablet
I1121 17:15:29.870635 168167 tablet_bootstrap.cc:437] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Bootstrap starting.
I1121 17:16:57.754650 168167 tablet_bootstrap.cc:616] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Time spent opening tablet: real 87.881s user 0.908s sys 0.340s
I1121 17:16:59.455792 168167 log.cc:644] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Max segment size reached. Starting new segment allocation
I1121 17:16:59.455893 168167 tablet_bootstrap.cc:437] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Bootstrap replayed 1/14 log segments. Stats: ops{read=1614 overwritten=0 applied=1613 ignored=1476} inserts{seen=65 ignored=0} mutations{seen=423 ignored=0} orphaned_commits=1. Pending: 1 replicates
I1121 17:16:59.456018 168167 log.cc:571] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Rolled over to a new log segment at /data/data/kudu/tserver-new/wals/5aae5dc9e6f4468aaf00c060152d4fed/wal-2
I1121 17:17:02.018604 168167 log.cc:644] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Max segment size reached. Starting new segment allocation
I1121 17:17:02.018836 168167 log.cc:571] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Rolled over to a new log segment at /data/data/kudu/tserver-new/wals/5aae5dc9e6f4468aaf00c060152d4fed/wal-3
I1121 17:17:02.018995 168167 tablet_bootstrap.cc:437] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Bootstrap replayed 2/14 log segments. Stats: ops{read=3892 overwritten=0 applied=3891 ignored=3256} inserts{seen=718 ignored=0} mutations{seen=2327 ignored=0} orphaned_commits=1. Pending: 1 replicates
I1121 17:17:03.023664 168167 tablet_bootstrap.cc:437] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Bootstrap replaying log segment 3/14 (2.46M/7.99M this segment, stats: ops{read=4487 overwritten=0 applied=4487 ignored=3705} inserts{seen=881 ignored=0} mutations{seen=2898 ignored=0} orphaned_commits=1)
I1121 17:17:04.08 168167 log.cc:644] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Max segment size reached. Starting new segment allocation
I1121 17:17:04.889019 168167 log.cc:571] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Rolled over to a new log segment at /data/data/kudu/tserver-new/wals/5aae5dc9e6f4468aaf00c060152d4fed/wal-4
I1121 17:17:04.889173 168167 tablet_bootstrap.cc:437] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Bootstrap replayed 3/14 log segments. Stats: ops{read=5397 overwritten=0 applied=5396 ignored=4259} inserts{seen=1392 ignored=0} mutations{seen=4399 ignored=0} orphaned_commits=1. Pending: 1 replicates
I1121 17:17:07.373458 168167 log.cc:644] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Max segment size reached. Starting new segment allocation
I1121 17:17:07.373601 168167 tablet_bootstrap.cc:437] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Bootstrap replayed 4/14 log segments. Stats: ops{read=7365 overwritten=0 applied=7364 ignored=5769} inserts{seen=1779 ignored=0} mutations{seen=6078 ignored=0} orphaned_commits=1. Pending: 1 replicates
I1121 17:17:07.373723 168167 log.cc:571] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Rolled over to a new log segment at /data/data/kudu/tserver-new/wals/5aae5dc9e6f4468aaf00c060152d4fed/wal-5
I1121 17:17:08.071877 168167 tablet_bootstrap.cc:437] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Bootstrap replaying log segment 5/14 (2.36M/8.00M this segment, stats: ops{read=7680 overwritten=0 applied=7680 ignored=5940} inserts{seen=1972 ignored=0} mutations{seen=6778 ignored=0} orphaned_commits=1)
I1121 17:17:09.348376 168167 log.cc:644] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7: Max segment size reached. Starting new segment allocation
I1121 17:17:09.348531 168167 tablet_bootstrap.cc:437] T 5aae5dc9e6f4468aaf00c060152d4fed P 510015b8e3d2462e9d52965cfa306af7:
{code}