[kudu-CR] maintenance manager: log the reason for scheduling each operation
Todd Lipcon has submitted this change and it was merged. ( http://gerrit.cloudera.org:8080/9172 )

Change subject: maintenance_manager: log the reason for scheduling each operation
..

maintenance_manager: log the reason for scheduling each operation

This makes it easier to troubleshoot when the maintenance manager appears to be scheduling the "wrong" operation.

Example output from a long run of full_stack_insert_scan-test:

...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.30% used, can flush 641875735 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 386079935 bytes of WAL
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 29857788 bytes on disk
...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.01% used, can flush 637714199 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 394543122 bytes of WAL
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.281697
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.280992
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.280256
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.060532
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.060298
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 56855045 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.054961
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 7202893 bytes on disk
...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.25% used, can flush 633552663 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 394836440 bytes of WAL
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.192819
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.184820
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.184674
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 70881575 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.127476
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 18119656 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.127334
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.111677
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 31714242 bytes on disk
...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.06% used, can flush 624189207 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 377818508 bytes of WAL
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 29069301 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.144540
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.138843
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 36867458 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.138827
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.122173
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 31717417 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.121637
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.121104
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.088980
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.087296
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 54371475 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.055906
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 9063001 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.031908
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 6840843 bytes on disk
...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.08% used, can flush 614825751 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 377913596 bytes of WAL
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 28612520 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.113092
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.110844
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 368
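The reason strings in the log excerpt above follow a fixed priority order: memory pressure first, then WAL bytes that can be GCed, then on-disk bytes that can be freed, then the perf-improvement score. As a rough illustration only, not Kudu's actual implementation (the real logic lives in src/kudu/util/maintenance_manager.cc), a hypothetical `OpStats` struct and `ScheduleReason` helper could produce messages of the same shape:

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Hypothetical per-op stats; the real MaintenanceOp interface differs.
struct OpStats {
  bool under_memory_pressure = false;
  double memory_used_pct = 0.0;  // process memory usage, e.g. 60.3
  int64_t ram_anchored = 0;      // bytes flushable under memory pressure
  int64_t logs_retained = 0;     // WAL bytes freed by running the op
  int64_t data_retained = 0;     // on-disk bytes freed by running the op
  double perf_score = 0.0;       // perf-improvement score
};

// Pick a reason string in the same priority order the thread describes:
// memory pressure, then WAL GC, then on-disk data GC, then perf score.
std::string ScheduleReason(const OpStats& s) {
  std::ostringstream msg;
  if (s.under_memory_pressure && s.ram_anchored > 0) {
    msg << "under memory pressure (" << s.memory_used_pct
        << "% used, can flush " << s.ram_anchored << " bytes)";
  } else if (s.logs_retained > 0) {
    msg << "free " << s.logs_retained << " bytes of WAL";
  } else if (s.data_retained > 0) {
    msg << s.data_retained << " bytes on disk";
  } else {
    msg << "perf score=" << s.perf_score;
  }
  return msg.str();
}
```

For example, stats with `under_memory_pressure = true` and a nonzero `ram_anchored` yield the "under memory pressure (…% used, can flush … bytes)" form seen in the excerpt, while an op with only a nonzero `perf_score` falls through to the concise "perf score=…" form that the reviewers settled on.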
Will Berkeley has posted comments on this change. ( http://gerrit.cloudera.org:8080/9172 )

Change subject: maintenance_manager: log the reason for scheduling each operation
..

Patch Set 2: Code-Review+2

--
To view, visit http://gerrit.cloudera.org:8080/9172
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: I4dcdb863a7a0b0fc2a72757801d5c057fa725c34
Gerrit-Change-Number: 9172
Gerrit-PatchSet: 2
Gerrit-Owner: Todd Lipcon
Gerrit-Reviewer: Jean-Daniel Cryans
Gerrit-Reviewer: Kudu Jenkins
Gerrit-Reviewer: Todd Lipcon
Gerrit-Reviewer: Will Berkeley
Gerrit-Comment-Date: Mon, 05 Mar 2018 18:08:40 +
Gerrit-HasComments: No
Todd Lipcon has posted comments on this change. ( http://gerrit.cloudera.org:8080/9172 )

Change subject: maintenance_manager: log the reason for scheduling each operation
..

Patch Set 2:

Any thoughts on this? Would like to get it in to 1.7

--
To view, visit http://gerrit.cloudera.org:8080/9172
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: I4dcdb863a7a0b0fc2a72757801d5c057fa725c34
Gerrit-Change-Number: 9172
Gerrit-PatchSet: 2
Gerrit-Owner: Todd Lipcon
Gerrit-Reviewer: Jean-Daniel Cryans
Gerrit-Reviewer: Kudu Jenkins
Gerrit-Reviewer: Todd Lipcon
Gerrit-Reviewer: Will Berkeley
Gerrit-Comment-Date: Sun, 04 Mar 2018 18:18:07 +
Gerrit-HasComments: No
Todd Lipcon has posted comments on this change. ( http://gerrit.cloudera.org:8080/9172 )

Change subject: maintenance_manager: log the reason for scheduling each operation
..

Patch Set 1:

OK, I made things a bit more concise, let me know how this looks

--
To view, visit http://gerrit.cloudera.org:8080/9172
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: I4dcdb863a7a0b0fc2a72757801d5c057fa725c34
Gerrit-Change-Number: 9172
Gerrit-PatchSet: 1
Gerrit-Owner: Todd Lipcon
Gerrit-Reviewer: Jean-Daniel Cryans
Gerrit-Reviewer: Kudu Jenkins
Gerrit-Reviewer: Todd Lipcon
Gerrit-Reviewer: Will Berkeley
Gerrit-Comment-Date: Sat, 24 Feb 2018 01:47:06 +
Gerrit-HasComments: No
Hello Will Berkeley, Jean-Daniel Cryans, Kudu Jenkins,

I'd like you to reexamine a change. Please visit http://gerrit.cloudera.org:8080/9172 to look at the new patch set (#2).

Change subject: maintenance_manager: log the reason for scheduling each operation
..

maintenance_manager: log the reason for scheduling each operation

This makes it easier to troubleshoot when the maintenance manager appears to be scheduling the "wrong" operation.

Example output from a long run of full_stack_insert_scan-test:

...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.30% used, can flush 641875735 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 386079935 bytes of WAL
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 29857788 bytes on disk
...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.01% used, can flush 637714199 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 394543122 bytes of WAL
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.281697
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.280992
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.280256
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.060532
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.060298
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 56855045 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.054961
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 7202893 bytes on disk
...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.25% used, can flush 633552663 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 394836440 bytes of WAL
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.192819
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.184820
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.184674
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 70881575 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.127476
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 18119656 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.127334
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.111677
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 31714242 bytes on disk
...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.06% used, can flush 624189207 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 377818508 bytes of WAL
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 29069301 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.144540
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.138843
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 36867458 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.138827
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.122173
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 31717417 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.121637
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.121104
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.088980
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.087296
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 54371475 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.055906
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 9063001 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.031908
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 6840843 bytes on disk
...: Scheduling FlushMRSOp(93ded54b4dfb4e1586ff7fe700184f53): under memory pressure (60.08% used, can flush 614825751 bytes)
...: Scheduling LogGCOp(93ded54b4dfb4e1586ff7fe700184f53): free 377913596 bytes of WAL
...: Scheduling UndoDeltaBlockGCOp(93ded54b4dfb4e1586ff7fe700184f53): 28612520 bytes on disk
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf score=0.113092
...: Scheduling CompactRowSetsOp(93ded54b4dfb4e1586ff7fe700184f53): perf scor
Will Berkeley has posted comments on this change. ( http://gerrit.cloudera.org:8080/9172 )

Change subject: maintenance_manager: log the reason for scheduling each operation
..

Patch Set 1:

(1 comment)

http://gerrit.cloudera.org:8080/#/c/9172/1/src/kudu/util/maintenance_manager.cc
File src/kudu/util/maintenance_manager.cc:

http://gerrit.cloudera.org:8080/#/c/9172/1/src/kudu/util/maintenance_manager.cc@444
PS1, Line 444: best performance improvement
> It's not just this log that worries me but it's the one that worries me the

+1 to making the messages concise, but we should include the memory % in the memory pressure messages, since it's actually hard to know from the logs w/o seeing rejections.

--
To view, visit http://gerrit.cloudera.org:8080/9172
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: I4dcdb863a7a0b0fc2a72757801d5c057fa725c34
Gerrit-Change-Number: 9172
Gerrit-PatchSet: 1
Gerrit-Owner: Todd Lipcon
Gerrit-Reviewer: Jean-Daniel Cryans
Gerrit-Reviewer: Kudu Jenkins
Gerrit-Reviewer: Todd Lipcon
Gerrit-Reviewer: Will Berkeley
Gerrit-Comment-Date: Fri, 02 Feb 2018 21:25:22 +
Gerrit-HasComments: Yes
Jean-Daniel Cryans has posted comments on this change. ( http://gerrit.cloudera.org:8080/9172 )

Change subject: maintenance_manager: log the reason for scheduling each operation
..

Patch Set 1:

(1 comment)

http://gerrit.cloudera.org:8080/#/c/9172/1/src/kudu/util/maintenance_manager.cc
File src/kudu/util/maintenance_manager.cc:

http://gerrit.cloudera.org:8080/#/c/9172/1/src/kudu/util/maintenance_manager.cc@444
PS1, Line 444: best performance improvement
> Any suggestions what to use instead? We already use "perf improvement" on t

It's not just this log that worries me but it's the one that worries me the most. Maybe just making them less verbose:

log gc (x bytes)
memory pressure (x bytes)
data gc (x bytes)
perf improvement (score=blah)

But good point about perf improvement being already used in the UI.

--
To view, visit http://gerrit.cloudera.org:8080/9172
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: I4dcdb863a7a0b0fc2a72757801d5c057fa725c34
Gerrit-Change-Number: 9172
Gerrit-PatchSet: 1
Gerrit-Owner: Todd Lipcon
Gerrit-Reviewer: Jean-Daniel Cryans
Gerrit-Reviewer: Kudu Jenkins
Gerrit-Reviewer: Todd Lipcon
Gerrit-Reviewer: Will Berkeley
Gerrit-Comment-Date: Wed, 31 Jan 2018 23:43:49 +
Gerrit-HasComments: Yes
Todd Lipcon has posted comments on this change. ( http://gerrit.cloudera.org:8080/9172 )

Change subject: maintenance_manager: log the reason for scheduling each operation
..

Patch Set 1:

(1 comment)

http://gerrit.cloudera.org:8080/#/c/9172/1/src/kudu/util/maintenance_manager.cc
File src/kudu/util/maintenance_manager.cc:

http://gerrit.cloudera.org:8080/#/c/9172/1/src/kudu/util/maintenance_manager.cc@444
PS1, Line 444: best performance improvement
> I'm worried what random users might read into this kind of log line.

Any suggestions what to use instead? We already use "perf improvement" on the MM web UI.

--
To view, visit http://gerrit.cloudera.org:8080/9172
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: I4dcdb863a7a0b0fc2a72757801d5c057fa725c34
Gerrit-Change-Number: 9172
Gerrit-PatchSet: 1
Gerrit-Owner: Todd Lipcon
Gerrit-Reviewer: Jean-Daniel Cryans
Gerrit-Reviewer: Kudu Jenkins
Gerrit-Reviewer: Todd Lipcon
Gerrit-Reviewer: Will Berkeley
Gerrit-Comment-Date: Wed, 31 Jan 2018 23:31:51 +
Gerrit-HasComments: Yes
Jean-Daniel Cryans has posted comments on this change. ( http://gerrit.cloudera.org:8080/9172 )

Change subject: maintenance_manager: log the reason for scheduling each operation
..

Patch Set 1:

(1 comment)

http://gerrit.cloudera.org:8080/#/c/9172/1/src/kudu/util/maintenance_manager.cc
File src/kudu/util/maintenance_manager.cc:

http://gerrit.cloudera.org:8080/#/c/9172/1/src/kudu/util/maintenance_manager.cc@444
PS1, Line 444: best performance improvement

I'm worried what random users might read into this kind of log line.

Also, would it make sense to print something related to the priority order instead of having to look up the code? Maybe we'd still have to look at the code to see what the priority means. Just a thought.

--
To view, visit http://gerrit.cloudera.org:8080/9172
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: comment
Gerrit-Change-Id: I4dcdb863a7a0b0fc2a72757801d5c057fa725c34
Gerrit-Change-Number: 9172
Gerrit-PatchSet: 1
Gerrit-Owner: Todd Lipcon
Gerrit-Reviewer: Jean-Daniel Cryans
Gerrit-Reviewer: Kudu Jenkins
Gerrit-Reviewer: Will Berkeley
Gerrit-Comment-Date: Wed, 31 Jan 2018 23:23:05 +
Gerrit-HasComments: Yes
Hello Will Berkeley, Jean-Daniel Cryans,

I'd like you to do a code review. Please visit http://gerrit.cloudera.org:8080/9172 to review the following change.

Change subject: maintenance_manager: log the reason for scheduling each operation
..

maintenance_manager: log the reason for scheduling each operation

This makes it easier to troubleshoot when the maintenance manager appears to be scheduling the "wrong" operation.

Example output from a long run of full_stack_insert_scan-test:

I0131 15:00:21.154744 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling FlushMRSOp(eaa9b5237a14425e852ca97c2b4ae138): under memory pressure (60.52% used), running op which anchors most memory (642916119 bytes)
I0131 15:00:24.241927 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling LogGCOp(eaa9b5237a14425e852ca97c2b4ae138): can GC 394382000 bytes of logs with low IO cost
I0131 15:00:24.309976 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling UndoDeltaBlockGCOp(eaa9b5237a14425e852ca97c2b4ae138): can free up the most data on disk (29912621 bytes)
I0131 15:00:24.310154 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling CompactRowSetsOp(eaa9b5237a14425e852ca97c2b4ae138): best performance improvement (score=0.281733)
I0131 15:00:26.482787 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling CompactRowSetsOp(eaa9b5237a14425e852ca97c2b4ae138): best performance improvement (score=0.281354)
I0131 15:00:28.514597 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling UndoDeltaBlockGCOp(eaa9b5237a14425e852ca97c2b4ae138): can free up the most data on disk (36234163 bytes)
I0131 15:00:28.514787 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling CompactRowSetsOp(eaa9b5237a14425e852ca97c2b4ae138): best performance improvement (score=0.204581)
I0131 15:00:30.165297 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling CompactRowSetsOp(eaa9b5237a14425e852ca97c2b4ae138): best performance improvement (score=0.150718)
I0131 15:00:31.783936 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling CompactRowSetsOp(eaa9b5237a14425e852ca97c2b4ae138): best performance improvement (score=0.024950)
I0131 15:00:32.740442 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling UndoDeltaBlockGCOp(eaa9b5237a14425e852ca97c2b4ae138): can free up the most data on disk (38786747 bytes)
I0131 15:00:36.495453 13887 maintenance_manager.cc:300] P e96b0ba97e104a4294feef1163f7383f: Scheduling FlushMRSOp(eaa9b5237a14425e852ca97c2b4ae138): under memory pressure (60.45% used), running op which anchors most memory (637714199 bytes)

Change-Id: I4dcdb863a7a0b0fc2a72757801d5c057fa725c34
---
M src/kudu/util/debug/trace_logging.h
M src/kudu/util/maintenance_manager-test.cc
M src/kudu/util/maintenance_manager.cc
M src/kudu/util/maintenance_manager.h
4 files changed, 74 insertions(+), 50 deletions(-)

git pull ssh://gerrit.cloudera.org:29418/kudu refs/changes/72/9172/1

--
To view, visit http://gerrit.cloudera.org:8080/9172
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-Project: kudu
Gerrit-Branch: master
Gerrit-MessageType: newchange
Gerrit-Change-Id: I4dcdb863a7a0b0fc2a72757801d5c057fa725c34
Gerrit-Change-Number: 9172
Gerrit-PatchSet: 1
Gerrit-Owner: Todd Lipcon
Gerrit-Reviewer: Jean-Daniel Cryans
Gerrit-Reviewer: Will Berkeley