anujphadke has posted comments on this change.

Change subject: Impala-3342: Adding thread counters to measure time spent 
during plan fragment execution
......................................................................


Patch Set 2:

(6 comments)

http://gerrit.cloudera.org:8080/#/c/4633/2//COMMIT_MSG
Commit Message:

PS2, Line 7: Impala-3342 Adding thread counters to measure time spent during 
plan
           : fragment execution
> Please fix the formatting of the msg.
Done


PS2, Line 11: meausure
> measure
Done


PS2, Line 13: hdfs/kudu scanner and in a blocking join
> Why is this worth calling out? Doesn't this measure all exec nodes?
This change replaces every instance of total_cpu_timer. The timer is replaced in 
those 2 places (the hdfs/kudu scanner and the blocking join), and the 
THREAD_COUNTERS added there get aggregated in the plan-fragment-executor.
Adding thread counters in every thread would bulk up the profile.
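For context, roughly how the per-thread measurement works -- just a sketch 
assuming the existing SCOPED_THREAD_COUNTER_MEASUREMENT helper in 
util/runtime-profile.h; the function/parameter names below are illustrative, 
not the literal diff:

  #include "util/runtime-profile.h"

  // In each thread we care about (hdfs/kudu scanner threads, the blocking join
  // build thread) a scoped measurement is opened against the fragment-level
  // ThreadCounters. On scope exit it adds that thread's wall-clock, user/sys CPU
  // time and context-switch deltas into the shared counters, which is what shows
  // up as the PlanFragmentThread* lines in the profile below.
  void ScannerThreadHelper(RuntimeProfile::ThreadCounters* fragment_thread_counters) {
    SCOPED_THREAD_COUNTER_MEASUREMENT(fragment_thread_counters);
    // ... do the scan/build work ...
  }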


Line 14: 
> What does the profile look like? Would be helpful to see what the plan frag
Fragment F01:
      Instance c64650f62b67d849:ff790c3600000002 
(host=anuj-OptiPlex-9020:22000):(Total: 31.008ms, non-child: 0.000ns, % 
non-child: 0.00%)
        Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:29/3.60 GB 
        MemoryUsage(500.000ms): 57.78 MB, 129.77 MB, 137.79 MB, 129.79 MB, 
129.79 MB, 121.79 MB, 153.78 MB, 121.79 MB, 129.79 MB, 129.79 MB, 129.79 MB, 
129.79 MB, 97.79 MB, 137.79 MB, 137.79 MB, 129.79 MB, 121.79 MB, 105.79 MB, 
105.89 MB, 121.79 MB, 129.79 MB, 121.79 MB, 137.79 MB, 113.79 MB, 113.79 MB, 
137.79 MB, 121.79 MB, 129.79 MB, 129.79 MB, 113.79 MB, 105.86 MB, 97.76 MB, 
113.76 MB, 105.76 MB, 73.72 MB
        ThreadUsage(500.000ms): 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 
6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 6, 5, 5, 5, 4
         - AverageThreadTokens: 5.86 
         - BloomFilterBytes: 0
         - PeakMemoryUsage: 169.83 MB (178079056)
         - PerHostPeakMemUsage: 1.17 GB (1256325736)
         - PlanFragmentThreadInvoluntaryContextSwitches: 4.32K (4321)
         - PlanFragmentThreadTotalWallClockTime: 1m42s
           - PlanFragmentThreadSysTime: 206.195ms
           - PlanFragmentThreadUserTime: 41s769ms
         - PlanFragmentThreadVoluntaryContextSwitches: 35.57K (35570)
         - PrepareTime: 30.293ms
         - RowsProduced: 30.00M (29999795)
         - TotalNetworkReceiveTime: 0.000ns
         - TotalNetworkSendTime: 2s396ms
         - TotalStorageWaitTime: 920.765ms
        CodeGen:(Total: 41.328ms, non-child: 41.328ms, % non-child: 100.00%)
           - CodegenTime: 518.571us
           - CompileTime: 3.161ms
           - LoadTime: 0.000ns
           - ModuleBitcodeSize: 1.90 MB (1996720)
           - NumFunctions: 9 (9)
           - NumInstructions: 113 (113)
           - OptimizationTime: 7.696ms
           - PrepareTime: 30.095ms
        DataStreamSender (dst_id=5):(Total: 16s230ms, non-child: 16s230ms, % 
non-child: 100.00%)
           - BytesSent: 428.18 MB (448979535)
           - NetworkThroughput(*): 236.47 MB/sec
           - OverallThroughput: 26.38 MB/sec


http://gerrit.cloudera.org:8080/#/c/4633/2/be/src/runtime/plan-fragment-executor.cc
File be/src/runtime/plan-fragment-executor.cc:

Line 213:       ADD_COUNTER(profile(), PER_HOST_PEAK_MEM_COUNTER, TUnit::BYTES);
> Let's create the new counter here along with the other ones so that the set
Done


http://gerrit.cloudera.org:8080/#/c/4633/2/be/src/runtime/runtime-state.h
File be/src/runtime/runtime-state.h:

Line 241:   ///Fragment thread counters
> The comment isn't too helpful. Maybe:
Done
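The declaration now reads along these lines (sketch only; the member name is 
assumed):

  /// Aggregate counters (wall-clock, user/sys CPU time, context switches) for
  /// the threads doing work on behalf of this fragment instance. Owned by the
  /// fragment's RuntimeProfile.
  RuntimeProfile::ThreadCounters* total_thread_statistics_;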


-- 
To view, visit http://gerrit.cloudera.org:8080/4633
To unsubscribe, visit http://gerrit.cloudera.org:8080/settings

Gerrit-MessageType: comment
Gerrit-Change-Id: Ifa88aa6f3371fa42d11ecc122f43c7d83623c300
Gerrit-PatchSet: 2
Gerrit-Project: Impala-ASF
Gerrit-Branch: master
Gerrit-Owner: anujphadke <apha...@cloudera.com>
Gerrit-Reviewer: Henry Robinson <he...@cloudera.com>
Gerrit-Reviewer: Tim Armstrong <tarmstr...@cloudera.com>
Gerrit-Reviewer: Yonghyun Hwang
Gerrit-Reviewer: anujphadke <apha...@cloudera.com>
Gerrit-HasComments: Yes
