[ https://issues.apache.org/jira/browse/IMPALA-9128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16972944#comment-16972944 ]

ASF subversion and git services commented on IMPALA-9128:
---------------------------------------------------------

Commit d06943c27563cf69ce3a5f739b5f71aba0989ddc in impala's branch refs/heads/master from Tim Armstrong
[ https://gitbox.apache.org/repos/asf?p=impala.git;h=d06943c ]

IMPALA-9128: part 1: log on slow data stream RPCs

Allows modifying the threshold for KRPC's server-side slow
RPC logging (which is enabled for all KRPCs).

Adds additional logging for the data stream RPCs TransmitData
and EndDataStream, and for slow waits that delay the query.

Adds statistics on RPC time to provide clues about whether
data stream RPCs are slow.
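The mechanism described above (a configurable threshold plus min/avg/max summary counters) can be sketched roughly as follows. This is an illustrative standalone example, not Impala's actual code; the names SlowRpcTracker and the logging format are invented for this sketch:

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <string>

// Illustrative sketch: track per-RPC wall times, log any call slower
// than a configurable threshold, and keep min/avg/max summary stats
// similar to the RpcNetworkTime / RpcRecvrTime counters shown below.
class SlowRpcTracker {
 public:
  explicit SlowRpcTracker(int64_t threshold_ms) : threshold_ms_(threshold_ms) {}

  // Record one RPC's elapsed time; emit a log line if it was slow.
  void Record(const std::string& rpc_name, int64_t elapsed_ms) {
    ++num_samples_;
    total_ms_ += elapsed_ms;
    min_ms_ = std::min(min_ms_, elapsed_ms);
    max_ms_ = std::max(max_ms_, elapsed_ms);
    if (elapsed_ms >= threshold_ms_) {
      std::cerr << "Slow " << rpc_name << " RPC: took " << elapsed_ms << "ms\n";
    }
  }

  int64_t num_samples() const { return num_samples_; }
  int64_t avg_ms() const {
    return num_samples_ == 0 ? 0 : total_ms_ / num_samples_;
  }
  int64_t min_ms() const { return min_ms_; }
  int64_t max_ms() const { return max_ms_; }

 private:
  const int64_t threshold_ms_;
  int64_t num_samples_ = 0;
  int64_t total_ms_ = 0;
  int64_t min_ms_ = INT64_MAX;
  int64_t max_ms_ = 0;
};
```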

I tested this with a low threshold and delays added:
  start-impala-cluster.py \
      --impalad_args=--impala_slow_rpc_threshold_ms=1 \
      --impalad_args=--debug_actions=END_DATA_STREAM_DELAY:JITTER@[email protected]
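A rough sketch of what a JITTER-style debug action like the one above could do, assuming (this is an assumption, not taken from the patch) that the two parameters are a maximum delay in milliseconds and an injection probability:

```cpp
#include <cstdint>
#include <random>

// Illustrative only: with the given probability, return a random delay
// in [0, max_delay_ms] to inject before completing the RPC; otherwise
// return 0. The real parsing and semantics live in Impala's
// debug-action code, which this does not reproduce.
int64_t JitterDelayMs(int64_t max_delay_ms, double probability,
                      std::mt19937& rng) {
  std::uniform_real_distribution<double> coin(0.0, 1.0);
  if (coin(rng) >= probability) return 0;  // no delay this time
  std::uniform_int_distribution<int64_t> delay(0, max_delay_ms);
  return delay(rng);
}
```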

Example Profile output:
- NetworkThroughput: (Avg: 102.98 MB/sec ; Min: 5.58 MB/sec ; Max: 171.79 MB/sec ; Number of samples: 296)
- RpcNetworkTime: (Avg: 13.468ms ; Min: 91.309us ; Max: 2s395ms ; Number of samples: 299)
- RpcRecvrTime: (Avg: 13.406ms ; Min: 83.160us ; Max: 2s395ms ; Number of samples: 299)

Example log output (with log threshold of 1ms):
I1107 14:33:50.487251 24933 krpc-data-stream-sender.cc:363] ad4fa70619170ace:b58b2eba00000006] Long delay waiting for RPC to 127.0.1.1:27000 (fragment_instance_id=ad4fa70619170ace:b58b2eba00000000): took 451.036ms
I1107 14:33:51.295518 21361 rpcz_store.cc:265] Call impala.DataStreamService.EndDataStream from 127.0.0.1:43952 (request call id 82) took 1259ms. Request Metrics: {}
I1107 14:33:44.843204 21332 krpc-data-stream-sender.cc:342] Slow TransmitData RPC to 127.0.1.1:27000 (fragment_instance_id=ad4fa70619170ace:b58b2eba00000006): took 2.194ms. Receiver time: 457.902us Network time: 1.736ms
I1107 14:33:45.139068 21333 krpc-data-stream-sender.cc:342] Slow EndDataStream RPC to 127.0.1.1:27001 (fragment_instance_id=ad4fa70619170ace:b58b2eba00000004): took 61.340ms. Receiver time: 81.908us Network time: 61.259ms

Change-Id: I258ac91b9fbbdbc86d0e8091c34f511f8957c4cd
Reviewed-on: http://gerrit.cloudera.org:8080/14662
Reviewed-by: Impala Public Jenkins <[email protected]>
Tested-by: Impala Public Jenkins <[email protected]>


> Improve debugging for slow sends in KrpcDataStreamSender
> --------------------------------------------------------
>
>                 Key: IMPALA-9128
>                 URL: https://issues.apache.org/jira/browse/IMPALA-9128
>             Project: IMPALA
>          Issue Type: Bug
>          Components: Distributed Exec
>            Reporter: Tim Armstrong
>            Assignee: Tim Armstrong
>            Priority: Major
>              Labels: observability
>
> I'm trying to debug a problem that appears to be caused by a slow RPC:
> {noformat}
>     Fragment F00
>       Instance 754fc21ba4744310:d58fd04200000020 (host=xxxxxxxxx)
>         Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:1/120.48 
> MB 
>         - AverageThreadTokens: 1.00 (1.0)
>         - BloomFilterBytes: 0 B (0)
>         - InactiveTotalTime: 0ns (0)
>         - PeakMemoryUsage: 3.2 MiB (3337546)
>         - PeakReservation: 2.0 MiB (2097152)
>         - PeakUsedReservation: 0 B (0)
>         - PerHostPeakMemUsage: 6.7 MiB (6987376)
>         - RowsProduced: 7 (7)
>         - TotalNetworkReceiveTime: 0ns (0)
>         - TotalNetworkSendTime: 3.6m (215354065071)
>         - TotalStorageWaitTime: 4ms (4552708)
>         - TotalThreadsInvoluntaryContextSwitches: 2 (2)
>         - TotalThreadsTotalWallClockTime: 3.6m (215924079474)
>           - TotalThreadsSysTime: 24ms (24386000)
>           - TotalThreadsUserTime: 505ms (505714000)
>         - TotalThreadsVoluntaryContextSwitches: 3,623 (3623)
>         - TotalTime: 3.6m (215801961705)
>         Fragment Instance Lifecycle Event Timeline
>           Prepare Finished: 1ms (1812344)
>           Open Finished: 322ms (322905753)
>           First Batch Produced: 447ms (447050377)
>           First Batch Sent: 447ms (447054546)
>           ExecInternal Finished: 3.6m (215802284852)
>         Buffer pool
>           - AllocTime: 0ns (0)
>           - CumulativeAllocationBytes: 0 B (0)
>           - CumulativeAllocations: 0 (0)
>           - InactiveTotalTime: 0ns (0)
>           - PeakReservation: 0 B (0)
>           - PeakUnpinnedBytes: 0 B (0)
>           - PeakUsedReservation: 0 B (0)
>           - ReadIoBytes: 0 B (0)
>           - ReadIoOps: 0 (0)
>           - ReadIoWaitTime: 0ns (0)
>           - ReservationLimit: 0 B (0)
>           - TotalTime: 0ns (0)
>           - WriteIoBytes: 0 B (0)
>           - WriteIoOps: 0 (0)
>           - WriteIoWaitTime: 0ns (0)
>         Fragment Instance Lifecycle Timings
>           - ExecTime: 3.6m (215479380267)
>             - ExecTreeExecTime: 124ms (124299400)
>           - InactiveTotalTime: 0ns (0)
>           - OpenTime: 321ms (321088906)
>             - ExecTreeOpenTime: 572.04us (572045)
>           - PrepareTime: 1ms (1426412)
>             - ExecTreePrepareTime: 233.32us (233318)
>           - TotalTime: 0ns (0)
>         KrpcDataStreamSender (dst_id=3)
>           - EosSent: 58 (58)
>           - InactiveTotalTime: 3.6m (215354085858)
>           - PeakMemoryUsage: 464.4 KiB (475504)
>           - RowsSent: 7 (7)
>           - RpcFailure: 0 (0)
>           - RpcRetry: 0 (0)
>           - SerializeBatchTime: 99.87us (99867)
>           - TotalBytesSent: 207 B (207)
>           - TotalTime: 3.6m (215355336381)
>           - UncompressedRowBatchSize: 267 B (267)
> {noformat}
> We should add some diagnostics that let us figure out which RPCs are slow and whether there's a pattern in which host is the problem, e.g. logging whenever the RPC time exceeds a configured threshold.
> It may also be useful to include some statistics about the wait times, e.g. a histogram, so that we can see whether the slowness is an outlier or general.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
