otterc commented on code in PR #38959:
URL: https://github.com/apache/spark/pull/38959#discussion_r1063849144
##########
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/BlockFetchingListener.java:
##########
@@ -46,4 +46,7 @@ default void onBlockTransferFailure(String blockId, Throwable exception) {
default String getTransferType() {
return "fetch";
}
+
+ @Override
+ default void onSaslTimeout() {}
Review Comment:
If we have this default in BlockTransferListener, then we don't need to re-define it here.
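
A minimal Java sketch of the point above, assuming a default no-op on the parent interface; the simplified method signatures and the `Object` buffer parameter are placeholders for illustration, not the real Spark API:

```java
// Parent interface: the default no-op keeps existing implementations
// source-compatible without forcing every sub-interface to override it.
interface BlockTransferListener {
  void onBlockTransferSuccess(String blockId, Object buffer);
  void onBlockTransferFailure(String blockId, Throwable exception);
  default void onSaslTimeout() {}
}

// Sub-interface: inherits onSaslTimeout() from BlockTransferListener, so
// re-declaring it here (as in the current diff) is redundant unless the
// fetch path needs different behavior.
interface BlockFetchingListener extends BlockTransferListener {
  default String getTransferType() {
    return "fetch";
  }
}
```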
##########
core/src/main/scala/org/apache/spark/shuffle/ShuffleBlockPusher.scala:
##########
@@ -251,6 +251,10 @@ private[spark] class ShuffleBlockPusher(conf: SparkConf) extends Logging {
}
handleResult(PushResult(blockId, exception))
}
+
+ override def onSaslTimeout(): Unit = {
+ TaskContext.get().taskMetrics().incSaslRequestRetries(1)
Review Comment:
In our internal fork, ShuffleBlockPusher is created with the `writeMetrics`.
We could change the constructor to take `taskMetrics` instead; the concern,
however, is that the push happens outside the task thread, so it is unclear
whether the metric would be recorded correctly. One way to move forward is to
have this metric count SASL retries only when shuffle data is fetched, not
when it is pushed.
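
A minimal Java sketch of that direction, assuming the retry counting stays on the fetch path where the callback runs on the task thread; `incSaslRequestRetries` is the task-metric setter this PR introduces, and the listener class here is illustrative rather than the actual Spark fetch listener:

```java
import org.apache.spark.TaskContext;

// Illustrative listener: count SASL retries only when a valid TaskContext is
// available, i.e. on the fetch path running inside the task thread.
class SaslRetryCountingFetchListener {
  void onSaslTimeout() {
    TaskContext tc = TaskContext.get();
    // On the push path this callback may fire off the task thread, where
    // TaskContext.get() returns null; skip the metric update in that case.
    if (tc != null) {
      tc.taskMetrics().incSaslRequestRetries(1);
    }
  }
}
```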
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]