HeartSaVioR commented on a change in pull request #30366:
URL: https://github.com/apache/spark/pull/30366#discussion_r527284174
##########
File path: core/src/main/scala/org/apache/spark/deploy/security/HadoopFSDelegationTokenProvider.scala
##########
@@ -126,13 +130,28 @@ private[deploy] class HadoopFSDelegationTokenProvider
Try {
val newExpiration = token.renew(hadoopConf)
val identifier =
token.decodeIdentifier().asInstanceOf[AbstractDelegationTokenIdentifier]
-      val interval = newExpiration - identifier.getIssueDate
-      logInfo(s"Renewal interval is $interval for token ${token.getKind.toString}")
+      val tokenKind = token.getKind.toString
+      val interval = newExpiration - getIssueDate(tokenKind, identifier)
+      logInfo(s"Renewal interval is $interval for token $tokenKind")
interval
}.toOption
}
if (renewIntervals.isEmpty) None else Some(renewIntervals.min)
}
+
+  private def getIssueDate(kind: String, identifier: AbstractDelegationTokenIdentifier): Long = {
Review comment:
```
private def getIssueDate(kind: String, identifier: AbstractDelegationTokenIdentifier): Long = {
  val issueDate = identifier.getIssueDate
  if (issueDate > 0L) {
    issueDate
  } else {
    val now = System.currentTimeMillis()
    logWarning(s"Token $kind has not set up issue date properly. (provided: $issueDate) " +
      s"Using current timestamp ($now) as issue date instead. Consult token implementor to fix " +
      "the behavior.")
    now
  }
}
```
This doesn't seem to be something we would really want to write tests for. Unless
we separate out the calculation logic and craft tests against it, I don't see any
way to test this class.
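For illustration only: if we did want to separate out the calculation logic as mentioned above, one hypothetical shape (the object and method names below are not from the PR) would be to make the fallback decision a pure function that takes the current timestamp as a parameter, so it can be unit-tested without constructing a real delegation token or touching `System.currentTimeMillis()`:

```scala
// Hypothetical refactoring sketch, not the code in this PR.
object IssueDateResolver {
  // Returns the token's issue date when it is set properly (> 0L),
  // otherwise falls back to the caller-supplied current timestamp.
  def resolveIssueDate(issueDate: Long, now: Long): Long =
    if (issueDate > 0L) issueDate else now
}
```

A test could then assert `resolveIssueDate(0L, now) == now` and `resolveIssueDate(issueDate, now) == issueDate` for positive issue dates, leaving the logging side effect in the provider itself.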