HeartSaVioR edited a comment on pull request #30366:
URL: https://github.com/apache/spark/pull/30366#issuecomment-727107472


   @tgravescs 
   
   Thanks for the comment.
   
   > I thought the way we did it was just got the earliest renewal so we 
   > didn't have to track all them individually because the common case is 
   > renewal only happens in hours time frame - like once every 24 hours but 
   > its obviously configurable.
   
   Yes, that's configurable, but I agree it only happens on an hours-long 
   time frame. The problematic token also produced an expiration time 7 days 
   out, even longer than the Hadoop delegation token's. The problem is simply 
   that Spark believes a token identifier should have a valid issue date, 
   whereas that is not "guaranteed" for every implementation. Once that 
   precondition is broken, the calculation goes completely off the rails 
   (see the sketch below). There is a safeguard, but the safeguard only makes 
   things worse.
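   
   To make the broken precondition concrete, here's a minimal, self-contained 
   sketch of the shape of the calculation (not the actual Spark code; the 
   names, the 0.75 ratio, and the 24-hour interval are all hypothetical):
   
   ```scala
   object RenewalCalcSketch {
     // next renewal = issue date + (renewal interval * ratio), mirroring the
     // shape of the calculation described above (names are hypothetical)
     def nextRenewalDate(issueDate: Long, renewalInterval: Long, ratio: Double): Long =
       issueDate + (renewalInterval * ratio).toLong
   
     def main(args: Array[String]): Unit = {
       val now = System.currentTimeMillis()
       val interval = 24L * 60 * 60 * 1000 // e.g. renewal once every 24 hours
   
       // Well-behaved identifier: issue date is "now", so the delay until the
       // next renewal is a sane, positive number of milliseconds.
       println(nextRenewalDate(now, interval, 0.75) - now)
   
       // Broken identifier: issue date was never set and stays at 0 (epoch).
       // The "next renewal" lands decades in the past and the delay is hugely
       // negative. A safeguard that clamps such a delay to zero (one plausible
       // shape of the safeguard mentioned above) would then renew immediately,
       // over and over.
       println(nextRenewalDate(0L, interval, 0.75) - now)
     }
   }
   ```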
   
   That's why I had to change the approach. Before reaching this one, I fixed 
   the issue by simply discarding an invalid next renewal date (i.e., one 
   earlier than now), but that really felt like just adding a workaround. 
   (That workaround also can't handle the new case I left in the above 
   comment.) And since we're here, I'd like to collect some ideas to make 
   this "concrete" while tolerating the known cases of the broken assumption.
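   
   For reference, the discarded workaround was roughly this shape (a sketch 
   under my own naming, not the actual patch):
   
   ```scala
   object RenewalWorkaroundSketch {
     // Treat a next renewal date that is already in the past as invalid and
     // drop it rather than handing it to the scheduler. (Hypothetical helper
     // illustrating the workaround described above, not the actual patch.)
     def sanitizedNextRenewalDate(nextRenewalDate: Long, now: Long): Option[Long] =
       if (nextRenewalDate > now) Some(nextRenewalDate) else None
   }
   ```
   
   Dropping the date avoids scheduling a renewal in the past, but it silently 
   ignores the broken identifier rather than tolerating it, which is part of 
   why it felt like a band-aid.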

