[ https://issues.apache.org/jira/browse/HADOOP-18675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17704264#comment-17704264 ]

ASF GitHub Bot commented on HADOOP-18675:
-----------------------------------------

redcape opened a new pull request, #5508:
URL: https://github.com/apache/hadoop/pull/5508

   
   ### Description of PR
   When using SAS tokens with an expiration date in the format YYYY-MM-DD, a frequent error appears in the logs. The cause is that the parsing code expects an ISO_DATE_TIME value, while date-only (ISO_DATE) formats are valid for the expiration as well and should be accepted.
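
   A minimal sketch of the fallback approach this suggests (the class and helper names here are illustrative, not necessarily what the patch does):

   ```java
   import java.time.LocalDate;
   import java.time.OffsetDateTime;
   import java.time.ZoneOffset;
   import java.time.format.DateTimeParseException;

   public final class SasExpiryParser {
       // Hypothetical helper, not the actual CachedSASToken method: accept
       // both a full ISO date-time and a date-only (ISO_DATE) expiry value.
       static OffsetDateTime parseExpiry(String se) {
           try {
               // Existing behavior: parses values like 2023-11-05T00:00:00Z.
               return OffsetDateTime.parse(se);
           } catch (DateTimeParseException e) {
               // Fallback for date-only values such as 2023-11-05:
               // interpret the expiry as the start of that day in UTC.
               return LocalDate.parse(se).atStartOfDay().atOffset(ZoneOffset.UTC);
           }
       }
   }
   ```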
   
   ### How was this patch tested?
   Added tests verifying that the expiration is properly parsed from the previously failing date-only format.
   Ran `mvn test -Dtest="TestCachedSASToken"`
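
   A sketch of the shape such a test could take (the real assertions in `TestCachedSASToken` differ; `SasExpiryParser.parseExpiry` is the hypothetical helper sketched above):

   ```java
   import static org.junit.Assert.assertEquals;

   import java.time.OffsetDateTime;
   import org.junit.Test;

   public class TestDateOnlyExpiry {
       @Test
       public void testDateOnlyExpiryIsAccepted() {
           // The previously failing format: a bare ISO_DATE expiry.
           OffsetDateTime expiry = SasExpiryParser.parseExpiry("2023-11-05");
           assertEquals(2023, expiry.getYear());
           assertEquals(11, expiry.getMonthValue());
           assertEquals(5, expiry.getDayOfMonth());
       }
   }
   ```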
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   
> CachedSASToken noisy log errors when SAS token has YYYY-MM-DD expiration
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-18675
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18675
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>            Reporter: Gil Cottle
>            Priority: Major
>
> Error Description:
> When using SAS tokens with expiration dates in the format YYYY-MM-DD, a frequent error related to the date format appears in the logs. The parsing code expects an ISO_DATE_TIME value; see the [existing implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CachedSASToken.java#LL112-L119]. The error is noisy in the logs but does not cause functional issues.
> Example stacktrace:
> {code:java}
> 23/03/23 15:40:06 ERROR CachedSASToken: Error parsing se query parameter (2023-11-05) from SAS.
> java.time.format.DateTimeParseException: Text '2023-11-05' could not be parsed at index 10
>       at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1949)
>       at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1851)
>       at java.time.OffsetDateTime.parse(OffsetDateTime.java:402)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.utils.CachedSASToken.getExpiry(CachedSASToken.java:116)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.utils.CachedSASToken.update(CachedSASToken.java:168)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readRemote(AbfsInputStream.java:670)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.ReadBufferWorker.lambda$run$0(ReadBufferWorker.java:66)
>       at com.databricks.common.SparkTaskIOMetrics.withTaskIOMetrics(SparkTaskIOMetrics.scala:43)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.ReadBufferWorker.run(ReadBufferWorker.java:65)
>       at java.lang.Thread.run(Thread.java:750) {code}
> Desired Resolution:
> The expiration-parsing code should accept the YYYY-MM-DD (ISO_DATE) format in addition to the existing ISO_DATE_TIME format.
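> A two-line illustration of the failure and one possible fallback (the fallback is a sketch, not necessarily the committed fix):
> {code:java}
> // Fails: OffsetDateTime.parse expects an offset date-time, not a bare date.
> OffsetDateTime expiry = OffsetDateTime.parse("2023-11-05"); // DateTimeParseException
> // Sketch: treat a date-only value as the start of that day in UTC.
> OffsetDateTime fallback = LocalDate.parse("2023-11-05")
>     .atStartOfDay().atOffset(ZoneOffset.UTC);
> {code}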


