[ https://issues.apache.org/jira/browse/HADOOP-18675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17704537#comment-17704537 ]

ASF GitHub Bot commented on HADOOP-18675:
-----------------------------------------

saxenapranav commented on code in PR #5508:
URL: https://github.com/apache/hadoop/pull/5508#discussion_r1147315190


##########
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CachedSASToken.java:
##########
@@ -114,8 +125,20 @@ private static OffsetDateTime getExpiry(String token) {
     OffsetDateTime seDate = OffsetDateTime.MIN;
     try {
       seDate = OffsetDateTime.parse(seValue, DateTimeFormatter.ISO_DATE_TIME);
-    } catch (DateTimeParseException ex) {
-      LOG.error("Error parsing se query parameter ({}) from SAS.", seValue, ex);
+    } catch (DateTimeParseException dateTimeException) {
+      try {
+        TemporalAccessor dt = ISO_DATE_MIDNIGHT.parseBest(seValue, OffsetDateTime::from, LocalDateTime::from);
+        if (dt instanceof OffsetDateTime) {
+          seDate = (OffsetDateTime) dt;
+        } else if (dt instanceof LocalDateTime) {
+          seDate = ((LocalDateTime) dt).atOffset(ZoneOffset.UTC);
+        } else {
+          throw dateTimeException;
+        }
+      } catch (DateTimeParseException dateOnlyException) {

Review Comment:
   Can we have something like:
   
   `private static final DateTimeFormatter[] formatters;`
   
   ```
   private OffsetDateTime getParsedDateTime(String dateTime) {
     for (DateTimeFormatter formatter : formatters) {
       try {
         TemporalAccessor temporalAccessor =
             formatter.parseBest(dateTime, OffsetDateTime::from, LocalDateTime::from);
         if (temporalAccessor instanceof OffsetDateTime) {
           return (OffsetDateTime) temporalAccessor;
         }
         if (temporalAccessor instanceof LocalDateTime) {
           return ((LocalDateTime) temporalAccessor).atOffset(ZoneOffset.UTC);
         }
       } catch (DateTimeParseException e) {
         // ignore and fall through to the next formatter
       }
     }
     return null;
   }
   ```
   
   And from `getExpiry` we would just call `seDate = getParsedDateTime(seValue);`
   
   This way I feel we can avoid the nested try/catch blocks.
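   
   For concreteness, a minimal self-contained sketch of how such a `formatters` array could be populated and used; `ISO_DATE_MIDNIGHT` is the formatter name from the patch, but its construction here, and the `SasExpiryParser` wrapper class, are illustrative assumptions:
   
   ```
   import java.time.LocalDateTime;
   import java.time.OffsetDateTime;
   import java.time.ZoneOffset;
   import java.time.format.DateTimeFormatter;
   import java.time.format.DateTimeFormatterBuilder;
   import java.time.format.DateTimeParseException;
   import java.time.temporal.ChronoField;
   import java.time.temporal.TemporalAccessor;
   
   public class SasExpiryParser {
   
     // Date-only formatter that defaults the missing time-of-day to midnight,
     // so "2023-11-05" resolves to 2023-11-05T00:00.
     private static final DateTimeFormatter ISO_DATE_MIDNIGHT =
         new DateTimeFormatterBuilder()
             .append(DateTimeFormatter.ISO_LOCAL_DATE)
             .parseDefaulting(ChronoField.HOUR_OF_DAY, 0)
             .parseDefaulting(ChronoField.MINUTE_OF_HOUR, 0)
             .parseDefaulting(ChronoField.SECOND_OF_MINUTE, 0)
             .toFormatter();
   
     // Formats are tried in order: full ISO date-time first, then date-only.
     private static final DateTimeFormatter[] formatters = {
         DateTimeFormatter.ISO_DATE_TIME,
         ISO_DATE_MIDNIGHT
     };
   
     static OffsetDateTime getParsedDateTime(String dateTime) {
       for (DateTimeFormatter formatter : formatters) {
         try {
           TemporalAccessor parsed =
               formatter.parseBest(dateTime, OffsetDateTime::from, LocalDateTime::from);
           if (parsed instanceof OffsetDateTime) {
             return (OffsetDateTime) parsed;
           }
           if (parsed instanceof LocalDateTime) {
             // Offset-less values (including date-only ones) are treated as UTC.
             return ((LocalDateTime) parsed).atOffset(ZoneOffset.UTC);
           }
         } catch (DateTimeParseException e) {
           // Not in this format; fall through to the next formatter.
         }
       }
       return null;
     }
   
     public static void main(String[] args) {
       System.out.println(getParsedDateTime("2023-11-05"));           // 2023-11-05T00:00Z
       System.out.println(getParsedDateTime("2023-11-05T15:40:06Z")); // 2023-11-05T15:40:06Z
     }
   }
   ```
   
   Date-only values resolve to midnight and offset-less values are treated as UTC, matching the `atOffset(ZoneOffset.UTC)` fallback in the patch.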





> CachedSASToken noisy log errors when SAS token has YYYY-MM-DD expiration
> ------------------------------------------------------------------------
>
>                 Key: HADOOP-18675
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18675
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>            Reporter: Gil Cottle
>            Priority: Major
>              Labels: pull-request-available
>
> Error Description:
> When using SAS tokens with an expiration date in the YYYY-MM-DD format, a
> frequent error appears in the logs because the parsing code expects an
> ISO_DATE_TIME value. See the [existing
> implementation|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/CachedSASToken.java#LL112-L119].
> The error is noisy in the logs, but does not cause any functional issues.
> Example stacktrace:
> {code:java}
> 23/03/23 15:40:06 ERROR CachedSASToken: Error parsing se query parameter (2023-11-05) from SAS.
> java.time.format.DateTimeParseException: Text '2023-11-05' could not be parsed at index 10
>       at java.time.format.DateTimeFormatter.parseResolved0(DateTimeFormatter.java:1949)
>       at java.time.format.DateTimeFormatter.parse(DateTimeFormatter.java:1851)
>       at java.time.OffsetDateTime.parse(OffsetDateTime.java:402)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.utils.CachedSASToken.getExpiry(CachedSASToken.java:116)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.utils.CachedSASToken.update(CachedSASToken.java:168)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.AbfsInputStream.readRemote(AbfsInputStream.java:670)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.ReadBufferWorker.lambda$run$0(ReadBufferWorker.java:66)
>       at com.databricks.common.SparkTaskIOMetrics.withTaskIOMetrics(SparkTaskIOMetrics.scala:43)
>       at shaded.databricks.azurebfs.org.apache.hadoop.fs.azurebfs.services.ReadBufferWorker.run(ReadBufferWorker.java:65)
>       at java.lang.Thread.run(Thread.java:750)
> {code}
> Desired Resolution:
> The expiration-parsing code should accept the YYYY-MM-DD format in addition to the existing ISO_DATE_TIME format.
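
For reference, the parse failure described above can be reproduced in a few lines outside Hadoop; the `SeParseRepro` class name is illustrative only:

```
import java.time.OffsetDateTime;
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeParseException;

public class SeParseRepro {
  public static void main(String[] args) {
    try {
      // ISO_DATE_TIME expects a date, a 'T', and a time-of-day, so a date-only
      // 'se' value is rejected as soon as the parser reaches the end of the
      // ten-character date.
      OffsetDateTime.parse("2023-11-05", DateTimeFormatter.ISO_DATE_TIME);
    } catch (DateTimeParseException e) {
      // Prints: Text '2023-11-05' could not be parsed at index 10
      System.out.println(e.getMessage());
    }
  }
}
```

The failure at index 10 is exactly where ISO_DATE_TIME expects the 'T' and time-of-day to begin.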


