[jira] [Resolved] (HADOOP-16961) ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)

2020-07-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16961.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

currently in 3.4; we plan to backport to branch-3.3 once a backport 
conflict is resolved

> ABFS: Adding metrics to AbfsInputStream (AbfsInputStreamStatistics)
> ---
>
> Key: HADOOP-16961
> URL: https://issues.apache.org/jira/browse/HADOOP-16961
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Gabor Bota
>Assignee: Mehakmeet Singh
>Priority: Major
> Fix For: 3.4.0
>
>
> Adding metrics to AbfsInputStream (AbfsInputStreamStatistics) can improve the 
> testing and diagnostics of the connector.
> Also adding some logging.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17112) whitespace not allowed in paths when saving files to s3a via committer

2020-07-03 Thread Krzysztof Adamski (Jira)
Krzysztof Adamski created HADOOP-17112:
--

 Summary: whitespace not allowed in paths when saving files to s3a 
via committer
 Key: HADOOP-17112
 URL: https://issues.apache.org/jira/browse/HADOOP-17112
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.2.0
Reporter: Krzysztof Adamski
 Attachments: image-2020-07-03-16-08-52-340.png

When saving results through a Spark dataframe on the latest 3.0.1-SNAPSHOT compiled 
against hadoop-3.2 with the following settings
--conf 
spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory
  
--conf 
spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter
 
--conf 
spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol
 
--conf spark.hadoop.fs.s3a.committer.name=partitioned 
--conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace 
we are unable to save a file with a whitespace character in its path. It works 
fine without one.

I was looking into the recent commits regarding qualifying the path, but 
couldn't find anything obvious. Is this a known bug?
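For reference, a minimal reproduction sketch using the settings above (the application JAR and the bucket/path are placeholders, and this assumes the hadoop-3.2 S3A committers and the spark-hadoop-cloud bindings are on the classpath):

```shell
# Sketch only: reproduces the report with the --conf flags listed above.
# "app.jar" and "example-bucket" are hypothetical.
spark-submit \
  --conf spark.hadoop.mapreduce.outputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.S3ACommitterFactory \
  --conf spark.sql.parquet.output.committer.class=org.apache.spark.internal.io.cloud.BindingParquetOutputCommitter \
  --conf spark.sql.sources.commitProtocolClass=org.apache.spark.internal.io.cloud.PathOutputCommitProtocol \
  --conf spark.hadoop.fs.s3a.committer.name=partitioned \
  --conf spark.hadoop.fs.s3a.committer.staging.conflict-mode=replace \
  app.jar
# Inside the job, a write to a path containing a space fails,
# e.g. df.write.parquet("s3a://example-bucket/dir with space/"),
# while the same write without whitespace succeeds.
```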

!image-2020-07-03-16-08-15-852.png!






[jira] [Resolved] (HADOOP-17086) ABFS: Fix the parsing errors in ABFS Driver with creation Time (being returned in ListPath)

2020-07-03 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17086.
-
Fix Version/s: (was: 3.4.0)
   3.3.1
   Resolution: Fixed

> ABFS: Fix the parsing errors in ABFS Driver with creation Time (being 
> returned in ListPath)
> ---
>
> Key: HADOOP-17086
> URL: https://issues.apache.org/jira/browse/HADOOP-17086
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Ishani
>Assignee: Bilahari T H
>Priority: Major
> Fix For: 3.3.1
>
>
> I am seeing errors while running the ABFS driver against the stg75 build in canary. 
> These are parsing errors triggered by the creationTime field now returned by the 
> ListPath API. Here are the errors:
> RestVersion: 2020-02-10
>  mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify 
> -Dit.test=ITestAzureBlobFileSystemRenameUnicode
> [ERROR] 
> testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode)
>   Time elapsed: 852.083 s  <<< ERROR!
> Status code: -1 error code: null error message: 
> InvalidAbfsRestOperationExceptionorg.codehaus.jackson.map.exc.UnrecognizedPropertyException:
> Unrecognized field "creationTime" (Class 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not marked as ignorable
>  at [Source: 
> sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; line: 1, column: 48]
>  (through reference chain: 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["pat
> "]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188)
>     at 
> org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735)
>     at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373)
>     at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>     at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>     at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>     at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>     at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>     at java.base/java.lang.Thread.run(Thread.java:834)
> Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: 
> Unrecognized field "creationTime" (Class 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not 
> marked as ignorable
>  at [Source: 
> sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; line: 1, column: 48]
>  (through reference chain: 
> org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["pat
> "]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
>     at 
> org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyExc

Apache Hadoop qbt Report: trunk+JDK8 on Linux/x86_64

2020-07-03 Thread Apache Jenkins Server
For more details, see 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/192/

No changes


[Error replacing 'FILE' - Workspace is not accessible]
