[jira] [Commented] (HADOOP-16862) [JDK11] Support JavaDoc
[ https://issues.apache.org/jira/browse/HADOOP-16862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17153223#comment-17153223 ]

Ishani commented on HADOOP-16862:
---------------------------------

[https://github.com/apache/hadoop/pull/2125]

> [JDK11] Support JavaDoc
> -----------------------
>
>                 Key: HADOOP-16862
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16862
>             Project: Hadoop Common
>          Issue Type: Sub-task
>            Reporter: Akira Ajisaka
>            Assignee: Akira Ajisaka
>            Priority: Major
>
> This issue is to run {{mvn javadoc:javadoc}} successfully in Apache Hadoop with Java 11.
> Now there are many errors.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16966) ABFS: Enable new Rest Version and add documentation for appendblob
[ https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani updated HADOOP-16966:
----------------------------
    Description:
When the new RestVersion (2019-12-12) is enabled in the backend, enable that in the driver along with the documentation for the appendblob.key config values which are possible with the new RestVersion.
Configs:
fs.azure.appendblob.directories

  was:
When the new RestVersion (2019-02-10) is enabled in the backend, enable that in the driver along with the documentation for the appendblob.key config values which are possible with the new RestVersion.
Configs:
fs.azure.enable.appendwithflush
fs.azure.appendblob.key

> ABFS: Enable new Rest Version and add documentation for appendblob
> ------------------------------------------------------------------
>
>                 Key: HADOOP-16966
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16966
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>
> When the new RestVersion (2019-12-12) is enabled in the backend, enable that
> in the driver along with the documentation for the appendblob.key config
> values which are possible with the new RestVersion.
> Configs:
> fs.azure.appendblob.directories
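For context, `fs.azure.appendblob.directories` is a plain Hadoop configuration key, so it would be set in `core-site.xml` like any other ABFS option. The following is a sketch only; the account, container, and path in the value are hypothetical examples, not taken from the issue:

```xml
<!-- Sketch: the value below is a hypothetical example, not from the JIRA. -->
<property>
  <name>fs.azure.appendblob.directories</name>
  <!-- Presumably a list of directories whose files the driver should
       create as append blobs rather than block blobs. -->
  <value>abfs://mycontainer@myaccount.dfs.core.windows.net/logs</value>
</property>
```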
[jira] [Updated] (HADOOP-16966) ABFS: Enable new Rest Version and add documentation for appendblob
[ https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani updated HADOOP-16966:
----------------------------
    Summary: ABFS: Enable new Rest Version and add documentation for appendblob  (was: ABFS: Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.)

> ABFS: Enable new Rest Version and add documentation for appendblob
> ------------------------------------------------------------------
>
>                 Key: HADOOP-16966
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16966
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>
> When the new RestVersion (2019-02-10) is enabled in the backend, enable that
> in the driver along with the documentation for the appendblob.key config
> values which are possible with the new RestVersion.
> Configs:
> fs.azure.enable.appendwithflush
> fs.azure.appendblob.key
[jira] [Resolved] (HADOOP-17058) Support for Appendblob in abfs driver
[ https://issues.apache.org/jira/browse/HADOOP-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani resolved HADOOP-17058.
-----------------------------
    Resolution: Fixed

> Support for Appendblob in abfs driver
> -------------------------------------
>
>                 Key: HADOOP-17058
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17058
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.3.0
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>
> Add changes to support appendblob in the hadoop-azure abfs driver.
[jira] [Resolved] (HADOOP-16818) ABFS: Combine append+flush calls for blockblob & appendblob
[ https://issues.apache.org/jira/browse/HADOOP-16818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani resolved HADOOP-16818.
-----------------------------
    Release Note: It was decided to drop usage of this feature/API (combined calls) in the driver. There is a separate JIRA for appendblob support.
      Resolution: Won't Fix

> ABFS: Combine append+flush calls for blockblob & appendblob
> -----------------------------------------------------------
>
>                 Key: HADOOP-16818
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16818
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Bilahari T H
>            Assignee: Ishani
>            Priority: Minor
>
> Combine append+flush calls for blockblob & appendblob
[jira] [Updated] (HADOOP-17086) Parsing errors in ABFS Driver with creation Time (being returned in ListPath)
[ https://issues.apache.org/jira/browse/HADOOP-17086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani updated HADOOP-17086:
----------------------------
    Description:
I am seeing errors while running the ABFS driver against the stg75 build in canary. This is related to parsing errors as we receive creationTime in the ListPath API. Here are the errors:

RestVersion: 2020-02-10

mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify -Dit.test=ITestAzureBlobFileSystemRenameUnicode

[ERROR] testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode)  Time elapsed: 852.083 s  <<< ERROR!
Status code: -1 error code: null error message: InvalidAbfsRestOperationException
org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "creationTime" (Class org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not marked as ignorable
 at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; line: 1, column: 48] (through reference chain: org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373)
	at org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "creationTime" (Class org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not marked as ignorable
 at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; line: 1, column: 48] (through reference chain: org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
	at org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
	at org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:267)
	at org.codehaus.jackson.map.deser.std.StdDeserializer.reportUnknownProperty(StdDeserializer.java:673)
	at org.codehaus.jackson.map.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:659)
	at org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:1365)
	at org.codehaus.jackson.map.deser.BeanDeserializer._handleUnknown(BeanDeserializer.java:725)
	at org.codehaus.jackson.map.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:703)
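The UnrecognizedPropertyException above is Jackson 1.x (org.codehaus) failing closed on a JSON field the schema class does not model. As a sketch of the two conventional remedies (this is not necessarily the actual HADOOP-17086 patch, and the field names besides creationTime are illustrative):

```java
// Sketch only: standard Jackson 1.x ways to survive a new service-side
// field such as "creationTime"; not necessarily the actual fix applied.
import org.codehaus.jackson.annotate.JsonIgnoreProperties;
import org.codehaus.jackson.annotate.JsonProperty;

// Option 1: tolerate any fields the schema does not declare, so future
// additions to the ListPath response no longer abort deserialization.
@JsonIgnoreProperties(ignoreUnknown = true)
class ListResultEntrySchema {

  @JsonProperty(value = "name")
  private String name;

  // Option 2: model the new field explicitly instead of ignoring it.
  @JsonProperty(value = "creationTime")
  private String creationTime;
}
```

Option 1 is the more forward-compatible choice for a client schema, since the service can add fields at any REST version bump; Option 2 is needed only if the driver actually wants to consume the value.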
[jira] [Created] (HADOOP-17086) Parsing errors in ABFS Driver with creation Time (being returned in ListPath)
Ishani created HADOOP-17086:
-------------------------------

             Summary: Parsing errors in ABFS Driver with creation Time (being returned in ListPath)
                 Key: HADOOP-17086
                 URL: https://issues.apache.org/jira/browse/HADOOP-17086
             Project: Hadoop Common
          Issue Type: Sub-task
            Reporter: Ishani


I am seeing errors while running the ABFS driver against the stg75 build in canary. This is related to parsing errors as we receive creationTime in the ListPath API. Here are the errors:

mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify -Dit.test=ITestAzureBlobFileSystemRenameUnicode

[ERROR] testRenameFileUsingUnicode[0](org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode)  Time elapsed: 852.083 s  <<< ERROR!
Status code: -1 error code: null error message: InvalidAbfsRestOperationException
org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "creationTime" (Class org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not marked as ignorable
 at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; line: 1, column: 48] (through reference chain: org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.executeHttpOperation(AbfsRestOperation.java:273)
	at org.apache.hadoop.fs.azurebfs.services.AbfsRestOperation.execute(AbfsRestOperation.java:188)
	at org.apache.hadoop.fs.azurebfs.services.AbfsClient.listPath(AbfsClient.java:237)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:773)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystemStore.listStatus(AzureBlobFileSystemStore.java:735)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.listStatus(AzureBlobFileSystem.java:373)
	at org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemRenameUnicode.testRenameFileUsingUnicode(ITestAzureBlobFileSystemRenameUnicode.java:92)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
	at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
	at org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
	at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.codehaus.jackson.map.exc.UnrecognizedPropertyException: Unrecognized field "creationTime" (Class org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema), not marked as ignorable
 at [Source: sun.net.www.protocol.http.HttpURLConnection$HttpInputStream@49e30796; line: 1, column: 48] (through reference chain: org.apache.hadoop.fs.azurebfs.contracts.services.ListResultSchema["paths"]->org.apache.hadoop.fs.azurebfs.contracts.services.ListResultEntrySchema["creationTime"])
	at org.codehaus.jackson.map.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:53)
	at org.codehaus.jackson.map.deser.StdDeserializationContext.unknownFieldException(StdDeserializationContext.java:267)
	at org.codehaus.jackson.map.deser.std.StdDeserializer.reportUnknownProperty(StdDeserializer.java:673)
	at org.codehaus.jackson.map.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:659)
	at org.codehaus.jackson.map.deser.BeanDeserializer.handleUnknownProperty(BeanDeserializer.java:1365)
	at org.codehaus.jackson.map.deser.BeanDeserializer._handleUnknown(BeanDeserializer.java:725)
[jira] [Updated] (HADOOP-16966) ABFS: Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.
[ https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani updated HADOOP-16966:
----------------------------
    Description:
When the new RestVersion (2019-02-10) is enabled in the backend, enable that in the driver along with the documentation for the appendblob.key config values which are possible with the new RestVersion.
Configs:
fs.azure.enable.appendwithflush
fs.azure.appendblob.key

  was:
When the new RestVersion (2019-12-12) is enabled in the backend, enable that in the driver along with the documentation for the appendWithFlush config and appendblob.key config values which are possible with the new RestVersion.
Configs:
fs.azure.enable.appendwithflush
fs.azure.appendblob.key

> ABFS: Enable new Rest Version and add documentation for appendblob and
> appendWIthFlush config parameters.
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-16966
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16966
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>
> When the new RestVersion (2019-02-10) is enabled in the backend, enable that
> in the driver along with the documentation for the appendblob.key config
> values which are possible with the new RestVersion.
> Configs:
> fs.azure.enable.appendwithflush
> fs.azure.appendblob.key
[jira] [Commented] (HADOOP-17085) javadoc failing in the yetus report with the latest trunk
[ https://issues.apache.org/jira/browse/HADOOP-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17143497#comment-17143497 ]

Ishani commented on HADOOP-17085:
---------------------------------

[https://github.com/apache/hadoop/pull/2072]

> javadoc failing in the yetus report with the latest trunk
> ---------------------------------------------------------
>
>                 Key: HADOOP-17085
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17085
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: build, yetus
>            Reporter: Ishani
>            Priority: Major
>
> javadoc is failing in the latest yetus report on trunk. Below is a report
> from an empty PR where it is failing.
>
> *-1 overall*
> ||Vote||Subsystem||Runtime||Comment||
> |+0|reexec|26m 14s|Docker mode activated.|
> | | |_ Prechecks _| |
> |+1|dupname|0m 0s|No case conflicting files found.|
> |+1|@author|0m 0s|The patch does not contain any @author tags.|
> |-1 ❌|test4tests|0m 0s|The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.|
> | | |_ trunk Compile Tests _| |
> |+1|mvninstall|23m 14s|trunk passed|
> |+1|compile|0m 46s|trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
> |+1|compile|0m 30s|trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+1|checkstyle|0m 24s|trunk passed|
> |+1|mvnsite|0m 35s|trunk passed|
> |+1|shadedclient|16m 53s|branch has no errors when building and testing our client artifacts.|
> |-1 ❌|javadoc|0m 25s|hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
> |+1|javadoc|0m 23s|trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+0|spotbugs|0m 53s|Used deprecated FindBugs config; considering switching to SpotBugs.|
> |+1|findbugs|0m 51s|trunk passed|
> | | |_ Patch Compile Tests _| |
> |+1|mvninstall|0m 27s|the patch passed|
> |+1|compile|0m 27s|the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
> |+1|javac|0m 27s|the patch passed|
> |+1|compile|0m 22s|the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+1|javac|0m 22s|the patch passed|
> |+1|checkstyle|0m 15s|the patch passed|
> |+1|mvnsite|0m 24s|the patch passed|
> |+1|whitespace|0m 0s|The patch has no whitespace issues.|
> |+1|shadedclient|15m 29s|patch has no errors when building and testing our client artifacts.|
> |-1 ❌|javadoc|0m 22s|hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
> |+1|javadoc|0m 20s|the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |+1|findbugs|0m 53s|the patch passed|
> | | |_ Other Tests _| |
> |+1|unit|1m 19s|hadoop-azure in the patch passed.|
> |+1|asflicense|0m 28s|The patch does not generate ASF License warnings.|
> | | |92m 45s| |
>
> ||Subsystem||Report/Notes||
> |Docker|ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/Dockerfile|
> |GITHUB PR|https://github.com/apache/hadoop/pull/2091|
> |Optional Tests|dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle|
> |uname|Linux ddd84b65f91e 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux|
> |Build tool|maven|
> |Personality|personality/hadoop.sh|
> |git revision|trunk / 7c02d18 (https://github.com/apache/hadoop/commit/7c02d1889bbeabc73c95a4c83f0cd204365ff410)|
> |Default Java|Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |Multi-JDK versions|/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
> |javadoc|https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt|
> |javadoc|https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt|
> |Test Results|https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/testReport/|
> |Max. process+thread count|308 (vs. ulimit of 5500)|
> |modules|C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure|
> |Console output|https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/console|
> |versions|git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1|
> |Powered by|Apache Yetus 0.12.0 https://yetus.apache.org|
>
> This message was automatically generated.
[jira] [Created] (HADOOP-17085) javadoc failing in the yetus report with the latest trunk
Ishani created HADOOP-17085:
-------------------------------

             Summary: javadoc failing in the yetus report with the latest trunk
                 Key: HADOOP-17085
                 URL: https://issues.apache.org/jira/browse/HADOOP-17085
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: build, yetus
            Reporter: Ishani


javadoc is failing in the latest yetus report on trunk. Below is a report from an empty PR where it is failing.

*-1 overall*
||Vote||Subsystem||Runtime||Comment||
|+0|reexec|26m 14s|Docker mode activated.|
| | |_ Prechecks _| |
|+1|dupname|0m 0s|No case conflicting files found.|
|+1|@author|0m 0s|The patch does not contain any @author tags.|
|-1 ❌|test4tests|0m 0s|The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.|
| | |_ trunk Compile Tests _| |
|+1|mvninstall|23m 14s|trunk passed|
|+1|compile|0m 46s|trunk passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
|+1|compile|0m 30s|trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
|+1|checkstyle|0m 24s|trunk passed|
|+1|mvnsite|0m 35s|trunk passed|
|+1|shadedclient|16m 53s|branch has no errors when building and testing our client artifacts.|
|-1 ❌|javadoc|0m 25s|hadoop-azure in trunk failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
|+1|javadoc|0m 23s|trunk passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
|+0|spotbugs|0m 53s|Used deprecated FindBugs config; considering switching to SpotBugs.|
|+1|findbugs|0m 51s|trunk passed|
| | |_ Patch Compile Tests _| |
|+1|mvninstall|0m 27s|the patch passed|
|+1|compile|0m 27s|the patch passed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04|
|+1|javac|0m 27s|the patch passed|
|+1|compile|0m 22s|the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
|+1|javac|0m 22s|the patch passed|
|+1|checkstyle|0m 15s|the patch passed|
|+1|mvnsite|0m 24s|the patch passed|
|+1|whitespace|0m 0s|The patch has no whitespace issues.|
|+1|shadedclient|15m 29s|patch has no errors when building and testing our client artifacts.|
|-1 ❌|javadoc|0m 22s|hadoop-azure in the patch failed with JDK Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.|
|+1|javadoc|0m 20s|the patch passed with JDK Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
|+1|findbugs|0m 53s|the patch passed|
| | |_ Other Tests _| |
|+1|unit|1m 19s|hadoop-azure in the patch passed.|
|+1|asflicense|0m 28s|The patch does not generate ASF License warnings.|
| | |92m 45s| |

||Subsystem||Report/Notes||
|Docker|ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/Dockerfile|
|GITHUB PR|https://github.com/apache/hadoop/pull/2091|
|Optional Tests|dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle|
|uname|Linux ddd84b65f91e 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux|
|Build tool|maven|
|Personality|personality/hadoop.sh|
|git revision|trunk / 7c02d18 (https://github.com/apache/hadoop/commit/7c02d1889bbeabc73c95a4c83f0cd204365ff410)|
|Default Java|Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
|Multi-JDK versions|/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_252-8u252-b09-1~18.04-b09|
|javadoc|https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/branch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt|
|javadoc|https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.7+10-post-Ubuntu-2ubuntu218.04.txt|
|Test Results|https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/testReport/|
|Max. process+thread count|308 (vs. ulimit of 5500)|
|modules|C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure|
|Console output|https://builds.apache.org/job/hadoop-multibranch/job/PR-2091/1/console|
|versions|git=2.17.1 maven=3.6.0 findbugs=3.1.0-RC1|
|Powered by|Apache Yetus 0.12.0 https://yetus.apache.org|

This message was automatically generated.
[jira] [Assigned] (HADOOP-17058) Support for Appendblob in abfs driver
[ https://issues.apache.org/jira/browse/HADOOP-17058?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani reassigned HADOOP-17058:
-------------------------------

    Assignee: Ishani

> Support for Appendblob in abfs driver
> -------------------------------------
>
>                 Key: HADOOP-17058
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17058
>             Project: Hadoop Common
>          Issue Type: Sub-task
>    Affects Versions: 3.3.0
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>
> Add changes to support appendblob in the hadoop-azure abfs driver.
[jira] [Created] (HADOOP-17058) Support for Appendblob in abfs driver
Ishani created HADOOP-17058:
-------------------------------

             Summary: Support for Appendblob in abfs driver
                 Key: HADOOP-17058
                 URL: https://issues.apache.org/jira/browse/HADOOP-17058
             Project: Hadoop Common
          Issue Type: Sub-task
    Affects Versions: 3.3.0
            Reporter: Ishani


Add changes to support appendblob in the hadoop-azure abfs driver.
[jira] [Updated] (HADOOP-16966) ABFS: Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.
[ https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani updated HADOOP-16966:
----------------------------
    Fix Version/s: 3.3.0

> ABFS: Enable new Rest Version and add documentation for appendblob and
> appendWIthFlush config parameters.
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-16966
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16966
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>             Fix For: 3.3.0
>
> When the new RestVersion (2019-12-12) is enabled in the backend, enable that
> in the driver along with the documentation for the appendWithFlush config and
> appendblob.key config values which are possible with the new RestVersion.
> Configs:
> fs.azure.enable.appendwithflush
> fs.azure.appendblob.key
[jira] [Updated] (HADOOP-16966) ABFS: Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.
[ https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani updated HADOOP-16966:
----------------------------
    Affects Version/s: 3.3.0

> ABFS: Enable new Rest Version and add documentation for appendblob and
> appendWIthFlush config parameters.
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-16966
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16966
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>
> When the new RestVersion (2019-12-12) is enabled in the backend, enable that
> in the driver along with the documentation for the appendWithFlush config and
> appendblob.key config values which are possible with the new RestVersion.
> Configs:
> fs.azure.enable.appendwithflush
> fs.azure.appendblob.key
[jira] [Updated] (HADOOP-16966) ABFS: Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.
[ https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani updated HADOOP-16966:
----------------------------
    Summary: ABFS: Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.  (was: Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.)

> ABFS: Enable new Rest Version and add documentation for appendblob and
> appendWIthFlush config parameters.
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-16966
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16966
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>
> When the new RestVersion (2019-12-12) is enabled in the backend, enable that
> in the driver along with the documentation for the appendWithFlush config and
> appendblob.key config values which are possible with the new RestVersion.
> Configs:
> fs.azure.enable.appendwithflush
> fs.azure.appendblob.key
[jira] [Updated] (HADOOP-16966) Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.
[ https://issues.apache.org/jira/browse/HADOOP-16966?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ishani updated HADOOP-16966:
----------------------------
    Description:
When the new RestVersion (2019-12-12) is enabled in the backend, enable that in the driver along with the documentation for the appendWithFlush config and appendblob.key config values which are possible with the new RestVersion.
Configs:
fs.azure.enable.appendwithflush
fs.azure.appendblob.key

  was:
When the new RestVersion (2019-12-12) is enabled in the backend, enable that in the driver along with the documentation for the appendWithFlush config and appendblob.key config values which are possible with the new RestVersion.

> Enable new Rest Version and add documentation for appendblob and
> appendWIthFlush config parameters.
> ----------------------------------------------------------------
>
>                 Key: HADOOP-16966
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16966
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>            Reporter: Ishani
>            Assignee: Ishani
>            Priority: Major
>
> When the new RestVersion (2019-12-12) is enabled in the backend, enable that
> in the driver along with the documentation for the appendWithFlush config and
> appendblob.key config values which are possible with the new RestVersion.
> Configs:
> fs.azure.enable.appendwithflush
> fs.azure.appendblob.key
[jira] [Created] (HADOOP-16966) Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.
Ishani created HADOOP-16966:
-------------------------------

             Summary: Enable new Rest Version and add documentation for appendblob and appendWIthFlush config parameters.
                 Key: HADOOP-16966
                 URL: https://issues.apache.org/jira/browse/HADOOP-16966
             Project: Hadoop Common
          Issue Type: Sub-task
          Components: fs/azure
            Reporter: Ishani
            Assignee: Ishani


When the new RestVersion (2019-12-12) is enabled in the backend, enable that in the driver along with the documentation for the appendWithFlush config and appendblob.key config values which are possible with the new RestVersion.