[jira] [Updated] (HADOOP-14188) Remove the usage of org.mockito.internal.util.reflection.Whitebox
[ https://issues.apache.org/jira/browse/HADOOP-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Akira Ajisaka updated HADOOP-14188:
---
    Attachment: HADOOP-14188.05.patch

05 patch:
* fixed license error
* modified LICENSE.txt

> Remove the usage of org.mockito.internal.util.reflection.Whitebox
> -
>
> Key: HADOOP-14188
> URL: https://issues.apache.org/jira/browse/HADOOP-14188
> Project: Hadoop Common
> Issue Type: Test
> Components: test
> Reporter: Akira Ajisaka
> Assignee: Akira Ajisaka
> Priority: Minor
> Attachments: HADOOP-14188.01.patch, HADOOP-14188.02.patch,
> HADOOP-14188.03.patch, HADOOP-14188.04.patch, HADOOP-14188.05.patch
>
> org.mockito.internal.util.reflection.Whitebox was removed in Mockito 2.1, so
> we need to remove the usage to upgrade Mockito. Getter/setter methods can be
> used instead of this hack.

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
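The getter/setter replacement mentioned in the issue description can be sketched with a minimal, hypothetical example (the Counter class and its field are illustrative, not code from the actual patch): instead of reflectively writing a private field with Whitebox.setInternalState, the test calls a test-visible setter, which removes the dependency on Mockito internals.

```java
// Hypothetical sketch: replacing Whitebox-based reflection with a
// package-private, test-visible setter. Class and field names are
// illustrative, not taken from the Hadoop patch.
class Counter {
    private long value;

    long getValue() {
        return value;
    }

    // Replaces the removed internal API call:
    //   Whitebox.setInternalState(counter, "value", 42L)
    void setValue(long value) {
        this.value = value;
    }
}

public class WhiteboxReplacementSketch {
    public static void main(String[] args) {
        Counter counter = new Counter();
        counter.setValue(42L);
        if (counter.getValue() != 42L) {
            throw new AssertionError("setter did not update state");
        }
        System.out.println("value = " + counter.getValue());
    }
}
```

Keeping the setter package-private means production callers outside the test's package cannot reach it; marking it with an annotation such as Guava's @VisibleForTesting (widely used in Hadoop) makes the intent explicit.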
[jira] [Commented] (HADOOP-13926) S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954540#comment-15954540 ]

Mingliang Liu commented on HADOOP-13926:

Tested against us-west-1:
{code}
$ mvn -Dit.test='ITestS3A*,ITestS3Guard*,ITestDynamo*' -Dtest=none -Dscale -Ds3guard -Ddynamo -q clean verify

Results :

Tests run: 360, Failures: 0, Errors: 0, Skipped: 16
{code}

> S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
> ---
>
> Key: HADOOP-13926
> URL: https://issues.apache.org/jira/browse/HADOOP-13926
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Rajesh Balamohan
> Assignee: Mingliang Liu
> Attachments: HADOOP-13926-HADOOP-13345.001.patch,
> HADOOP-13926-HADOOP-13345.002.patch, HADOOP-13926-HADOOP-13345.003.patch,
> HADOOP-13926-HADOOP-13345.004.patch, HADOOP-13926-HADOOP-13345.005.patch,
> HADOOP-13926.wip.proto.branch-13345.1.patch
>
> Need to check if {{listLocatedStatus}} can make use of metastore's
> listChildren feature.
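The pattern being tested above can be pictured as a small, hypothetical fall-back sketch (none of the names below are the real S3AFileSystem or S3Guard APIs; MetadataStore, listChildren and s3List are stand-ins): a listing call first asks the metadata store for cached children, and only issues the eventually-consistent backing-store LIST when the store has no record.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch only: a listing call consults a metadata store's
// listChildren() first and falls back to the backing store. The interface
// and method names are hypothetical, not the actual S3Guard APIs.
public class GuardedListingSketch {

    interface MetadataStore {
        // Returns cached children of a directory, or null when unknown.
        List<String> listChildren(String dir);
    }

    // Stand-in for a real S3 LIST call against the bucket.
    static List<String> s3List(String dir) {
        return Arrays.asList(dir + "/a", dir + "/b");
    }

    static List<String> listLocatedStatus(MetadataStore ms, String dir) {
        List<String> cached = ms.listChildren(dir);
        // Serve from the metadata store when it knows the directory,
        // otherwise fall back to listing the backing store.
        return (cached != null) ? cached : s3List(dir);
    }

    public static void main(String[] args) {
        MetadataStore ms =
            d -> d.equals("/cached") ? Arrays.asList("/cached/x") : null;
        System.out.println(listLocatedStatus(ms, "/cached")); // from the store
        System.out.println(listLocatedStatus(ms, "/other"));  // S3 fall-back
    }
}
```

A real implementation also has to merge the two sources (the store may know about files the LIST has not yet surfaced), which is what the provided-status plumbing discussed later in this thread is for.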
[jira] [Commented] (HADOOP-14271) Correct spelling of 'occurred' and variants
[ https://issues.apache.org/jira/browse/HADOOP-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954523#comment-15954523 ]

Hudson commented on HADOOP-14271:
-
SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11519 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11519/])
HADOOP-14271. Correct spelling of 'occurred' and variants. Contributed (cdouglas: rev 6eba79232f36b36e0196163adc8fe4219a6b6bf9)
* (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/MultithreadedTestUtil.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/TestContainerLaunch.java
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/java/org/apache/hadoop/mapreduce/lib/jobcontrol/JobControl.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java
* (edit) hadoop-tools/hadoop-streaming/src/main/java/org/apache/hadoop/streaming/StreamKeyValUtil.java
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/java/org/apache/hadoop/fs/http/server/FSOperations.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/test/java/org/apache/hadoop/yarn/server/resourcemanager/scheduler/capacity/TestParentQueue.java
* (edit) hadoop-common-project/hadoop-common/src/main/native/gtest/include/gtest/gtest.h
* (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSStripedInputStream.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/UTF8ByteArrayUtils.java
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-applications-distributedshell/src/test/java/org/apache/hadoop/yarn/applications/distributedshell/TestDistributedShell.java
* (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobAppendStream.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Progressable.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* (edit) hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc

> Correct spelling of 'occurred' and variants
> -
>
> Key: HADOOP-14271
> URL: https://issues.apache.org/jira/browse/HADOOP-14271
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha2
> Reporter: Yeliang Cang
> Assignee: Yeliang Cang
> Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6427.001.patch
>
> I have found some spelling mistakes in both hdfs and yarn components. The word
> "occured" should be "occurred".
[jira] [Commented] (HADOOP-14255) S3A to delete unnecessary fake directory objects in mkdirs()
[ https://issues.apache.org/jira/browse/HADOOP-14255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954510#comment-15954510 ]

Hadoop QA commented on HADOOP-14255:

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 54s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 36s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 93m 11s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14255 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861797/HADOOP-14255.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 7de8375beb37 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5faa949 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12017/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12017/testReport/ |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12017/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
[jira] [Updated] (HADOOP-14271) Correct spelling of 'occurred' and variants
[ https://issues.apache.org/jira/browse/HADOOP-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-14271:
---
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 3.0.0-alpha3
     Release Note:   (was: correct spelling mistakes, "occure" vs "occurre", "occures" vs "occurs")
           Status: Resolved  (was: Patch Available)

+1 I committed this. Thanks, Yeliang.

> Correct spelling of 'occurred' and variants
> -
>
> Key: HADOOP-14271
> URL: https://issues.apache.org/jira/browse/HADOOP-14271
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha2
> Reporter: Yeliang Cang
> Assignee: Yeliang Cang
> Priority: Trivial
> Fix For: 3.0.0-alpha3
>
> Attachments: YARN-6427.001.patch
>
> I have found some spelling mistakes in both hdfs and yarn components. The word
> "occured" should be "occurred".
[jira] [Assigned] (HADOOP-14271) Correct spelling of 'occurred' and variants
[ https://issues.apache.org/jira/browse/HADOOP-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas reassigned HADOOP-14271:
--
    Assignee: Yeliang Cang

> Correct spelling of 'occurred' and variants
> -
>
> Key: HADOOP-14271
> URL: https://issues.apache.org/jira/browse/HADOOP-14271
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha2
> Reporter: Yeliang Cang
> Assignee: Yeliang Cang
> Priority: Trivial
>
> Attachments: YARN-6427.001.patch
>
> I have found some spelling mistakes in both hdfs and yarn components. The word
> "occured" should be "occurred".
[jira] [Updated] (HADOOP-14271) Correct spelling of 'occurred' and variants
[ https://issues.apache.org/jira/browse/HADOOP-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas updated HADOOP-14271:
---
    Summary: Correct spelling of 'occurred' and variants  (was: Some spelling mistakes "occured" vs "occurred")

> Correct spelling of 'occurred' and variants
> -
>
> Key: HADOOP-14271
> URL: https://issues.apache.org/jira/browse/HADOOP-14271
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha2
> Reporter: Yeliang Cang
> Priority: Trivial
>
> Attachments: YARN-6427.001.patch
>
> I have found some spelling mistakes in both hdfs and yarn components. The word
> "occured" should be "occurred".
[jira] [Moved] (HADOOP-14271) Some spelling mistakes "occured" vs "occurred"
[ https://issues.apache.org/jira/browse/HADOOP-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Douglas moved YARN-6427 to HADOOP-14271:
--
        Fix Version/s: (was: 3.0.0-alpha2)
    Affects Version/s: (was: 3.0.0-alpha2)
                       3.0.0-alpha2
                  Key: HADOOP-14271  (was: YARN-6427)
              Project: Hadoop Common  (was: Hadoop YARN)

> Some spelling mistakes "occured" vs "occurred"
> --
>
> Key: HADOOP-14271
> URL: https://issues.apache.org/jira/browse/HADOOP-14271
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 3.0.0-alpha2
> Reporter: Yeliang Cang
> Priority: Trivial
>
> Attachments: YARN-6427.001.patch
>
> I have found some spelling mistakes in both hdfs and yarn components. The word
> "occured" should be "occurred".
[jira] [Commented] (HADOOP-13926) S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954503#comment-15954503 ]

Hadoop QA commented on HADOOP-13926:

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 37s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 59s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13926 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861801/HADOOP-13926-HADOOP-13345.005.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 7d6b9fb1777b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-13345 / 0c32daa |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12018/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12018/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
> ---
>
> Key: HADOOP-13926
> URL: https://issues.apache.org/jira/browse/HADOOP-13926
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Rajesh Balamohan
> Assignee: Mingliang Liu
> Attachments: HADOOP-13926-HADOOP-13345.001.patch,
> HADOOP-13926-HADOOP-13345.002.patch, HADOOP-13926-HADOOP-13345.003.patch,
>
[jira] [Updated] (HADOOP-13926) S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mingliang Liu updated HADOOP-13926:
---
    Attachment: HADOOP-13926-HADOOP-13345.005.patch

Thanks [~rajesh.balamohan]. The v5 patch adds the javadoc.

> S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
> ---
>
> Key: HADOOP-13926
> URL: https://issues.apache.org/jira/browse/HADOOP-13926
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Rajesh Balamohan
> Assignee: Mingliang Liu
> Attachments: HADOOP-13926-HADOOP-13345.001.patch,
> HADOOP-13926-HADOOP-13345.002.patch, HADOOP-13926-HADOOP-13345.003.patch,
> HADOOP-13926-HADOOP-13345.004.patch, HADOOP-13926-HADOOP-13345.005.patch,
> HADOOP-13926.wip.proto.branch-13345.1.patch
>
> Need to check if {{listLocatedStatus}} can make use of metastore's
> listChildren feature.
[jira] [Commented] (HADOOP-13926) S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954481#comment-15954481 ]

Rajesh Balamohan commented on HADOOP-13926:
---

Thanks for the patch [~liuml07]. Patch LGTM.

Very minor comment: {{Listing::createFileStatusListingIterator}} may need to have {{providedStatus}} in its javadoc.

> S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
> ---
>
> Key: HADOOP-13926
> URL: https://issues.apache.org/jira/browse/HADOOP-13926
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Rajesh Balamohan
> Assignee: Mingliang Liu
> Attachments: HADOOP-13926-HADOOP-13345.001.patch,
> HADOOP-13926-HADOOP-13345.002.patch, HADOOP-13926-HADOOP-13345.003.patch,
> HADOOP-13926-HADOOP-13345.004.patch,
> HADOOP-13926.wip.proto.branch-13345.1.patch
>
> Need to check if {{listLocatedStatus}} can make use of metastore's
> listChildren feature.
[jira] [Commented] (HADOOP-14226) S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories of destination paths
[ https://issues.apache.org/jira/browse/HADOOP-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954470#comment-15954470 ]

Mingliang Liu commented on HADOOP-14226:

The v2 patch looks good to me. It's much clearer. Thanks!

> S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories
> of destination paths
>
>
> Key: HADOOP-14226
> URL: https://issues.apache.org/jira/browse/HADOOP-14226
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HADOOP-14226-HADOOP-13345.000.patch,
> HADOOP-14226-HADOOP-13345.001.patch, HADOOP-14226-HADOOP-13345.002.patch
>
> UPDATE: Instead of changing the test, in this JIRA we make sure for each path
> to put, DDBMS::move() has records of each directory up to the root. See
> [~fabbri]'s comment in this JIRA for more detail.
> After running {{ITestDynamoDBMetadataStoreScale}}, the test data is not
> cleaned up. There is a call to {{clearMetadataStore(ms, count);}} in the
> finally clause though. The reason is that, the internally called method
> {{DynamoDBMetadataStore::deleteSubtree()}} is assuming there should be an
> item for the parent dest path:
> {code}
> parent=/fake-bucket, child=moved-here, is_dir=true
> {code}
> In DynamoDBMetadataStore implementation, we assume that _if a path exists,
> all its ancestors will also exist in the table_. We need to pre-create dest
> path to maintain this invariant so that test data can be cleaned up
> successfully.
> I think there may be other tests with the same problem. Let's
> identify/address them separately.
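The invariant this issue enforces — if a path exists in the table, all of its ancestors exist too — amounts to walking each destination path up to the root and recording every parent directory. A minimal, hypothetical sketch of that walk (method and collection names are illustrative, not the DynamoDBMetadataStore API):

```java
import java.util.Set;
import java.util.TreeSet;

// Illustrative sketch of the ancestor-population invariant: recording a
// path also records every parent directory up to the root, so deleteSubtree
// on any ancestor can find its entries. Names here are hypothetical.
public class AncestorPopulationSketch {

    static Set<String> putWithAncestors(String path) {
        Set<String> entries = new TreeSet<>();
        entries.add(path);
        int slash;
        // Walk up to the root, adding an entry for each parent directory.
        while ((slash = path.lastIndexOf('/')) > 0) {
            path = path.substring(0, slash);
            entries.add(path);
        }
        entries.add("/");
        return entries;
    }

    public static void main(String[] args) {
        // Records /, /fake-bucket, /fake-bucket/moved-here, and the leaf.
        System.out.println(putWithAncestors("/fake-bucket/moved-here/file"));
    }
}
```

In the real store each entry would be a (parent, child, is_dir) item like the one quoted in the issue description; the sketch only shows the path arithmetic.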
[jira] [Updated] (HADOOP-14255) S3A to delete unnecessary fake directory objects in mkdirs()
[ https://issues.apache.org/jira/browse/HADOOP-14255?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mingliang Liu updated HADOOP-14255:
---
    Attachment: HADOOP-14255.001.patch

[~ste...@apache.org], thanks for reviewing.

{quote}
could we have the test actually create the whole list of children, rather than mkdirs(nested)? as today a mkdirs(nested) won't create the parents, but a mkdir(a), (a/b), (a/b/c) will create lots of those ancestors
{quote}

The v1 patch adds one more test to address the first comment. I'd prefer to keep the existing test because it verifies that after FileSystem::mkdirs(), all previously non-existent ancestors (Paths, not necessarily S3 fake directory objects) exist.

{quote}
Maybe we should add a test for that too:
mkdir a
mkdir a/b
assert a/b exists
rm -rf a
assert a/b doesn't exist
{quote}

The second behavior you proposed is covered by {{AbstractContractDeleteTest::testDeleteDeepEmptyDir}}, which passes for S3AFileSystem.

> S3A to delete unnecessary fake directory objects in mkdirs()
> -
>
> Key: HADOOP-14255
> URL: https://issues.apache.org/jira/browse/HADOOP-14255
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HADOOP-14255.000.patch, HADOOP-14255.001.patch
>
> In S3AFileSystem, as an optimization, we delete unnecessary fake directory
> objects if that directory contains at least one (nested) file. That is done
> when closing the stream of a newly created file. However, if the directory
> becomes non-empty after we just create an empty subdirectory, we do not
> delete its fake directory object, even though that fake directory object
> becomes "unnecessary".
> So in {{S3AFileSystem::mkdirs()}}, we have a pending TODO:
> {quote}
> // TODO: If we have created an empty file at /foo/bar and we then call
> // mkdirs for /foo/bar/baz/roo what happens to the empty file /foo/bar/?
> private boolean innerMkdirs(Path p, FsPermission permission)
> {quote}
> This JIRA is to fix the TODO: provide consistent behavior for a fake
> directory object between its nested subdirectory and nested file by deleting
> it.
> See related discussion in [HADOOP-14236]. Thanks [~ste...@apache.org] for
> the discussion.
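The cleanup being added to mkdirs() can be illustrated with a small, hypothetical simulation (a TreeSet standing in for the bucket; none of this is the real S3AFileSystem code): S3 has no real directories, so an empty directory is represented by a marker object such as "foo/bar/". Once a directory gains any child — a file or a subdirectory — its marker is redundant and can be deleted.

```java
import java.util.Set;
import java.util.TreeSet;

// Hedged sketch of fake-directory cleanup in mkdirs(). The bucket is
// simulated by a set of keys; a trailing '/' marks a fake directory object.
// All names are illustrative, not the S3AFileSystem implementation.
public class FakeDirCleanupSketch {

    static Set<String> bucket = new TreeSet<>();

    static void mkdirs(String path) {
        bucket.add(path + "/"); // marker for the new (empty) directory
        deleteUnnecessaryFakeDirectories(path);
    }

    static void deleteUnnecessaryFakeDirectories(String path) {
        // Every ancestor now has at least one child, so its marker
        // object is no longer needed and can be removed.
        int slash;
        while ((slash = path.lastIndexOf('/')) > 0) {
            path = path.substring(0, slash);
            bucket.remove(path + "/");
        }
    }

    public static void main(String[] args) {
        mkdirs("/foo/bar");
        mkdirs("/foo/bar/baz/roo");
        // /foo/bar/'s marker was deleted once it gained a child.
        System.out.println(bucket);
    }
}
```

This mirrors the consistency goal stated in the issue: a directory's marker disappears whether it gains a nested file (done on stream close today) or a nested subdirectory (what this patch adds to mkdirs()).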
[jira] [Commented] (HADOOP-14226) S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories of destination paths
[ https://issues.apache.org/jira/browse/HADOOP-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954453#comment-15954453 ]

Hadoop QA commented on HADOOP-14226:

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 56s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14226 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861790/HADOOP-14226-HADOOP-13345.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 1369752fc87b 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-13345 / 0c32daa |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12016/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12016/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories
> of destination paths
>
>
> Key: HADOOP-14226
> URL: https://issues.apache.org/jira/browse/HADOOP-14226
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/s3
> Affects Versions: HADOOP-13345
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Attachments: HADOOP-14226-HADOOP-13
[jira] [Comment Edited] (HADOOP-14226) S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories of destination paths
[ https://issues.apache.org/jira/browse/HADOOP-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954432#comment-15954432 ] Aaron Fabbri edited comment on HADOOP-14226 at 4/4/17 1:12 AM: --- This patch looks good [~liuml07]. I changed the test code some as I was working through the code. I wanted a test case with a bit more directory depth, and fails before, but succeeds after, the change to {{DynamoDBMetadataStore}}. Attaching v2 patch which just reworks the test case some. I had to avoid doing a put() of the dest path before the move, since put() would (correctly) create the ancestor directory entries. This test depends on move() to do it, so it fails without your change and succeeds with it. If you are ok with my test changes I can commit the patch after a complete test run. was (Author: fabbri): This patch looks good [~liuml07]. I wanted a test case that failed before, but succeeded after, the change to {{DynamoDBMetadataStore}}. Attaching v2 patch which just reworks the test case some. I had to avoid doing a put() of the dest path before the move, since put() would (correctly) create the ancestor directory entries. This test depends on move() to do it, so it fails without your change and succeeds with it. If you are ok with my test changes I can commit the patch after a complete test run. > S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories > of destination paths > > > Key: HADOOP-14226 > URL: https://issues.apache.org/jira/browse/HADOOP-14226 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14226-HADOOP-13345.000.patch, > HADOOP-14226-HADOOP-13345.001.patch, HADOOP-14226-HADOOP-13345.002.patch > > > UPDATE: Instead of changing the test, in this JIRA we make sure for each path > to put, DDBMS::move() has records of each directory up to the root. 
See > [~fabbri]'s comment in this JIRA for more detail. > After running {{ITestDynamoDBMetadataStoreScale}}, the test data is not > cleaned up. There is a call to {{clearMetadataStore(ms, count);}} in the > finally clause though. The reason is that, the internally called method > {{DynamoDBMetadataStore::deleteSubtree()}} is assuming there should be an > item for the parent dest path: > {code} > parent=/fake-bucket, child=moved-here, is_dir=true > {code} > In DynamoDBMetadataStore implementation, we assume that _if a path exists, > all its ancestors will also exist in the table_. We need to pre-create dest > path to maintain this invariant so that test data can be cleaned up > successfully. > I think there may be other tests with the same problem. Let's > identify/address them separately. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
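The invariant behind this fix (if a path exists, all its ancestors also exist in the table) can be sketched in miniature. The following is an illustrative model only, not the actual {{DynamoDBMetadataStore}} code; the class and method names are hypothetical. Given a destination path and the set of paths already recorded, it walks upward collecting the ancestor directory entries that {{move()}} would have to create, stopping at the root or at the first entry that already exists:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Illustrative model only (not DynamoDBMetadataStore): collect the ancestor
// directory entries that move() would have to create for a destination path,
// walking upward until the root or an already-present entry is reached.
class AncestorSketch {
    static List<String> ancestorsToCreate(String destPath, Set<String> existing) {
        List<String> toCreate = new ArrayList<>();
        String parent = parentOf(destPath);
        while (parent != null && !existing.contains(parent)) {
            toCreate.add(parent);
            parent = parentOf(parent);
        }
        return toCreate;
    }

    static String parentOf(String path) {
        int i = path.lastIndexOf('/');
        return i <= 0 ? null : path.substring(0, i);  // "/" has no recordable parent
    }
}
```

For {{/fake-bucket/a/b/file}} with only {{/fake-bucket}} present, the missing ancestors are {{/fake-bucket/a/b}} and {{/fake-bucket/a}}.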
[jira] [Updated] (HADOOP-14226) S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories of destination paths
[ https://issues.apache.org/jira/browse/HADOOP-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-14226: -- Attachment: HADOOP-14226-HADOOP-13345.002.patch > S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories > of destination paths > > > Key: HADOOP-14226 > URL: https://issues.apache.org/jira/browse/HADOOP-14226 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14226-HADOOP-13345.000.patch, > HADOOP-14226-HADOOP-13345.001.patch, HADOOP-14226-HADOOP-13345.002.patch > > > UPDATE: Instead of changing the test, in this JIRA we make sure for each path > to put, DDBMS::move() has records of each directory up to the root. See > [~fabbri]'s comment in this JIRA for more detail. > After running {{ITestDynamoDBMetadataStoreScale}}, the test data is not > cleaned up. There is a call to {{clearMetadataStore(ms, count);}} in the > finally clause though. The reason is that, the internally called method > {{DynamoDBMetadataStore::deleteSubtree()}} is assuming there should be an > item for the parent dest path: > {code} > parent=/fake-bucket, child=moved-here, is_dir=true > {code} > In DynamoDBMetadataStore implementation, we assume that _if a path exists, > all its ancestors will also exist in the table_. We need to pre-create dest > path to maintain this invariant so that test data can be cleaned up > successfully. > I think there may be other tests with the same problem. Let's > identify/address them separately. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14226) S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories of destination paths
[ https://issues.apache.org/jira/browse/HADOOP-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954432#comment-15954432 ] Aaron Fabbri commented on HADOOP-14226: --- This patch looks good [~liuml07]. I wanted a test case that failed before, but succeeded after, the change to {{DynamoDBMetadataStore}}. Attaching v2 patch which just reworks the test case some. I had to avoid doing a put() of the dest path before the move, since put() would (correctly) create the ancestor directory entries. This test depends on move() to do it, so it fails without your change and succeeds with it. If you are ok with my test changes I can commit the patch after a complete test run. > S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories > of destination paths > > > Key: HADOOP-14226 > URL: https://issues.apache.org/jira/browse/HADOOP-14226 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14226-HADOOP-13345.000.patch, > HADOOP-14226-HADOOP-13345.001.patch, HADOOP-14226-HADOOP-13345.002.patch > > > UPDATE: Instead of changing the test, in this JIRA we make sure for each path > to put, DDBMS::move() has records of each directory up to the root. See > [~fabbri]'s comment in this JIRA for more detail. > After running {{ITestDynamoDBMetadataStoreScale}}, the test data is not > cleaned up. There is a call to {{clearMetadataStore(ms, count);}} in the > finally clause though. The reason is that, the internally called method > {{DynamoDBMetadataStore::deleteSubtree()}} is assuming there should be an > item for the parent dest path: > {code} > parent=/fake-bucket, child=moved-here, is_dir=true > {code} > In DynamoDBMetadataStore implementation, we assume that _if a path exists, > all its ancestors will also exist in the table_. 
We need to pre-create dest > path to maintain this invariant so that test data can be cleaned up > successfully. > I think there may be other tests with the same problem. Let's > identify/address them separately. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14238) [Umbrella] Rechecking Guava's object is not exposed to user-facing API
[ https://issues.apache.org/jira/browse/HADOOP-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954415#comment-15954415 ] Tsuyoshi Ozawa commented on HADOOP-14238: - Thanks Andrew for the information. Annotation File Utilities, which is included in the Checker Framework, can extract the annotations included in classes. Hence, we can use it to check whether classes have IA.Public and LimitedPrivate. Let me try. https://checkerframework.org/annotation-file-utilities/#extract-annotations > [Umbrella] Rechecking Guava's object is not exposed to user-facing API > -- > > Key: HADOOP-14238 > URL: https://issues.apache.org/jira/browse/HADOOP-14238 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Tsuyoshi Ozawa >Priority: Critical > > This is reported by [~hitesh] on HADOOP-10101. > At least, AMRMClient#waitFor takes Guava's Supplier instance as an argument. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
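Independent of the Annotation File Utilities tooling mentioned above, the check itself (does a class carry an audience annotation?) can be illustrated reflectively when the annotation is runtime-retained. {{Public}}, {{PublicApi}}, {{InternalOnly}}, and {{AudienceCheck}} below are hypothetical stand-ins, not the real org.apache.hadoop.classification annotations:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical stand-in for Hadoop's audience annotations; the real
// IA.Public / LimitedPrivate annotations live in Hadoop itself.
@Retention(RetentionPolicy.RUNTIME)
@interface Public {}

@Public
class PublicApi {}

class InternalOnly {}

class AudienceCheck {
    // True if the class carries the audience annotation.
    static boolean isPublicApi(Class<?> c) {
        return c.isAnnotationPresent(Public.class);
    }
}
```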
[jira] [Commented] (HADOOP-13926) S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954408#comment-15954408 ] Hadoop QA commented on HADOOP-13926: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 26s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 56s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s{color} | {color:green} HADOOP-13345 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} javac {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 35s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13926 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861782/HADOOP-13926-HADOOP-13345.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 1ead7249e3b5 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-13345 / 0c32daa | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12015/testReport/ | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12015/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore > --- > > Key: HADOOP-13926 > URL: https://issues.apache.org/jira/browse/HADOOP-13926 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Mingliang Liu > Attachments: HADOOP-13926-HADOOP-13345.001.patch, > HADOOP-13926-HADOOP-13345.002.patch, HADOOP-13926-HADOOP-13345.003.patch, >
[jira] [Assigned] (HADOOP-13926) S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu reassigned HADOOP-13926: -- Assignee: Mingliang Liu (was: Steve Loughran) > S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore > --- > > Key: HADOOP-13926 > URL: https://issues.apache.org/jira/browse/HADOOP-13926 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Mingliang Liu > Attachments: HADOOP-13926-HADOOP-13345.001.patch, > HADOOP-13926-HADOOP-13345.002.patch, HADOOP-13926-HADOOP-13345.003.patch, > HADOOP-13926-HADOOP-13345.004.patch, > HADOOP-13926.wip.proto.branch-13345.1.patch > > > Need to check if {{listLocatedStatus}} can make use of metastore's > listChildren feature. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13926) S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13926: --- Attachment: HADOOP-13926-HADOOP-13345.004.patch [~ste...@apache.org] I addressed the two minor comments you provided. I also added the integration test which fails w/o this patch and passes w/ this patch, both with S3Guard enabled. If S3Guard is disabled, the test itself will be skipped. [~fabbri] and [~rajesh.balamohan], do you also have time to take a look? Thanks! > S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore > --- > > Key: HADOOP-13926 > URL: https://issues.apache.org/jira/browse/HADOOP-13926 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Steve Loughran > Attachments: HADOOP-13926-HADOOP-13345.001.patch, > HADOOP-13926-HADOOP-13345.002.patch, HADOOP-13926-HADOOP-13345.003.patch, > HADOOP-13926-HADOOP-13345.004.patch, > HADOOP-13926.wip.proto.branch-13345.1.patch > > > Need to check if {{listLocatedStatus}} can make use of metastore's > listChildren feature. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
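The idea of {{listLocatedStatus()}} employing the MetadataStore can be sketched abstractly: consult the store first and only fall back to the (slower, eventually consistent) S3 LIST when the store has no answer. This is a toy model under stated assumptions, not {{S3AFileSystem}} code; the map stands in for the store's listChildren, the supplier for the raw S3 LIST, and writing the result back is one possible policy shown only for illustration:

```java
import java.util.List;
import java.util.Map;
import java.util.function.Supplier;

// Toy model: serve a directory listing from the metadata store when it has
// one, otherwise fall back to S3 and remember the result for the next caller.
class ListingSketch {
    static List<String> listLocatedStatus(Map<String, List<String>> metaStore,
                                          String path,
                                          Supplier<List<String>> s3List) {
        List<String> cached = metaStore.get(path);
        if (cached != null) {
            return cached;              // answered from the MetadataStore
        }
        List<String> fromS3 = s3List.get();
        metaStore.put(path, fromS3);    // cache for subsequent listings
        return fromS3;
    }
}
```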
[jira] [Commented] (HADOOP-14270) HADOOP 2.8 Release tar ball size should be smaller
[ https://issues.apache.org/jira/browse/HADOOP-14270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954275#comment-15954275 ] Andrew Wang commented on HADOOP-14270: -- There might be some create-release fixes that didn't make it to branch-2, recommend we check the git log and reconcile. > HADOOP 2.8 Release tar ball size should be smaller > -- > > Key: HADOOP-14270 > URL: https://issues.apache.org/jira/browse/HADOOP-14270 > Project: Hadoop Common > Issue Type: Bug > Components: build >Reporter: Junping Du >Priority: Critical > > In the voting stage of 2.8.0, [~jojochuang] reported that the 2.8.0 tar ball > is 410 MB while the previous release, 2.7.3, is just 205 MB. [~andrew.wang] did > some investigation and found that most of the added size comes from src-html files > included in the tar.gz, which are not necessary in a release tar ball. > I tried to manually build the tar ball (using "mvn package -Pdist,native,docs,src > -DskipTests -Dtar"), and the dist tar ball is still ~200 MB. The only > difference is that since 2.8 we have been using a new dev tool, create-release, to create > release bits, so we should fix it from there. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.
[ https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954270#comment-15954270 ] Rushabh S Shah commented on HADOOP-14104: - Thanks [~andrew.wang] and [~yzhangal] for reviewing and for your valuable feedback. Will upload a new patch by EOB tomorrow addressing your review comments. > Client should always ask namenode for kms provider path. > > > Key: HADOOP-14104 > URL: https://issues.apache.org/jira/browse/HADOOP-14104 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, > HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch, > HADOOP-14104-trunk-v4.patch > > > According to current implementation of kms provider in client conf, there can > only be one kms. > In multi-cluster environment, if a client is reading encrypted data from > multiple clusters it will only get kms token for local cluster. > Not sure whether the target version is correct or not. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.
[ https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954265#comment-15954265 ] Andrew Wang commented on HADOOP-14104: -- Hi Rushabh, thanks for revving. Yongjun and I reviewed this together, posted here are our combined review comments. Looks really good overall! Nits: * Change DFS_KMS_PREFIX to private * Rename getKmsSecretKey to getKeyProviderMapKey (included in item below), since "SecretKey" sounds like an encryption key, a javadoc would also help Bigger things: In DistributedFileSystem, this changes the uri passed to DFSClient: {code} this.dfs = new DFSClient(uri, conf, statistics); this.uri = URI.create(uri.getScheme()+"://"+uri.getAuthority()); {code} to {code} this.uri = URI.create(uri.getScheme()+"://"+uri.getAuthority()); this.dfs = new DFSClient(uri, conf, statistics); {code} To be safe, I'd suggest that we don't change the order of the above code, and instead change the method in DFSClient.java to just grab the scheme and authority: {code} public Text getKmsSecretKey() { return new Text(DFS_KMS_PREFIX + namenodeUri.toString()); } {code} to {code} public Text getKeyProviderMapKey() { return new Text(DFS_KMS_PREFIX + nnUri.getScheme() + "://" + nnUri.getAuthority()); } {code} > Client should always ask namenode for kms provider path. > > > Key: HADOOP-14104 > URL: https://issues.apache.org/jira/browse/HADOOP-14104 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah > Attachments: HADOOP-14104-trunk.patch, HADOOP-14104-trunk-v1.patch, > HADOOP-14104-trunk-v2.patch, HADOOP-14104-trunk-v3.patch, > HADOOP-14104-trunk-v4.patch > > > According to current implementation of kms provider in client conf, there can > only be one kms. > In multi-cluster environment, if a client is reading encrypted data from > multiple clusters it will only get kms token for local cluster. 
> Not sure whether the target version is correct or not. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
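The suggested {{getKeyProviderMapKey()}} change above boils down to deriving the map key from only the scheme and authority of the NameNode URI, so the path component cannot leak into the key. A minimal standalone sketch of just that derivation; the {{DFS_KMS_PREFIX}} value and the class are hypothetical stand-ins, not the real HDFS constants:

```java
import java.net.URI;

// Sketch of the review suggestion: build the key-provider map key from just
// the scheme and authority of the NameNode URI. DFS_KMS_PREFIX here is a
// placeholder value, not the actual HDFS constant.
class KeyProviderMapKey {
    static final String DFS_KMS_PREFIX = "dfs-kms-";

    static String forNameNode(URI nnUri) {
        return DFS_KMS_PREFIX + nnUri.getScheme() + "://" + nnUri.getAuthority();
    }
}
```

With this form, {{hdfs://nn1:8020}} and {{hdfs://nn1:8020/some/path}} map to the same key.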
[jira] [Created] (HADOOP-14270) HADOOP 2.8 Release tar ball size should be smaller
Junping Du created HADOOP-14270: --- Summary: HADOOP 2.8 Release tar ball size should be smaller Key: HADOOP-14270 URL: https://issues.apache.org/jira/browse/HADOOP-14270 Project: Hadoop Common Issue Type: Bug Components: build Reporter: Junping Du Priority: Critical In the voting stage of 2.8.0, [~jojochuang] reported that the 2.8.0 tar ball is 410 MB while the previous release, 2.7.3, is just 205 MB. [~andrew.wang] did some investigation and found that most of the added size comes from src-html files included in the tar.gz, which are not necessary in a release tar ball. I tried to manually build the tar ball (using "mvn package -Pdist,native,docs,src -DskipTests -Dtar"), and the dist tar ball is still ~200 MB. The only difference is that since 2.8 we have been using a new dev tool, create-release, to create release bits, so we should fix it from there. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14255) S3A to delete unnecessary fake directory objects in mkdirs()
[ https://issues.apache.org/jira/browse/HADOOP-14255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954217#comment-15954217 ] Steve Loughran commented on HADOOP-14255: - This may be the cause of HADOOP-13230 ; Aaron's problem with trying to delete data under a dir tree. Maybe we should add a test for that too: mkdir a mkdir a/b assert a/b exists rm -rf a assert a/b doesn't exist > S3A to delete unnecessary fake directory objects in mkdirs() > > > Key: HADOOP-14255 > URL: https://issues.apache.org/jira/browse/HADOOP-14255 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14255.000.patch > > > In S3AFileSystem, as an optimization, we delete unnecessary fake directory > objects if that directory contains at least one (nested) file. That is done > in closing stream of newly created file. However, if the directory becomes > non-empty after we just create an empty subdirectory, we do not delete its > fake directory object though that fake directory object becomes "unnecessary". > So in {{S3AFileSystem::mkdirs()}}, we have a pending TODO: > {quote} > // TODO: If we have created an empty file at /foo/bar and we then call > // mkdirs for /foo/bar/baz/roo what happens to the empty file /foo/bar/? > private boolean innerMkdirs(Path p, FsPermission permission) > {quote} > This JIRA is to fix the TODO: provide consistent behavior for a fake > directory object between its nested subdirectory and nested file by deleting > it. > See related discussion in [HADOOP-14236]. Thanks [~ste...@apache.org] for > discussion. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
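The fake-directory behavior under discussion can be modeled with a toy object store: an empty directory is a marker object whose key ends in "/", and once a directory gains a descendant, its ancestors' markers become unnecessary. This sketch (all names hypothetical, not {{S3AFileSystem}} code) shows {{mkdirs()}} cleaning up the now-redundant parent markers, which is exactly the gap the TODO in {{innerMkdirs}} describes:

```java
import java.util.HashSet;
import java.util.Set;

// Toy model of the fake-directory optimization (not the real S3AFileSystem):
// an empty directory is represented by a marker object "path/". Once a
// directory gains a child, its ancestors' markers are unnecessary, so
// mkdirs() deletes them.
class FakeDirStore {
    final Set<String> objects = new HashSet<>();

    void mkdirs(String path) {
        objects.add(path + "/");            // marker for the new empty dir
        deleteUnnecessaryFakeDirs(path);    // ancestors are now non-empty
    }

    void deleteUnnecessaryFakeDirs(String path) {
        int i = path.lastIndexOf('/');
        while (i > 0) {
            path = path.substring(0, i);
            objects.remove(path + "/");     // parent has a child: drop marker
            i = path.lastIndexOf('/');
        }
    }
}
```

In this model, after {{mkdirs("/foo/bar")}} followed by {{mkdirs("/foo/bar/baz/roo")}}, the marker for {{/foo/bar/}} is gone, mirroring the consistent behavior the JIRA asks for.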
[jira] [Commented] (HADOOP-14269) Create module-info.java for each module
[ https://issues.apache.org/jira/browse/HADOOP-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954211#comment-15954211 ] Steve Loughran commented on HADOOP-14269: - I see. we should make sure that IDEs don't fail either; having a separate resource tree may be required for IDEA and eclipse > Create module-info.java for each module > --- > > Key: HADOOP-14269 > URL: https://issues.apache.org/jira/browse/HADOOP-14269 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka > > module-info.java is required for JDK9 Jigsaw feature. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line
[ https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13332: Comment: was deleted (was: is there any way to put the coprocessor downstream, in some module which doesn't go into hadoop- tree itself, but can instead have some POM that pulls in the hadoop and HBase dependencies?) > Remove jackson 1.9.13 and switch all jackson code to 2.x code line > -- > > Key: HADOOP-13332 > URL: https://issues.apache.org/jira/browse/HADOOP-13332 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.8.0 >Reporter: PJ Fanning >Assignee: Akira Ajisaka > Attachments: HADOOP-13332.00.patch, HADOOP-13332.01.patch, > HADOOP-13332.02.patch, HADOOP-13332.03.patch > > > This jackson 1.9 code line is no longer maintained. Upgrade > Most changes from jackson 1.9 to 2.x just involve changing the package name. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line
[ https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954209#comment-15954209 ] Steve Loughran commented on HADOOP-13332: - is there any way to put the coprocessor downstream, in some module which doesn't go into hadoop- tree itself, but can instead have some POM that pulls in the hadoop and HBase dependencies? > Remove jackson 1.9.13 and switch all jackson code to 2.x code line > -- > > Key: HADOOP-13332 > URL: https://issues.apache.org/jira/browse/HADOOP-13332 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.8.0 >Reporter: PJ Fanning >Assignee: Akira Ajisaka > Attachments: HADOOP-13332.00.patch, HADOOP-13332.01.patch, > HADOOP-13332.02.patch, HADOOP-13332.03.patch > > > This jackson 1.9 code line is no longer maintained. Upgrade > Most changes from jackson 1.9 to 2.x just involve changing the package name. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14259) Verify viewfs works with ADLS
[ https://issues.apache.org/jira/browse/HADOOP-14259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954161#comment-15954161 ] Vishwajeet Dusane commented on HADOOP-14259: [~jzhuge] - I am not clear on {{viewfs://}}, how is it different from HADOOP-14258 - mount table? > Verify viewfs works with ADLS > - > > Key: HADOOP-14259 > URL: https://issues.apache.org/jira/browse/HADOOP-14259 > Project: Hadoop Common > Issue Type: Test > Components: fs/adl, viewfs >Affects Versions: 2.8.0 >Reporter: John Zhuge >Priority: Minor > > Many clusters can share a single ADL store as the default filesystem. In > order to prevent directories of the same names but from different clusters to > collide, use viewfs over ADLS filesystem: > * Set {{fs.defaultFS}} to {{viewfs://clusterX}} for cluster X > * Set {{fs.defaultFS}} to {{viewfs://clusterY}} for cluster Y > * The viewfs client mount table should have entry clusterX and ClusterY > Tasks > * Verify all filesystem operations work as expected, especially rename and > concat > * Verify homedir entry works -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14174) Set default ADLS access token provider type to ClientCredential
[ https://issues.apache.org/jira/browse/HADOOP-14174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954159#comment-15954159 ] Lei (Eddy) Xu commented on HADOOP-14174: +1. It LGTM. It'd be nice to have the inputs from the rest of community. I will hold for 2 days before committing. > Set default ADLS access token provider type to ClientCredential > --- > > Key: HADOOP-14174 > URL: https://issues.apache.org/jira/browse/HADOOP-14174 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/adl >Affects Versions: 2.8.0 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14174.001.patch > > > Split off from a big patch in HADOOP-14038. > Switch {{fs.adl.oauth2.access.token.provider.type}} default from {{Custom}} > to {{ClientCredential}} and add ADLS properties to {{core-default.xml}}. > Fix {{AdlFileSystem#getAccessTokenProvider}} which implies the provider type > is {{Custom}}. > Fix several unit tests that set {{dfs.adls.oauth2.access.token.provider}} but > does not set {{dfs.adls.oauth2.access.token.provider.type}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14226) S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories of destination paths
[ https://issues.apache.org/jira/browse/HADOOP-14226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14226: --- Summary: S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories of destination paths (was: S3Guard: DynamoDBMetadata::move() should populate ancestor directories of destination paths) > S3Guard: DynamoDBMetadataStore::move() should populate ancestor directories > of destination paths > > > Key: HADOOP-14226 > URL: https://issues.apache.org/jira/browse/HADOOP-14226 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14226-HADOOP-13345.000.patch, > HADOOP-14226-HADOOP-13345.001.patch > > > UPDATE: Instead of changing the test, in this JIRA we make sure for each path > to put, DDBMS::move() has records of each directory up to the root. See > [~fabbri]'s comment in this JIRA for more detail. > After running {{ITestDynamoDBMetadataStoreScale}}, the test data is not > cleaned up. There is a call to {{clearMetadataStore(ms, count);}} in the > finally clause though. The reason is that, the internally called method > {{DynamoDBMetadataStore::deleteSubtree()}} is assuming there should be an > item for the parent dest path: > {code} > parent=/fake-bucket, child=moved-here, is_dir=true > {code} > In DynamoDBMetadataStore implementation, we assume that _if a path exists, > all its ancestors will also exist in the table_. We need to pre-create dest > path to maintain this invariant so that test data can be cleaned up > successfully. > I think there may be other tests with the same problem. Let's > identify/address them separately. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line
[ https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15954013#comment-15954013 ] Vrushali C commented on HADOOP-13332: - Thanks [~busbey]. bq. It would be helpful to the hbase project if you could detail why you need the hbase-server jar in the first place, presuming it isn't for some kind of mapreduce integration. We have a coprocessor on one of the tables that extends BaseRegionObserver. So packages like org.apache.hadoop.hbase.coprocessor and org.apache.hadoop.hbase.regionserver have to be imported. We would like to add this utility coprocessor to HBase itself so that timeline service v2 can use it as a client (HBASE-17273). > Remove jackson 1.9.13 and switch all jackson code to 2.x code line > -- > > Key: HADOOP-13332 > URL: https://issues.apache.org/jira/browse/HADOOP-13332 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.8.0 >Reporter: PJ Fanning >Assignee: Akira Ajisaka > Attachments: HADOOP-13332.00.patch, HADOOP-13332.01.patch, > HADOOP-13332.02.patch, HADOOP-13332.03.patch > > > This jackson 1.9 code line is no longer maintained. Upgrade > Most changes from jackson 1.9 to 2.x just involve changing the package name. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14268) Fix markdown itemization in hadoop-aws documents
[ https://issues.apache.org/jira/browse/HADOOP-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953960#comment-15953960 ] Hudson commented on HADOOP-14268: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11518 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/11518/]) HADOOP-14268. Fix markdown itemization in hadoop-aws documents. (liuml07: rev 5faa949b782be48ef400d2eb1695f420455de764) * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/testing.md * (edit) hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md > Fix markdown itemization in hadoop-aws documents > > > Key: HADOOP-14268 > URL: https://issues.apache.org/jira/browse/HADOOP-14268 > Project: Hadoop Common > Issue Type: Bug > Components: documentation, fs/s3 >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Fix For: 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14268.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14268) Fix markdown itemization in hadoop-aws documents
[ https://issues.apache.org/jira/browse/HADOOP-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14268: --- Component/s: fs/s3 > Fix markdown itemization in hadoop-aws documents > > > Key: HADOOP-14268 > URL: https://issues.apache.org/jira/browse/HADOOP-14268 > Project: Hadoop Common > Issue Type: Bug > Components: documentation, fs/s3 >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Fix For: 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14268.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14268) Fix markdown itemization in hadoop-aws documents
[ https://issues.apache.org/jira/browse/HADOOP-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-14268: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha3 2.8.1 Status: Resolved (was: Patch Available) Committed to {{trunk}} and {{branch-2.8}} branches. Thanks for your contribution [~ajisakaa]; thanks for your review [~ste...@apache.org]. > Fix markdown itemization in hadoop-aws documents > > > Key: HADOOP-14268 > URL: https://issues.apache.org/jira/browse/HADOOP-14268 > Project: Hadoop Common > Issue Type: Bug > Components: documentation, fs/s3 >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Fix For: 2.8.1, 3.0.0-alpha3 > > Attachments: HADOOP-14268.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14266) S3Guard: S3AFileSystem::listFiles() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-14266?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953827#comment-15953827 ] Steve Loughran commented on HADOOP-14266: - It may be fastest to simply revert to an incremental tree walk first. What I don't want to have happen is people being told not to use this call, because it delivers such a great speedup for raw S3; you can't even attempt a {{listFiles("s3a://landsat-pds/", true)}} in Hadoop 2.7 without the code appearing to hang, there are too many objects in that bucket for it to handle. > S3Guard: S3AFileSystem::listFiles() to employ MetadataStore > --- > > Key: HADOOP-14266 > URL: https://issues.apache.org/jira/browse/HADOOP-14266 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: HADOOP-13345 >Reporter: Mingliang Liu > > Similar to [HADOOP-13926], this is to track the effort of employing > MetadataStore in {{S3AFileSystem::listFiles()}}. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13926) S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-13926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953818#comment-15953818 ] Steve Loughran commented on HADOOP-13926: - LGTM, only a couple of nits h3. {{Listing}} * L225. could we have an error message which is less likely to be mistaken for the user being told off? "Null status list" would work. h3. {{S3AFileSystem}} Line 2503: can we have java 7 code; maybe a new Acceptor in the Listing class to substitute for a (nice) lambda expression? I think we are all agreed that this is an interim feature for initial previews, as a production one will need to do the DDB queries as we go along. At some point then, all the params which are passed down to listing, or at least the array, are going to change to something else, such as an iterator. I'm wondering whether this can be adopted today in {{ProvidedLocatedFileStatusIterator}} just by taking an Iterator from the outset. I was thinking we could also do filtering internally by way of Guava's {{Iterator}} helpers, but that won't be the case, will it: they aren't for RemoteIterator. Conclusion: probably over-complex right now, this isn't a public API. We can do the more elegant solution once we know what it is we are trying to do. > S3Guard: S3AFileSystem::listLocatedStatus() to employ MetadataStore > --- > > Key: HADOOP-13926 > URL: https://issues.apache.org/jira/browse/HADOOP-13926 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Assignee: Steve Loughran > Attachments: HADOOP-13926-HADOOP-13345.001.patch, > HADOOP-13926-HADOOP-13345.002.patch, HADOOP-13926-HADOOP-13345.003.patch, > HADOOP-13926.wip.proto.branch-13345.1.patch > > > Need to check if {{listLocatedStatus}} can make use of metastore's > listChildren feature. 
-- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
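The Java 7-friendly alternative suggested above — an Acceptor interface implemented with an anonymous inner class where Java 8 code would pass a lambda — might look roughly like this. The names and the plain-List shape are a simplified stand-in, not the actual Listing/S3AFileSystem code (which filters a RemoteIterator of FileStatus entries):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for the "Acceptor" idea: a single-method filter
// interface that Java 7 callers implement with an anonymous inner class.
public class AcceptorDemo {

    // Hypothetical filter contract; the real one would accept FileStatus.
    public interface Acceptor<T> {
        boolean accept(T item);
    }

    public static <T> List<T> filter(List<T> input, Acceptor<T> acceptor) {
        List<T> out = new ArrayList<>();
        for (T item : input) {
            if (acceptor.accept(item)) {
                out.add(item);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = Arrays.asList("a/", "a/file1", "b/", "b/file2");
        // Anonymous class standing in for the lambda: key -> !key.endsWith("/")
        List<String> files = filter(keys, new Acceptor<String>() {
            @Override
            public boolean accept(String key) {
                return !key.endsWith("/");
            }
        });
        System.out.println(files); // [a/file1, b/file2]
    }
}
```

The same shape wraps a RemoteIterator with look-ahead instead of a List, which is why the comment suggests putting the Acceptor in the Listing class itself.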
[jira] [Commented] (HADOOP-14255) S3A to delete unnecessary fake directory objects in mkdirs()
[ https://issues.apache.org/jira/browse/HADOOP-14255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953779#comment-15953779 ] Steve Loughran commented on HADOOP-14255: - could we have the test actually create the whole list of children, rather than {{mkdirs(nested)}}? as today a mkdirs(nested) won't create the parents, but a mkdir(a), (a/b), (a/b/c) will create lots of those ancestors > S3A to delete unnecessary fake directory objects in mkdirs() > > > Key: HADOOP-14255 > URL: https://issues.apache.org/jira/browse/HADOOP-14255 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-14255.000.patch > > > In S3AFileSystem, as an optimization, we delete unnecessary fake directory > objects if that directory contains at least one (nested) file. That is done > in closing stream of newly created file. However, if the directory becomes > non-empty after we just create an empty subdirectory, we do not delete its > fake directory object though that fake directory object becomes "unnecessary". > So in {{S3AFileSystem::mkdirs()}}, we have a pending TODO: > {quote} > // TODO: If we have created an empty file at /foo/bar and we then call > // mkdirs for /foo/bar/baz/roo what happens to the empty file /foo/bar/? > private boolean innerMkdirs(Path p, FsPermission permission) > {quote} > This JIRA is to fix the TODO: provide consistent behavior for a fake > directory object between its nested subdirectory and nested file by deleting > it. > See related discussion in [HADOOP-14236]. Thanks [~ste...@apache.org] for > discussion. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
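The test shape being asked for — creating every ancestor with its own mkdir call rather than one recursive mkdirs, so each level gets an explicit directory entry — can be sketched with plain java.nio. This is an illustration of the idea only, not the actual S3A contract test:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: create a/b/c one level at a time -- mkdir(a), mkdir(a/b),
// mkdir(a/b/c) -- which is the case the comment wants the test to exercise,
// as opposed to a single recursive mkdirs(a/b/c).
public class MkdirEachLevel {

    public static void mkdirEachLevel(Path base, Path nested) {
        try {
            for (int i = 1; i <= nested.getNameCount(); i++) {
                // subpath(0, i) yields "a", then "a/b", then "a/b/c"
                Files.createDirectory(base.resolve(nested.subpath(0, i)));
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        try {
            Path base = Files.createTempDirectory("mkdirs-test");
            mkdirEachLevel(base, Paths.get("a", "b", "c"));
            System.out.println(Files.isDirectory(base.resolve("a/b/c"))); // true
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Against S3A, each explicit createDirectory would leave its own fake directory object behind, which is exactly the population of "unnecessary" markers the patch is meant to clean up.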
[jira] [Commented] (HADOOP-11621) s3a doesn't consider blobs with trailing / and content-length >0 as directories
[ https://issues.apache.org/jira/browse/HADOOP-11621?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953655#comment-15953655 ] Steve Loughran commented on HADOOP-11621: - The encryption patch of HADOOP-13887 includes a fix for this, as when you turn encryption on, even 0 byte files can gain some entries. We are still going to delete anything with a trailing / without caring whether or not it's a file, so we may want to consider adding a warning note in the release notes there, maybe even in this one & mark it as an incompatible change. > s3a doesn't consider blobs with trailing / and content-length >0 as > directories > --- > > Key: HADOOP-11621 > URL: https://issues.apache.org/jira/browse/HADOOP-11621 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 2.7.0 >Reporter: Denis Jannot > > When creating a directory using the AWS Management Console, the > content-length is set to 0 and s3a works fine. > When creating a directory using other tools, like S3Browse, the > content-length is set to 1 and s3a doesn't work: > S3AFileSystem: Found file (with /): real file? should not happen: dir1 -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
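The bug reduces to how a directory marker is recognized. A simplified stand-in (not the actual S3AFileSystem logic) contrasting the strict zero-byte check with the relaxed trailing-slash check discussed above:

```java
// Simplified stand-in for the bug: s3a only treated zero-byte objects whose
// key ends in "/" as directory markers, so a marker written by a tool that
// gives it content-length > 0 was misclassified as a file.
public class DirMarker {

    // Strict check (the reported behaviour): name and size must both match.
    public static boolean isDirMarkerStrict(String key, long contentLength) {
        return key.endsWith("/") && contentLength == 0;
    }

    // Relaxed check (the direction of the fix): any trailing-slash object is
    // a marker, regardless of its content length.
    public static boolean isDirMarkerRelaxed(String key) {
        return key.endsWith("/");
    }

    public static void main(String[] args) {
        System.out.println(isDirMarkerStrict("dir1/", 1));  // false: the misclassified case
        System.out.println(isDirMarkerRelaxed("dir1/"));    // true
    }
}
```

The relaxed check is also what makes the deletion behaviour noted in the comment an incompatible change: a genuine file whose key happens to end in "/" would be treated, and deleted, as a marker.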
[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator
[ https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953599#comment-15953599 ] Elek, Marton commented on HADOOP-14163: --- I uploaded the final proposed version. For the sake of simplicity I uploaded it as a ZIP file instead of a patch. The whole author directory (https://svn.apache.org/repos/asf/hadoop/common/site/main/author) should be replaced with the files from the zip. I suggest the following order: 1. git svn clone https://svn.apache.org/repos/asf/hadoop/common/site/main 2. Delete the content of the author directory and extract the content of the HADOOP-14163-001.zip. (Commit) {code} cd author rm -rf * wget https://issues.apache.org/jira/secure/attachment/12861716/HADOOP-14163-001.zip unzip HADOOP-14163-001.zip mv hadoop-site-proposal-master/* ./ rm -rf hadoop-site-proposal-master {code} 3. Delete the old site and generate the new site. (Don't use the attached rendered site; it's an older version. I didn't upload a new rendered site, because I think it should work for at least one other committer, not just for me ;-) {code} cd publish \ls -1 | grep -v docs | xargs -n1 rm -rf cd ../author hugo --destination ../publish {code} (Hugo can be installed with brew/apt-get or by downloading the single binary) 4. git push the improved version to a new asf-site branch of the apache git repository 5. File an INFRA issue according to https://blogs.apache.org/infra/entry/git_based_websites_available > Refactor existing hadoop site to use more usable static website generator > - > > Key: HADOOP-14163 > URL: https://issues.apache.org/jira/browse/HADOOP-14163 > Project: Hadoop Common > Issue Type: Improvement > Components: site >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HADOOP-14163-001.zip, hadoop-site.tar.gz, > hadop-site-rendered.tar.gz > > > From the dev mailing list: > "Publishing can be attacked via a mix of scripting and revamping the darned > website. 
Forrest is pretty bad compared to the newer static site generators > out there (e.g. need to write XML instead of markdown, it's hard to review a > staging site because of all the absolute links, hard to customize, did I > mention XML?), and the look and feel of the site is from the 00s. We don't > actually have that much site content, so it should be possible to migrate to > a new system." > This issue is to find a solution to migrate the old site to a new modern static > site generator using a more contemporary theme. > Goals: > * existing links should work (or at least be redirected) > * It should be easy to add more content required by a release automatically > (most probably by creating separate markdown files) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator
[ https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HADOOP-14163: -- Status: Patch Available (was: Open) > Refactor existing hadoop site to use more usable static website generator > - > > Key: HADOOP-14163 > URL: https://issues.apache.org/jira/browse/HADOOP-14163 > Project: Hadoop Common > Issue Type: Improvement > Components: site >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HADOOP-14163-001.zip, hadoop-site.tar.gz, > hadop-site-rendered.tar.gz > > > From the dev mailing list: > "Publishing can be attacked via a mix of scripting and revamping the darned > website. Forrest is pretty bad compared to the newer static site generators > out there (e.g. need to write XML instead of markdown, it's hard to review a > staging site because of all the absolute links, hard to customize, did I > mention XML?), and the look and feel of the site is from the 00s. We don't > actually have that much site content, so it should be possible to migrate to > a new system." > This issue is to find a solution to migrate the old site to a new modern static > site generator using a more contemporary theme. > Goals: > * existing links should work (or at least be redirected) > * It should be easy to add more content required by a release automatically > (most probably by creating separate markdown files) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator
[ https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton updated HADOOP-14163: -- Attachment: HADOOP-14163-001.zip > Refactor existing hadoop site to use more usable static website generator > - > > Key: HADOOP-14163 > URL: https://issues.apache.org/jira/browse/HADOOP-14163 > Project: Hadoop Common > Issue Type: Improvement > Components: site >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: HADOOP-14163-001.zip, hadoop-site.tar.gz, > hadop-site-rendered.tar.gz > > > From the dev mailing list: > "Publishing can be attacked via a mix of scripting and revamping the darned > website. Forrest is pretty bad compared to the newer static site generators > out there (e.g. need to write XML instead of markdown, it's hard to review a > staging site because of all the absolute links, hard to customize, did I > mention XML?), and the look and feel of the site is from the 00s. We don't > actually have that much site content, so it should be possible to migrate to > a new system." > This issue is to find a solution to migrate the old site to a new modern static > site generator using a more contemporary theme. > Goals: > * existing links should work (or at least be redirected) > * It should be easy to add more content required by a release automatically > (most probably by creating separate markdown files) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14163) Refactor existing hadoop site to use more usable static website generator
[ https://issues.apache.org/jira/browse/HADOOP-14163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953566#comment-15953566 ] Elek, Marton commented on HADOOP-14163: --- [~raviprak] No. It's about hadoop.apache.org and not the maven-generated, version-specific documentation sites. But thanks for the pointer, I will check that issue as well... > Refactor existing hadoop site to use more usable static website generator > - > > Key: HADOOP-14163 > URL: https://issues.apache.org/jira/browse/HADOOP-14163 > Project: Hadoop Common > Issue Type: Improvement > Components: site >Reporter: Elek, Marton >Assignee: Elek, Marton > Attachments: hadoop-site.tar.gz, hadop-site-rendered.tar.gz > > > From the dev mailing list: > "Publishing can be attacked via a mix of scripting and revamping the darned > website. Forrest is pretty bad compared to the newer static site generators > out there (e.g. need to write XML instead of markdown, it's hard to review a > staging site because of all the absolute links, hard to customize, did I > mention XML?), and the look and feel of the site is from the 00s. We don't > actually have that much site content, so it should be possible to migrate to > a new system." > This issue is to find a solution to migrate the old site to a new modern static > site generator using a more contemporary theme. > Goals: > * existing links should work (or at least be redirected) > * It should be easy to add more content required by a release automatically > (most probably by creating separate markdown files) -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953448#comment-15953448 ] Hadoop QA commented on HADOOP-13665: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 54s{color} 
| {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 48s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 40s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-13665 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861701/HADOOP-13665.09.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 6ef1ee383c10 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 845529b | | Default Java | 1.8.0_121 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12013/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12013/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12013/console | | Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Erasure Coding codec should support fallback coder > -- > > Key: HADOOP-13665 > URL: https://issues.apache.org/jira/browse/HADOOP-13665 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Wei-Chiu Chuang >Assignee: Kai Sasaki >
[jira] [Updated] (HADOOP-13665) Erasure Coding codec should support fallback coder
[ https://issues.apache.org/jira/browse/HADOOP-13665?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13665: Attachment: HADOOP-13665.09.patch > Erasure Coding codec should support fallback coder > -- > > Key: HADOOP-13665 > URL: https://issues.apache.org/jira/browse/HADOOP-13665 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Wei-Chiu Chuang >Assignee: Kai Sasaki >Priority: Blocker > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13665.01.patch, HADOOP-13665.02.patch, > HADOOP-13665.03.patch, HADOOP-13665.04.patch, HADOOP-13665.05.patch, > HADOOP-13665.06.patch, HADOOP-13665.07.patch, HADOOP-13665.08.patch, > HADOOP-13665.09.patch > > > The current EC codec supports a single coder only (by default pure Java > implementation). If the native coder is specified but is unavailable, it > should fallback to pure Java implementation. > One possible solution is to follow the convention of existing Hadoop native > codec, such as transport encryption (see {{CryptoCodec.java}}). It supports > fallback by specifying two or multiple coders as the value of property, and > loads coders in order. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11875) [JDK9] Renaming _ as a one-character identifier to another identifier
[ https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953332#comment-15953332 ] Hadoop QA commented on HADOOP-11875: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 54s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 46s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 3m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 56s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 13m 56s{color} | {color:red} root generated 25 new + 787 unchanged - 0 fixed = 812 total (was 787) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 30s{color} | {color:orange} root: The patch generated 747 new + 1240 unchanged - 200 fixed = 1987 total (was 1440) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 32s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 48s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 76 new + 4434 unchanged - 141 fixed = 4510 total (was 4575) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 0 new + 29 unchanged - 132 fixed = 29 total (was 161) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 0 new + 135 unchanged - 96 fixed = 135 total (was 231) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-web-proxy generated 0 new + 9 unchanged - 16 fixed = 9 total (was 25) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice generated 0 new + 163 unchanged - 25 fixed = 163 total (was 188) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 382 unchanged - 498 fixed = 382 total (was 880) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m
[jira] [Commented] (HADOOP-11875) [JDK9] Renaming _ as a one-character identifier to another identifier
[ https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953286#comment-15953286 ] Hadoop QA commented on HADOOP-11875: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 44s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 3m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 57s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 14m 57s{color} | {color:red} root generated 25 new + 787 unchanged - 0 fixed = 812 total (was 787) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 26s{color} | {color:orange} root: The patch generated 747 new + 1241 unchanged - 200 fixed = 1988 total (was 1441) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 47s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 76 new + 4434 unchanged - 141 fixed = 4510 total (was 4575) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 0 new + 29 unchanged - 132 fixed = 29 total (was 161) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 0 new + 135 unchanged - 96 fixed = 135 total (was 231) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} 
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-web-proxy generated 0 new + 9 unchanged - 16 fixed = 9 total (was 25) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice generated 0 new + 163 unchanged - 25 fixed = 163 total (was 188) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 382 unchanged - 498 fixed = 382 total (was 880) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-sharedcachemanager generated 0 new + 3 unchanged -
[jira] [Commented] (HADOOP-14269) Create module-info.java for each module
[ https://issues.apache.org/jira/browse/HADOOP-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953192#comment-15953192 ] Akira Ajisaka commented on HADOOP-14269: The Java 8 compiler fails when it hits module-info.java. To avoid this problem, I'm thinking we can add a java8 profile that skips this file.
{code:title=pom.xml}
<profile>
  <id>java8</id>
  <activation>
    <jdk>1.8</jdk>
  </activation>
  <build>
    <plugins>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <excludes>
            <exclude>**/module-info.java</exclude>
          </excludes>
        </configuration>
      </plugin>
    </plugins>
  </build>
</profile>
{code}
> Create module-info.java for each module > --- > > Key: HADOOP-14269 > URL: https://issues.apache.org/jira/browse/HADOOP-14269 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka > > module-info.java is required for JDK9 Jigsaw feature. -- This message was sent by Atlassian JIRA (v6.3.15#6346) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14269) Create module-info.java for each module
[ https://issues.apache.org/jira/browse/HADOOP-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953189#comment-15953189 ] Steve Loughran commented on HADOOP-14269: - Maven seems [to cover this |https://maven.apache.org/plugins/maven-compiler-plugin/examples/module-info.html]. We can compile the main code as Java 8, while adding in the module-info classes with the Java 9 javac. This will let us add modularity data to a build which still runs in a Java 8 JVM. Nice. > Create module-info.java for each module > --- > > Key: HADOOP-14269 > URL: https://issues.apache.org/jira/browse/HADOOP-14269 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka > > module-info.java is required for JDK9 Jigsaw feature.
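The two-phase compile suggested above boils down to two maven-compiler-plugin executions. A rough sketch, adapted from the linked example page rather than from any attached patch (execution ids and version values here are illustrative):
{code:title=pom.xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <executions>
    <!-- compile module-info.java with a Java 9 compiler -->
    <execution>
      <id>default-compile</id>
      <configuration>
        <release>9</release>
      </configuration>
    </execution>
    <!-- recompile the main sources at the Java 8 level,
         leaving the module-info.class produced above in place -->
    <execution>
      <id>base-compile</id>
      <goals>
        <goal>compile</goal>
      </goals>
      <configuration>
        <excludes>
          <exclude>module-info.java</exclude>
        </excludes>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}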
[jira] [Commented] (HADOOP-14269) Create module-info.java for each module
[ https://issues.apache.org/jira/browse/HADOOP-14269?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953187#comment-15953187 ] Steve Loughran commented on HADOOP-14269: - What does this do for the java7/8 compilers? Do they fail when they hit the file, or explicitly know to skip it? If they fail, we will have to create a separate src/modules/ source tree just for the module-info. > Create module-info.java for each module > --- > > Key: HADOOP-14269 > URL: https://issues.apache.org/jira/browse/HADOOP-14269 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Akira Ajisaka > > module-info.java is required for JDK9 Jigsaw feature.
[jira] [Commented] (HADOOP-14268) Fix markdown itemization in hadoop-aws documents
[ https://issues.apache.org/jira/browse/HADOOP-14268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953186#comment-15953186 ] Steve Loughran commented on HADOOP-14268: - +1 > Fix markdown itemization in hadoop-aws documents > > > Key: HADOOP-14268 > URL: https://issues.apache.org/jira/browse/HADOOP-14268 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > > Attachments: HADOOP-14268.01.patch > >
[jira] [Commented] (HADOOP-11875) [JDK9] Renaming _ as a one-character identifier to another identifier
[ https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953180#comment-15953180 ] Hadoop QA commented on HADOOP-11875: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 7m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 11s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | 
{color:red} 0m 19s{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 12m 9s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 12m 9s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 28s{color} | {color:orange} root: The patch generated 747 new + 1241 unchanged - 200 fixed = 1988 total (was 1441) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 26s{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 49s{color} | {color:red} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-common generated 76 new + 4434 unchanged - 141 fixed = 4510 total (was 4575) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-common generated 0 new + 29 unchanged - 132 fixed = 29 total (was 161) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager generated 0 new + 135 unchanged - 96 fixed = 135 total (was 231) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-web-proxy generated 0 new + 9 unchanged - 16 fixed = 9 total (was 25) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-applicationhistoryservice generated 0 new + 163 unchanged - 25 fixed = 163 total (was 188) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager generated 0 new + 382 unchanged - 498 fixed = 382 total (was 880) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-sharedcachemanager
[jira] [Updated] (HADOOP-11875) [JDK9] Renaming _ as a one-character identifier to another identifier
[ https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-11875: --- Attachment: HADOOP-11875.07.patch Added a profile to skip compiling the old hamlet when Java version is 9-ea. > [JDK9] Renaming _ as a one-character identifier to another identifier > - > > Key: HADOOP-11875 > URL: https://issues.apache.org/jira/browse/HADOOP-11875 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Tsuyoshi Ozawa >Assignee: Akira Ajisaka > Labels: webapp > Attachments: build_error_dump.txt, HADOOP-11875.01.patch, > HADOOP-11875.02.patch, HADOOP-11875.03.patch, HADOOP-11875.04.patch, > HADOOP-11875.05.patch, HADOOP-11875.06.patch, HADOOP-11875.07.patch > > > From JDK9, _ as a one-character identifier is banned. Currently Web UI uses > it. We should fix them to compile with JDK9. > https://bugs.openjdk.java.net/browse/JDK-8061549
[jira] [Updated] (HADOOP-11875) [JDK9] Renaming _ as a one-character identifier to another identifier
[ https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-11875: --- Attachment: HADOOP-11875.06.patch MAPREDUCE-6836 added '\_' in ConfBlock.java. Updated the patch to replace the '\_' with '\_\_'. > [JDK9] Renaming _ as a one-character identifier to another identifier > - > > Key: HADOOP-11875 > URL: https://issues.apache.org/jira/browse/HADOOP-11875 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Tsuyoshi Ozawa >Assignee: Akira Ajisaka > Labels: webapp > Attachments: build_error_dump.txt, HADOOP-11875.01.patch, > HADOOP-11875.02.patch, HADOOP-11875.03.patch, HADOOP-11875.04.patch, > HADOOP-11875.05.patch, HADOOP-11875.06.patch > > > From JDK9, _ as a one-character identifier is banned. Currently Web UI uses > it. We should fix them to compile with JDK9. > https://bugs.openjdk.java.net/browse/JDK-8061549
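The JDK change behind this rename can be shown in isolation. A minimal sketch (class and method names below are mine, not from the patch):

```java
// Illustration of JDK-8061549: since Java 9, '_' alone is no longer a
// legal identifier, so Hamlet-style code must rename it, e.g. '_' -> '__'.
public class UnderscoreDemo {
    static String endTag() {
        // Before Java 9 this local variable could legally be named '_';
        // javac 9 rejects that, hence the two-underscore rename.
        String __ = "end-tag";
        return __;
    }

    public static void main(String[] args) {
        System.out.println(endTag()); // prints "end-tag"
    }
}
```

A two-underscore identifier remains legal in all JDK versions, which is why the patch picks '\_\_' rather than a longer name: callers of the generated Hamlet API change minimally.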
[jira] [Updated] (HADOOP-11875) [JDK9] Renaming _ as a one-character identifier to another identifier
[ https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-11875: --- Issue Type: Sub-task (was: Bug) Parent: HADOOP-11123 > [JDK9] Renaming _ as a one-character identifier to another identifier > - > > Key: HADOOP-11875 > URL: https://issues.apache.org/jira/browse/HADOOP-11875 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Tsuyoshi Ozawa >Assignee: Akira Ajisaka > Labels: webapp > Attachments: build_error_dump.txt, HADOOP-11875.01.patch, > HADOOP-11875.02.patch, HADOOP-11875.03.patch, HADOOP-11875.04.patch, > HADOOP-11875.05.patch > > > From JDK9, _ as a one-character identifier is banned. Currently Web UI uses > it. We should fix them to compile with JDK9. > https://bugs.openjdk.java.net/browse/JDK-8061549
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953083#comment-15953083 ] Akira Ajisaka commented on HADOOP-14178: FYI: This blocks HADOOP-11123 since Mockito 1.x does not support Java 9. > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka >Priority: Minor > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That's not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes.
[jira] [Commented] (HADOOP-14104) Client should always ask namenode for kms provider path.
[ https://issues.apache.org/jira/browse/HADOOP-14104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953078#comment-15953078 ] Hadoop QA commented on HADOOP-14104: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 33s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | 
{color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 13m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 5s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 3s{color} | {color:orange} root: The patch generated 9 new + 475 unchanged - 2 fixed = 484 total (was 477) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 57s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 4s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}179m 11s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.TestKDiag | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HADOOP-14104 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12861654/HADOOP-14104-trunk-v4.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml cc | | uname | Linux 49d97b6d6637 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git r
[jira] [Commented] (HADOOP-13915) [JDK9] Compilation failure in hadoop-auth-examples module
[ https://issues.apache.org/jira/browse/HADOOP-13915?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15953069#comment-15953069 ] Akira Ajisaka commented on HADOOP-13915: Another workaround: {{$ MAVEN_OPTS="--permit-illegal-access" mvn install -DskipTests}} http://jigsaw-dev.1059479.n5.nabble.com/Better-tools-for-adjusting-to-strong-encapsulation-td5715904.html > [JDK9] Compilation failure in hadoop-auth-examples module > - > > Key: HADOOP-13915 > URL: https://issues.apache.org/jira/browse/HADOOP-13915 > Project: Hadoop Common > Issue Type: Sub-task > Components: build > Environment: OpenJDK 9-ea+149 >Reporter: Akira Ajisaka > > {{mvn install -DskipTests}} fails. > {noformat} > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-war-plugin:2.4:war (default-war) on project > hadoop-auth-examples: Execution default-war of goal > org.apache.maven.plugins:maven-war-plugin:2.4:war failed: Unable to load the > mojo 'war' in the plugin 'org.apache.maven.plugins:maven-war-plugin:2.4' due > to an API incompatibility: > org.codehaus.plexus.component.repository.exception.ComponentLookupException: > null > [ERROR] - > [ERROR] realm =plugin>org.apache.maven.plugins:maven-war-plugin:2.4 > [ERROR] strategy = org.codehaus.plexus.classworlds.strategy.SelfFirstStrategy > [ERROR] urls[0] = > file:/home/centos/.m2/repository/org/apache/maven/plugins/maven-war-plugin/2.4/maven-war-plugin-2.4.jar > [ERROR] urls[1] = > file:/home/centos/.m2/repository/org/apache/maven/reporting/maven-reporting-api/2.0.6/maven-reporting-api-2.0.6.jar > [ERROR] urls[2] = > file:/home/centos/.m2/repository/org/apache/maven/doxia/doxia-sink-api/1.0-alpha-7/doxia-sink-api-1.0-alpha-7.jar > [ERROR] urls[3] = > file:/home/centos/.m2/repository/commons-cli/commons-cli/1.0/commons-cli-1.0.jar > [ERROR] urls[4] = > file:/home/centos/.m2/repository/org/codehaus/plexus/plexus-interactivity-api/1.0-alpha-4/plexus-interactivity-api-1.0-alpha-4.jar > [ERROR] 
urls[5] = > file:/home/centos/.m2/repository/org/apache/maven/maven-archiver/2.5/maven-archiver-2.5.jar > [ERROR] urls[6] = > file:/home/centos/.m2/repository/org/codehaus/plexus/plexus-io/2.0.7/plexus-io-2.0.7.jar > [ERROR] urls[7] = > file:/home/centos/.m2/repository/commons-io/commons-io/2.2/commons-io-2.2.jar > [ERROR] urls[8] = > file:/home/centos/.m2/repository/org/codehaus/plexus/plexus-archiver/2.4.1/plexus-archiver-2.4.1.jar > [ERROR] urls[9] = > file:/home/centos/.m2/repository/org/apache/commons/commons-compress/1.5/commons-compress-1.5.jar > [ERROR] urls[10] = > file:/home/centos/.m2/repository/org/tukaani/xz/1.2/xz-1.2.jar > [ERROR] urls[11] = > file:/home/centos/.m2/repository/org/codehaus/plexus/plexus-interpolation/1.15/plexus-interpolation-1.15.jar > [ERROR] urls[12] = > file:/home/centos/.m2/repository/junit/junit/3.8.1/junit-3.8.1.jar > [ERROR] urls[13] = > file:/home/centos/.m2/repository/com/thoughtworks/xstream/xstream/1.4.2/xstream-1.4.2.jar > [ERROR] urls[14] = > file:/home/centos/.m2/repository/xmlpull/xmlpull/1.1.3.1/xmlpull-1.1.3.1.jar > [ERROR] urls[15] = > file:/home/centos/.m2/repository/xpp3/xpp3_min/1.1.4c/xpp3_min-1.1.4c.jar > [ERROR] urls[16] = > file:/home/centos/.m2/repository/org/codehaus/plexus/plexus-utils/3.0.10/plexus-utils-3.0.10.jar > [ERROR] urls[17] = > file:/home/centos/.m2/repository/org/apache/maven/shared/maven-filtering/1.1/maven-filtering-1.1.jar > [ERROR] urls[18] = > file:/home/centos/.m2/repository/org/sonatype/plexus/plexus-build-api/0.0.4/plexus-build-api-0.0.4.jar > [ERROR] Number of foreign imports: 1 > [ERROR] import: Entry[import from realm > ClassRealm[project>org.apache.hadoop:hadoop-main:3.0.0-alpha2-SNAPSHOT, > parent: ClassRealm[maven.api, parent: null]]] > [ERROR] > [ERROR] -: > ExceptionInInitializerError: Unable to make field private final > java.util.Comparator java.util.TreeMap.comparator accessible: module > java.base does not "opens java.util" to unnamed module @8ad2460 > {noformat}