[jira] [Commented] (HADOOP-13861) Spelling errors in logging and exceptions for code
[ https://issues.apache.org/jira/browse/HADOOP-13861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724706#comment-15724706 ] Hudson commented on HADOOP-13861: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10947 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10947/]) HADOOP-13861. Spelling errors in logging and exceptions for code. (wang: rev 7b988e88992528a0cac2ca8893652c5d4a90c6b9) * (edit) hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/ZKSignerSecretProvider.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/LocalDirAllocator.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/KDiag.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/LocalJavaKeyStoreProvider.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/Utils.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/erasurecode/rawcoder/util/GF256.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/CommandFormat.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/file/tfile/TFile.java > Spelling errors in logging and exceptions for code > -- > > Key: HADOOP-13861 > URL: https://issues.apache.org/jira/browse/HADOOP-13861 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs, io, security >Reporter: Grant Sohn >Assignee: Grant Sohn > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13861.1.patch > > > Found a set of spelling errors in the logging and exception messages. 
> Examples: > Bufer -> Buffer > princial -> principal > existance -> existence -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13861) Spelling errors in logging and exceptions for code
[ https://issues.apache.org/jira/browse/HADOOP-13861?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13861: - Resolution: Fixed Fix Version/s: 3.0.0-alpha2 2.8.0 Status: Resolved (was: Patch Available) +1 LGTM, thanks for the contribution Grant! I've committed this to trunk, branch-2, branch-2.8 after doing some small fixups for branch-2. > Spelling errors in logging and exceptions for code > -- > > Key: HADOOP-13861 > URL: https://issues.apache.org/jira/browse/HADOOP-13861 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs, io, security >Reporter: Grant Sohn >Assignee: Grant Sohn > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13861.1.patch > > > Found a set of spelling errors in the logging and exception messages. > Examples: > Bufer -> Buffer > princial -> principal > existance -> existence -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-13586) Hadoop 3.0 build broken on windows
[ https://issues.apache.org/jira/browse/HADOOP-13586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang resolved HADOOP-13586. -- Resolution: Cannot Reproduce I'm going to close this since Chris tested this in late October, and there haven't been any responses to requests for additional info. If we get additional info, we can reopen to track a fix. > Hadoop 3.0 build broken on windows > -- > > Key: HADOOP-13586 > URL: https://issues.apache.org/jira/browse/HADOOP-13586 > Project: Hadoop Common > Issue Type: Bug > Components: build >Affects Versions: 3.0.0-alpha1 > Environment: Windows Server >Reporter: Steve Loughran >Priority: Blocker > > Builds on windows fail, even before getting to the native bits > Looks like dev-support/bin/dist-copynativelibs isn't windows-ready -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13793: --- Fix Version/s: HADOOP-13345 > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Fix For: HADOOP-13345 > > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch, HADOOP-13793-HADOOP-13345.003.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
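The injection approach described in the issue above — making S3A observe list-after-write inconsistency deterministically instead of waiting for real S3 to misbehave — can be sketched with a toy in-memory store. This is a hypothetical illustration only, not the actual S3Guard test client; the class name and delay handling are invented for the example:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of listing inconsistency: a key "written" at time t only
// becomes visible to list() after a configured delay has elapsed, which
// mimics an eventually-consistent S3 listing in a deterministic way.
public class InconsistentListingStore {
    private final long delayMs;
    private final Map<String, Long> writeTimes = new HashMap<>();

    public InconsistentListingStore(long delayMs) {
        this.delayMs = delayMs;
    }

    // Record a write at the given (injected) clock time.
    public void put(String key, long nowMs) {
        writeTimes.put(key, nowMs);
    }

    // Only keys written at least delayMs ago are listed.
    public List<String> list(long nowMs) {
        List<String> visible = new ArrayList<>();
        for (Map.Entry<String, Long> e : writeTimes.entrySet()) {
            if (nowMs - e.getValue() >= delayMs) {
                visible.add(e.getKey());
            }
        }
        Collections.sort(visible);
        return visible;
    }
}
```

With a store like this, an integration test can assert the difference between the raw (inconsistent) listing and a MetadataStore-backed listing on every run, rather than depending on rare real-world inconsistency.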
[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13793: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Integration s3a test against US standard. +1 Committed to {{HADOOP-13345}} feature branch. Thanks for your great work [~fabbri]. > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch, HADOOP-13793-HADOOP-13345.003.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724622#comment-15724622 ] Varun Vasudev commented on HADOOP-13835: Thanks for the review and commit [~ajisakaa]! > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task > Components: test >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch, HADOOP-13835.007.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724400#comment-15724400 ] Hudson commented on HADOOP-13835: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10945 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10945/]) HADOOP-13835. Move Google Test Framework code from mapreduce to (aajisaka: rev b2a3d6c519d83283a49b0d2172dcf1de97f9c4bc) * (add) hadoop-common-project/hadoop-common/src/main/native/gtest/include/gtest/gtest.h * (edit) LICENSE.txt * (delete) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/include/gtest/gtest.h * (add) hadoop-common-project/hadoop-common/src/main/native/gtest/gtest-all.cc * (delete) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/main/native/gtest/gtest-all.cc * (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/pom.xml * (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask/src/CMakeLists.txt * (edit) hadoop-common-project/hadoop-common/pom.xml > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task > Components: test >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch, HADOOP-13835.007.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. 
[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13835: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 Status: Resolved (was: Patch Available) Committed this to trunk. Thanks [~vvasudev] for the contribution. > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task > Components: test >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch, HADOOP-13835.007.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13835: --- Component/s: test > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task > Components: test >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch, HADOOP-13835.007.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724120#comment-15724120 ] Hadoop QA commented on HADOOP-13793:

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 18s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. |
| +1 | mvninstall | 7m 28s | HADOOP-13345 passed |
| +1 | compile | 0m 19s | HADOOP-13345 passed |
| +1 | checkstyle | 0m 14s | HADOOP-13345 passed |
| +1 | mvnsite | 0m 27s | HADOOP-13345 passed |
| +1 | mvneclipse | 0m 16s | HADOOP-13345 passed |
| +1 | findbugs | 0m 29s | HADOOP-13345 passed |
| +1 | javadoc | 0m 14s | HADOOP-13345 passed |
| +1 | mvninstall | 0m 20s | the patch passed |
| +1 | compile | 0m 16s | the patch passed |
| +1 | javac | 0m 16s | the patch passed |
| +1 | checkstyle | 0m 11s | the patch passed |
| +1 | mvnsite | 0m 22s | the patch passed |
| +1 | mvneclipse | 0m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 33s | the patch passed |
| +1 | javadoc | 0m 12s | the patch passed |
| +1 | unit | 0m 23s | hadoop-aws in the patch passed. |
| +1 | asflicense | 0m 17s | The patch does not generate ASF License warnings. |
| | | 13m 47s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13793 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841873/HADOOP-13793-HADOOP-13345.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 7becacca0b6e 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HADOOP-13345 / cfd0fbf |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11201/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11201/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch, HADOOP-13793-HADOOP-13345.003.patch > > > Many of us share concerns that testing the
[jira] [Commented] (HADOOP-13865) add tools to classpath by default in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724117#comment-15724117 ] Fei Hui commented on HADOOP-13865: -- I found the code in Hive 2.0.0 > add tools to classpath by default in branch-2 > - > > Key: HADOOP-13865 > URL: https://issues.apache.org/jira/browse/HADOOP-13865 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 2.8.0, 2.7.3 >Reporter: Fei Hui >Assignee: Fei Hui > Attachments: HADOOP-13865-branch-2.001.patch > > > When I run Hive queries, I get errors as follows: > java.lang.NoClassDefFoundError: org/apache/hadoop/tools/DistCpOptions > ... > Other Hadoop apps that use hadoop-tools classes may hit similar errors
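The NoClassDefFoundError quoted above indicates that the hadoop-tools jars are missing from the runtime classpath. A minimal, hypothetical way to probe for that condition from client code (this helper is not part of Hadoop or Hive) might look like:

```java
// Checks whether a class can be resolved on the current classpath without
// initializing it. A false result for org.apache.hadoop.tools.DistCpOptions
// corresponds to the failure mode reported in this issue.
public class ClasspathCheck {
    public static boolean isOnClasspath(String className) {
        try {
            Class.forName(className, false, ClasspathCheck.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(
            isOnClasspath("org.apache.hadoop.tools.DistCpOptions"));
    }
}
```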
[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-13793: -- Attachment: HADOOP-13793-HADOOP-13345.003.patch Attaching v3 patch addressing [~liuml07]'s comments. - Increase inconsistency time period 10 x from 0.5 seconds to 5.0 seconds. - Don't hard code LocalMetadataStore in ITestS3GuardListInconsistency: Besides testing s3a logic is correct, other MetadataStores may wish to integration test with this as well. - Skip the inconsistency test if NullMetadataStore is enabled (it would fail). - Clean up duplicate comment and no-op method overrides. > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch, HADOOP-13793-HADOOP-13345.003.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-13793: -- Status: Patch Available (was: Open) > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch, HADOOP-13793-HADOOP-13345.003.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri updated HADOOP-13793: -- Status: Open (was: Patch Available) > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13864) KMS should not require truststore password
[ https://issues.apache.org/jira/browse/HADOOP-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724023#comment-15724023 ] Hudson commented on HADOOP-13864: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10943 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10943/]) HADOOP-13864. KMS should not require truststore password. Contributed by (xiao: rev a2b5d602201a4f619f6a68ec2168a884190d8de6) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/FileBasedKeyStoresFactory.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestReloadingX509TrustManager.java > KMS should not require truststore password > -- > > Key: HADOOP-13864 > URL: https://issues.apache.org/jira/browse/HADOOP-13864 > Project: Hadoop Common > Issue Type: Bug > Components: kms, security >Reporter: Mike Yoder >Assignee: Mike Yoder > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13864.000.patch > > > Trust store passwords are actually not required for read operations. They're > only needed for writing to the trust store; in reads they serve as an > integrity check. Normal hadoop sslclient.xml files don't require the > truststore password, but when the KMS is used it's required. > If I don't specify a hadoop trust store password I get: > {noformat} > Failed to start namenode. > java.io.IOException: java.security.GeneralSecurityException: The property > 'ssl.client.truststore.password' has not been set in the ssl configuration > file. 
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:428)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:333)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:324)
> at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
> at org.apache.hadoop.util.KMSUtil.createKeyProvider(KMSUtil.java:65)
> at org.apache.hadoop.hdfs.DFSUtil.createKeyProvider(DFSUtil.java:1920)
> at org.apache.hadoop.hdfs.DFSUtil.createKeyProviderCryptoExtension(DFSUtil.java:1934)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:811)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:770)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1548)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1616)
> Caused by: java.security.GeneralSecurityException: The property 'ssl.client.truststore.password' has not been set in the ssl configuration file.
> at org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:199)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:426)
> ... 14 more
> {noformat}
> Note that this _does not_ happen to the namenode when the kms isn't in use.
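The claim in the report — that a truststore password is only needed for writes, and on reads serves merely as an optional integrity check — can be demonstrated with the plain java.security.KeyStore API. This is an illustrative sketch, not the KMS or SSLFactory code path:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;

public class TruststoreReadDemo {
    // Writes an empty JKS truststore protected by a password, then reloads
    // it with a null password: the entries load fine; only the integrity
    // check is skipped when no password is supplied.
    public static int reloadWithoutPassword() throws Exception {
        Path path = Files.createTempFile("truststore", ".jks");
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null); // initialize an empty in-memory store
        try (OutputStream out = Files.newOutputStream(path)) {
            ks.store(out, "secret".toCharArray()); // writing DOES need a password
        }
        KeyStore reread = KeyStore.getInstance("JKS");
        try (InputStream in = Files.newInputStream(path)) {
            reread.load(in, null); // reading does not
        }
        Files.delete(path);
        return reread.size();
    }

    public static void main(String[] args) throws Exception {
        System.out.println("reloaded entries: " + reloadWithoutPassword());
    }
}
```

This matches the fix's rationale: FileBasedKeyStoresFactory can safely treat the client truststore password as optional, since the load succeeds without it.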
[jira] [Commented] (HADOOP-13827) Add reencryptEncryptedKey interface to KMS
[ https://issues.apache.org/jira/browse/HADOOP-13827?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724021#comment-15724021 ] Andrew Wang commented on HADOOP-13827: -- +1 LGTM, thanks for working on this Xiao! > Add reencryptEncryptedKey interface to KMS > -- > > Key: HADOOP-13827 > URL: https://issues.apache.org/jira/browse/HADOOP-13827 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13827.02.patch, HADOOP-13827.03.patch, > HDFS-11159.01.patch > > > This is the KMS part. Please refer to HDFS-10899 for the design doc. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13852) hadoop build to allow hadoop version property to be explicitly set
[ https://issues.apache.org/jira/browse/HADOOP-13852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15724011#comment-15724011 ] Andrew Wang commented on HADOOP-13852: -- Thanks for the explanation Steve. This seems harmless, so I'm +1 pending the YARN instance also being addressed. > hadoop build to allow hadoop version property to be explicitly set > -- > > Key: HADOOP-13852 > URL: https://issues.apache.org/jira/browse/HADOOP-13852 > Project: Hadoop Common > Issue Type: New Feature > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13852-001.patch > > > Hive (and transitively) Spark, won't start on Hadoop 3.x as the shim layer > rejects Hadoop v3. As a workaround pending a Hive fix, allow the build to > have the Hadoop version (currently set to pom.version) to be overridden > manually. > This will not affect version names of artifacts, merely the declared Hadoop > version visible in {{VersionInfo.getVersion()}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
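For context on why Hive's shim layer rejects Hadoop 3.x: a shim typically parses the major version out of {{VersionInfo.getVersion()}} and only accepts known majors, which is why overriding the declared version string works around the failure. The class below is a hypothetical sketch of that pattern, not Hive's actual shim code:

```java
// Sketch of a shim-style major-version gate over the string returned by
// VersionInfo.getVersion(). A shim written before Hadoop 3 existed would
// accept only majors it knows about, rejecting "3.0.0-alpha2" outright.
public class VersionShim {
    // "3.0.0-alpha2" -> 3 ; splits on '.' or '-' and parses the first field.
    public static int majorVersion(String version) {
        return Integer.parseInt(version.split("[.\\-]")[0]);
    }

    public static boolean isSupported(String version) {
        int major = majorVersion(version);
        return major == 1 || major == 2; // Hadoop 3.x is unknown, so rejected
    }
}
```

Overriding the build-time version property changes only what this kind of check sees, not the artifact version names, as the issue description states.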
[jira] [Updated] (HADOOP-13864) KMS should not require truststore password
[ https://issues.apache.org/jira/browse/HADOOP-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13864: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 Status: Resolved (was: Patch Available) Committed to trunk. Thanks [~yoderme] for the contribution, and [~andrew.wang] for offline reviews! > KMS should not require truststore password > -- > > Key: HADOOP-13864 > URL: https://issues.apache.org/jira/browse/HADOOP-13864 > Project: Hadoop Common > Issue Type: Bug > Components: kms, security >Reporter: Mike Yoder >Assignee: Mike Yoder > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13864.000.patch > > > Trust store passwords are actually not required for read operations. They're > only needed for writing to the trust store; in reads they serve as an > integrity check. Normal hadoop sslclient.xml files don't require the > truststore password, but when the KMS is used it's required. > If I don't specify a hadoop trust store password I get: > {noformat} > Failed to start namenode. > java.io.IOException: java.security.GeneralSecurityException: The property > 'ssl.client.truststore.password' has not been set in the ssl configuration > file. 
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:428)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:333)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:324)
> at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
> at org.apache.hadoop.util.KMSUtil.createKeyProvider(KMSUtil.java:65)
> at org.apache.hadoop.hdfs.DFSUtil.createKeyProvider(DFSUtil.java:1920)
> at org.apache.hadoop.hdfs.DFSUtil.createKeyProviderCryptoExtension(DFSUtil.java:1934)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:811)
> at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:770)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1548)
> at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1616)
> Caused by: java.security.GeneralSecurityException: The property 'ssl.client.truststore.password' has not been set in the ssl configuration file.
> at org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:199)
> at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131)
> at org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:426)
> ... 14 more
> {noformat}
> Note that this _does not_ happen to the namenode when the kms isn't in use.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
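The behavior this issue relies on comes straight from the JDK's {{java.security.KeyStore}} API: {{KeyStore.store}} requires a password, but {{KeyStore.load}} accepts {{null}} and simply skips the integrity check while still returning all entries. A minimal standalone sketch (plain JDK, not Hadoop code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

public class TruststoreNoPassword {
    public static void main(String[] args) throws Exception {
        // Writing a truststore REQUIRES a password...
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null); // initialize an empty in-memory store
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ks.store(out, "changeit".toCharArray());

        // ...but reading it back does not: a null password skips only the
        // integrity check, and all entries remain readable.
        KeyStore reloaded = KeyStore.getInstance("JKS");
        reloaded.load(new ByteArrayInputStream(out.toByteArray()), null);
        System.out.println("entries: " + reloaded.size());
    }
}
```

The second {{load}} succeeds without the password, which is why requiring {{ssl.client.truststore.password}} for a read-only KMS client truststore is unnecessary.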
[jira] [Commented] (HADOOP-13864) KMS should not require truststore password
[ https://issues.apache.org/jira/browse/HADOOP-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723980#comment-15723980 ] Xiao Chen commented on HADOOP-13864: The failed test looks unrelated and passed locally. This commit results in a behavior change, but as Mike explained here, I think we should consider it a bug, hence not marking it incompatible. > KMS should not require truststore password > -- > > Key: HADOOP-13864 > URL: https://issues.apache.org/jira/browse/HADOOP-13864 > Project: Hadoop Common > Issue Type: Bug > Components: kms, security >Reporter: Mike Yoder >Assignee: Mike Yoder > Attachments: HADOOP-13864.000.patch > > > Trust store passwords are actually not required for read operations. They're > only needed for writing to the trust store; in reads they serve as an > integrity check. Normal hadoop sslclient.xml files don't require the > truststore password, but when the KMS is used it's required. > If I don't specify a hadoop trust store password I get: > {noformat} > Failed to start namenode. > java.io.IOException: java.security.GeneralSecurityException: The property > 'ssl.client.truststore.password' has not been set in the ssl configuration > file. 
> at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:428) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:333) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:324) > at > org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95) > at org.apache.hadoop.util.KMSUtil.createKeyProvider(KMSUtil.java:65) > at org.apache.hadoop.hdfs.DFSUtil.createKeyProvider(DFSUtil.java:1920) > at > org.apache.hadoop.hdfs.DFSUtil.createKeyProviderCryptoExtension(DFSUtil.java:1934) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:811) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:770) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1548) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1616) > Caused by: java.security.GeneralSecurityException: The property > 'ssl.client.truststore.password' has not been set in the ssl configuration > file. > at > org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:199) > at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:426) > ... 14 more > {noformat} > Note that this _does not_ happen to the namenode when the kms isn't in use. 
[jira] [Commented] (HADOOP-13780) LICENSE/NOTICE are out of date for source artifacts
[ https://issues.apache.org/jira/browse/HADOOP-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723968#comment-15723968 ] Xiao Chen commented on HADOOP-13780: Regarding #1, thanks [~ajisakaa] for the commands from HADOOP-12893; I built a new output at https://gist.github.com/xiao-chen/6131ec9718ec4b1af286f048bd714c6f . Also looked at Apache Rat, which seems too naive, and Apache Whisker, which isn't documented clearly enough (to me). Quick look at #2: - {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}} is actually noted in LICENSE! It is named with a .gz extension. Since HADOOP-10075 removed the .gz and left the extracted .js, I think updating the name and moving it to the MIT License section in our LICENSE should suffice. This is legal since the header of that file says it's MIT, and Apache [does not require|http://www.apache.org/dev/licensing-howto.html#mod-notice] it to be in the NOTICE. The bad news is that {{js}}, {{css}}, or anything else outside of a Maven dependency isn't checked by the tool. :( > LICENSE/NOTICE are out of date for source artifacts > --- > > Key: HADOOP-13780 > URL: https://issues.apache.org/jira/browse/HADOOP-13780 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Xiao Chen >Priority: Blocker > > we need to perform a check that all of our bundled works are properly > accounted for in our LICENSE/NOTICE files. > At a minimum, it looks like HADOOP-10075 introduced some changes that have > not been accounted for. > e.g. the jsTree plugin found at > {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}} > does not show up in LICENSE.txt to (a) indicate that we're redistributing it > under the MIT option and (b) give proper citation of the original copyright > holder per ASF policy. 
[jira] [Updated] (HADOOP-13398) prevent user classes from loading classes in the parent classpath with ApplicationClassLoader
[ https://issues.apache.org/jira/browse/HADOOP-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sangjin Lee updated HADOOP-13398: - Description: Today, a user class is able to trigger loading a class from Hadoop's dependencies, with or without the use of {{ApplicationClassLoader}}, and it creates an implicit dependence from users' code on Hadoop's dependencies, and as a result dependency conflicts. We should modify {{ApplicationClassLoader}} to prevent a user class from loading a class from the parent classpath. This should also cover resource loading (including {{ClassLoader.getResources()}} and as a corollary {{ServiceLoader}}). was: Today, a user class is able to trigger loading a class from Hadoop's dependencies, with or without the use of {{ApplicationClassLoader}}, and it creates an implicit dependence from users' code on Hadoop's dependencies, and as a result dependency conflicts. We should modify {{ApplicationClassLoader}} to prevent a user class from loading a class from the parent classpath. This should also cover resource loading (and as a corollary {{ServiceLoader}}). > prevent user classes from loading classes in the parent classpath with > ApplicationClassLoader > - > > Key: HADOOP-13398 > URL: https://issues.apache.org/jira/browse/HADOOP-13398 > Project: Hadoop Common > Issue Type: Sub-task > Components: util >Reporter: Sangjin Lee >Assignee: Sangjin Lee >Priority: Critical > Attachments: HADOOP-13398-HADOOP-13070.01.patch > > > Today, a user class is able to trigger loading a class from Hadoop's > dependencies, with or without the use of {{ApplicationClassLoader}}, and it > creates an implicit dependence from users' code on Hadoop's dependencies, and > as a result dependency conflicts. > We should modify {{ApplicationClassLoader}} to prevent a user class from > loading a class from the parent classpath. 
> This should also cover resource loading (including > {{ClassLoader.getResources()}} and as a corollary {{ServiceLoader}}).
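The isolation proposed above can be illustrated with a toy child-first classloader that never falls back to its parent except for an allowlist of JDK packages. This is a simplified sketch of the idea, not Hadoop's actual {{ApplicationClassLoader}} (which also supports a configurable system-classes list and resource isolation):

```java
import java.net.URL;
import java.net.URLClassLoader;

// Toy classloader that refuses to delegate to its parent except for JDK
// classes, so user code cannot accidentally pick up the framework's
// dependencies. Illustrative only; Hadoop's real implementation differs.
public class IsolatingClassLoader extends URLClassLoader {
    private static final String[] SYSTEM_PREFIXES = {"java.", "javax."};

    public IsolatingClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        for (String prefix : SYSTEM_PREFIXES) {
            if (name.startsWith(prefix)) {
                return super.loadClass(name, resolve); // delegate JDK classes only
            }
        }
        // Look only at this loader's own URLs; never fall back to the parent.
        Class<?> c = findLoadedClass(name);
        if (c == null) {
            c = findClass(name); // throws ClassNotFoundException if absent here
        }
        if (resolve) {
            resolveClass(c);
        }
        return c;
    }
}
```

With an empty URL list, such a loader can still load {{java.lang.String}} (delegated) but throws {{ClassNotFoundException}} for any class that only exists on the parent's classpath, which is exactly the hard boundary the issue describes.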
[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723790#comment-15723790 ] Mingliang Liu commented on HADOOP-13793: By the way, I just consolidated the patch with [HADOOP-13449] and ran integration tests over DynamoDB. The inconsistency integration tests pass using DynamoDBMetadataStore (delay 50s, run 10 times without failure; using NullMetadataStore it fails 8 of 10 times). I'll rebase that patch again after I commit the new patch for this one (hopefully with a clean Jenkins run). Perhaps we can make the delay even larger, say 10s. Thanks, > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though.
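The inconsistency injection discussed above boils down to a listing that hides newly created keys for a configurable delay, mimicking S3's eventually consistent list-after-put. A generic standalone sketch of that technique (not the patch's actual S3A classes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in store whose listing deterministically hides keys newer than
// `delayMs`, so consistency code paths can be exercised on demand.
public class DelayedVisibilityStore {
    private final Map<String, Long> createTimes = new ConcurrentHashMap<>();
    private final long delayMs;

    public DelayedVisibilityStore(long delayMs) {
        this.delayMs = delayMs;
    }

    public void put(String key) {
        createTimes.put(key, System.currentTimeMillis());
    }

    public List<String> list() {
        long now = System.currentTimeMillis();
        List<String> visible = new ArrayList<>();
        for (Map.Entry<String, Long> e : createTimes.entrySet()) {
            if (now - e.getValue() >= delayMs) {
                visible.add(e.getKey()); // only keys older than the delay show up
            }
        }
        return visible;
    }
}
```

A test against such a store fails reliably without a metadata store and passes with one, which is the deterministic behavior the review thread is tuning the delay for.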
[jira] [Commented] (HADOOP-13780) LICENSE/NOTICE are out of date for source artifacts
[ https://issues.apache.org/jira/browse/HADOOP-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723765#comment-15723765 ] Xiao Chen commented on HADOOP-13780: Thanks [~busbey] for reporting this. I'd like to take a shot at this one to move alpha2 forward. It seems more things have been added since Akira's last update (188 lines now in my run today https://gist.github.com/xiao-chen/336b64b1b17e8813fd5b980013ac7eb4) I plan to do the following things here: # Fix the diff in L since HADOOP-12893, in a similar way. # Manually fix the jstree stuff, and anything else that turns out to be missing. It looks like this has to be manual, absent more sophisticated tooling. As Robert said, HADOOP-10075 only extracted that jquery.jstree.js.gz, which was committed by YARN-1. # Add a way to verify this in pre-commit, so this work will be caught upfront in the future. 1 and 2 should unblock the release; 3 would make our lives easier. > LICENSE/NOTICE are out of date for source artifacts > --- > > Key: HADOOP-13780 > URL: https://issues.apache.org/jira/browse/HADOOP-13780 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Xiao Chen >Priority: Blocker > > we need to perform a check that all of our bundled works are properly > accounted for in our LICENSE/NOTICE files. > At a minimum, it looks like HADOOP-10075 introduced some changes that have > not been accounted for. > e.g. the jsTree plugin found at > {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}} > does not show up in LICENSE.txt to (a) indicate that we're redistributing it > under the MIT option and (b) give proper citation of the original copyright > holder per ASF policy.
[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723683#comment-15723683 ] Ted Yu commented on HADOOP-13866: - Haibo: If you have bandwidth, feel free to attach patch(es). I may be busy with other work. Thanks > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch > > > netty-all 4.1.1.Final is a stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion.
[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723679#comment-15723679 ] Mingliang Liu commented on HADOOP-13793: Another nit is that the DefaultS3ClientFactory should have only one javadoc comment. {code} 39 /** 40 * The default factory implementation, which calls the AWS SDK to configure 41 * and create an {@link AmazonS3Client} that communicates with the S3 service. 42 */ 43 44 /** 45 * The default factory implementation, which calls the AWS SDK to configure 46 * and create an {@link AmazonS3Client} that communicates with the S3 service. 47 */ {code} > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though.
[jira] [Updated] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-13866: Assignee: Ted Yu (was: Haibo Chen) > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Ted Yu > Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch > > > netty-all 4.1.1.Final is a stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion.
[jira] [Updated] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibo Chen updated HADOOP-13866: Assignee: Haibo Chen > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Haibo Chen > Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch > > > netty-all 4.1.1.Final is a stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion.
[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723584#comment-15723584 ] Aaron Fabbri commented on HADOOP-13793: --- Thanks for the review [~liuml07]. These are good comments. I will address them and post a new patch shortly. > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though.
[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723516#comment-15723516 ] Mingliang Liu commented on HADOOP-13793: {code} 51@Override 52public void teardown() throws Exception { 53 super.teardown(); 54} 55 56@Override 57public void nameThread() { 58 super.nameThread(); 59} 60 61@Override 62protected int getTestTimeoutMillis() { 63 return super.getTestTimeoutMillis(); 64} {code} Can we remove these code lines? They seem unnecessary. For the inconsistent key delay in ms, I found that 500ms is not enough to make the test fail (when commenting out the {{S3Guard.S3_METADATA_STORE_IMPL}} config key in the test). I tried 1000ms+ and found it works. Do you mind increasing the default interval to 1~3 seconds? In {{ITestS3GuardListConsistency#createContract}}, this is hard-coded. Can we use the config files instead, so we can run with DDBMetadataStore? {code} conf.setClass(S3Guard.S3_METADATA_STORE_IMPL, LocalMetadataStore.class, MetadataStore.class); {code} Thanks, > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though.
[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723508#comment-15723508 ] Hadoop QA commented on HADOOP-13866:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 11s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| 0 | mvndep | 1m 43s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 38s | trunk passed |
| +1 | compile | 9m 37s | trunk passed |
| +1 | checkstyle | 1m 33s | trunk passed |
| +1 | mvnsite | 1m 14s | trunk passed |
| +1 | mvneclipse | 0m 34s | trunk passed |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-project |
| +1 | findbugs | 1m 50s | trunk passed |
| +1 | javadoc | 1m 3s | trunk passed |
| 0 | mvndep | 0m 20s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 29s | hadoop-hdfs in the patch failed. |
| -1 | compile | 1m 39s | root in the patch failed. |
| -1 | javac | 1m 39s | root in the patch failed. |
| +1 | checkstyle | 1m 29s | the patch passed |
| -1 | mvnsite | 0m 33s | hadoop-hdfs in the patch failed. |
| +1 | mvneclipse | 0m 25s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| 0 | findbugs | 0m 0s | Skipped patched modules with no Java source: hadoop-project |
| -1 | findbugs | 0m 19s | hadoop-hdfs in the patch failed. |
| +1 | javadoc | 0m 54s | the patch passed |
| +1 | unit | 0m 12s | hadoop-project in the patch passed. |
| -1 | unit | 0m 34s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 23s | The patch does not generate ASF License warnings. |
| | | 56m 17s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13866 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841811/HADOOP-13866.v2.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux 68a6b37391f0 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk
[jira] [Assigned] (HADOOP-13780) LICENSE/NOTICE are out of date for source artifacts
[ https://issues.apache.org/jira/browse/HADOOP-13780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen reassigned HADOOP-13780: -- Assignee: Xiao Chen > LICENSE/NOTICE are out of date for source artifacts > --- > > Key: HADOOP-13780 > URL: https://issues.apache.org/jira/browse/HADOOP-13780 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Busbey >Assignee: Xiao Chen >Priority: Blocker > > we need to perform a check that all of our bundled works are properly > accounted for in our LICENSE/NOTICE files. > At a minimum, it looks like HADOOP-10075 introduced some changes that have > not been accounted for. > e.g. the jsTree plugin found at > {{hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/resources/webapps/static/jt/jquery.jstree.js}} > does not show up in LICENSE.txt to (a) indicate that we're redistributing it > under the MIT option and (b) give proper citation of the original copyright > holder per ASF policy.
[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723411#comment-15723411 ] Haibo Chen commented on HADOOP-13866: - My quick check shows that upgrading to 4.1.1.Final breaks DtpHttp2Handler. It looks like 4.1.1.Final removed some methods that HDFS currently uses. > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch > > > netty-all 4.1.1.Final is a stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion.
[jira] [Commented] (HADOOP-10930) HarFsInputStream should implement PositionedReadable in a thread-safe manner
[ https://issues.apache.org/jira/browse/HADOOP-10930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723404#comment-15723404 ] Hudson commented on HADOOP-10930: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10941 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10941/]) Revert "HADOOP-10930. Refactor: Wrap Datanode IO related operations. (xyao: rev dcedb72af468128458e597f08d22f5c34b744ae5) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockSender.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataStorage.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetAsyncDiskService.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/ReplicaInputStreams.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/BlockPoolSlice.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockReceiver.java * (edit) 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DNConf.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/LocalReplica.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/ReplicaOutputStreams.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/LocalReplicaInPipeline.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestFileAppend.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestSimulatedFSDataset.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ReplicaInPipeline.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalReplicaInPipeline.java > HarFsInputStream should implement PositionedReadable in a thread-safe manner > - > > Key: HADOOP-10930 > URL: https://issues.apache.org/jira/browse/HADOOP-10930 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: Yi Liu >Assignee: Yi Liu > Labels: BB2015-05-TBR > Attachments: HADOOP-10930.001.patch > > > The {{PositionedReadable}} definition requires that implementations of its > interfaces be thread-safe. > HarFsInputStream doesn't implement these interfaces in a thread-safe way; > this JIRA is to fix that.
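The contract cited in the issue above is that a positioned read must be safe to call from multiple threads and must not disturb the stream's current offset. A standalone sketch over a byte array (illustrative only, not HarFsInputStream itself) shows the usual seek-read-restore-under-lock pattern:

```java
// Toy stream with a sequential read() path and a thread-safe positioned
// read that saves and restores the offset under the stream's lock.
public class PositionedByteStream {
    private final byte[] data;
    private long pos; // offset used by the sequential read() path

    public PositionedByteStream(byte[] data) {
        this.data = data;
    }

    public synchronized int read() {
        return pos < data.length ? data[(int) pos++] & 0xFF : -1;
    }

    // Positioned read: take the lock, remember the offset, read at the
    // requested position, then restore the offset so concurrent
    // sequential readers are unaffected.
    public synchronized int read(long position, byte[] buf, int off, int len) {
        if (position < 0) {
            return -1;
        }
        long saved = pos;
        try {
            pos = position;
            int n = 0;
            int b;
            while (n < len && (b = read()) != -1) {
                buf[off + n++] = (byte) b;
            }
            return n == 0 ? -1 : n;
        } finally {
            pos = saved; // sequential position is unchanged for other callers
        }
    }
}
```

Without the {{synchronized}} keyword and the save/restore in {{finally}}, two threads interleaving positioned and sequential reads could observe each other's offset changes, which is exactly the bug class the JIRA describes.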
[jira] [Updated] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-13866: Attachment: HADOOP-13866.v2.patch > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch > > > netty-all 4.1.1.Final is a stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion.
[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723337#comment-15723337 ] Ted Yu commented on HADOOP-13866: - If I name the patch HDFS-13866.v1.patch, would QA report test results back to this JIRA? > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch > > > netty-all 4.1.1.Final is stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723330#comment-15723330 ] Steve Loughran commented on HADOOP-13866: - really we should kick off the build and test for all modules. Ted, can you also submit this as a patch for HDFS too? That's the module pulling in netty. thx > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch > > > netty-all 4.1.1.Final is stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723327#comment-15723327 ] Ted Yu commented on HADOOP-13866: - netty-all is used by: hadoop-hdfs-project/hadoop-hdfs-client/pom.xml: netty-all hadoop-hdfs-project/hadoop-hdfs/pom.xml: netty-all Do you have a suggestion for which file I should introduce a trivial change into, to trigger a QA run? > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch > > > netty-all 4.1.1.Final is stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723315#comment-15723315 ] Andrew Wang commented on HADOOP-13866: -- It seems like precommit also didn't run any unit tests. We should do a unit test run for affected components. > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch > > > netty-all 4.1.1.Final is stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723310#comment-15723310 ] Andrew Wang commented on HADOOP-13866: -- Is this an incompatible change? > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch > > > netty-all 4.1.1.Final is stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13675) Bug in return value for delete() calls in WASB
[ https://issues.apache.org/jira/browse/HADOOP-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723285#comment-15723285 ] Hudson commented on HADOOP-13675: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10940 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10940/]) HADOOP-13675. Bug in return value for delete() calls in WASB. (liuml07: rev 15dd1f3381069c5fdc6690e3ab1907a133ba14bf) * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java * (add) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemConcurrencyLive.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeFileSystemStore.java > Bug in return value for delete() calls in WASB > -- > > Key: HADOOP-13675 > URL: https://issues.apache.org/jira/browse/HADOOP-13675 > Project: Hadoop Common > Issue Type: Bug > Components: azure, fs/azure >Affects Versions: 2.8.0 >Reporter: Dushyanth >Assignee: Dushyanth > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13675.001.patch, HADOOP-13675.002.patch, > HADOOP-13675.003.patch, HADOOP-13675.004.patch > > > The current implementation of WASB does not correctly handle multiple > threads/clients calling delete on the same file. The expected behavior in > such scenarios is that only one of the threads should delete the file and return > true, while all other threads should receive false. However, in the current > implementation, even though only one thread deletes the file, multiple clients > incorrectly get "true" as the return from the delete() call. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
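The contract described above — exactly one concurrent caller of delete() observes true — can be sketched with plain JDK types. This is a hypothetical in-memory mock, not the WASB code: backing the namespace with a ConcurrentHashMap makes remove() the single linearization point, so only one racing thread can win.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical in-memory store illustrating the delete() contract the fix
// enforces: when N threads race to delete the same key, exactly one sees true.
public class AtomicDeleteSketch {
    private final ConcurrentHashMap<String, byte[]> blobs = new ConcurrentHashMap<>();

    void create(String key) { blobs.put(key, new byte[0]); }

    // ConcurrentHashMap.remove() returns the previous value exactly once,
    // so only one caller can observe non-null -- the thread that deleted.
    boolean delete(String key) { return blobs.remove(key) != null; }

    public static void main(String[] args) throws InterruptedException {
        AtomicDeleteSketch store = new AtomicDeleteSketch();
        store.create("container/file");
        AtomicInteger winners = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 8; i++) {
            pool.execute(() -> {
                if (store.delete("container/file")) {
                    winners.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("winners=" + winners.get()); // exactly one thread wins
    }
}
```

The pre-fix WASB bug corresponds to checking existence and then deleting as two separate steps, which lets several threads all see "existed, so deleted" and return true.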
[jira] [Commented] (HADOOP-13864) KMS should not require truststore password
[ https://issues.apache.org/jira/browse/HADOOP-13864?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723264#comment-15723264 ] Xiao Chen commented on HADOOP-13864: Thanks [~yoderme] for reporting the issue and providing a patch! +1, will commit this at the end of today if no objections. > KMS should not require truststore password > -- > > Key: HADOOP-13864 > URL: https://issues.apache.org/jira/browse/HADOOP-13864 > Project: Hadoop Common > Issue Type: Bug > Components: kms, security >Reporter: Mike Yoder >Assignee: Mike Yoder > Attachments: HADOOP-13864.000.patch > > > Trust store passwords are actually not required for read operations. They're > only needed for writing to the trust store; in reads they serve as an > integrity check. Normal hadoop ssl-client.xml files don't require the > truststore password, but when the KMS is used it's required. > If I don't specify a hadoop trust store password I get: > {noformat} > Failed to start namenode. > java.io.IOException: java.security.GeneralSecurityException: The property > 'ssl.client.truststore.password' has not been set in the ssl configuration > file. 
> at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:428) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:333) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:324) > at > org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95) > at org.apache.hadoop.util.KMSUtil.createKeyProvider(KMSUtil.java:65) > at org.apache.hadoop.hdfs.DFSUtil.createKeyProvider(DFSUtil.java:1920) > at > org.apache.hadoop.hdfs.DFSUtil.createKeyProviderCryptoExtension(DFSUtil.java:1934) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:811) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:770) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1548) > at > org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1616) > Caused by: java.security.GeneralSecurityException: The property > 'ssl.client.truststore.password' has not been set in the ssl configuration > file. > at > org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory.init(FileBasedKeyStoresFactory.java:199) > at org.apache.hadoop.security.ssl.SSLFactory.init(SSLFactory.java:131) > at > org.apache.hadoop.crypto.key.kms.KMSClientProvider.<init>(KMSClientProvider.java:426) > ... 14 more > {noformat} > Note that this _does not_ happen to the namenode when the kms isn't in use. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
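The claim in the report — a truststore password is only needed for writes, and on reads merely serves as an integrity check — can be demonstrated with the plain JDK KeyStore API. This is a standalone sketch, independent of the KMS code:

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;

// Demonstrates that java.security.KeyStore can be *read* with a null
// password: a password is required to store the file, but passing null to
// load() simply skips the integrity check instead of failing.
public class TruststoreReadSketch {
    public static void main(String[] args) throws Exception {
        java.io.File f = java.io.File.createTempFile("trust", ".jks");
        f.deleteOnExit();

        // Writing the store requires a password.
        KeyStore ks = KeyStore.getInstance("JKS");
        ks.load(null, null); // initialize an empty in-memory store
        try (FileOutputStream out = new FileOutputStream(f)) {
            ks.store(out, "secret".toCharArray());
        }

        // Reading it back succeeds with a null password (no integrity check).
        KeyStore read = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(f)) {
            read.load(in, null);
        }
        System.out.println("loaded, entries=" + read.size());
    }
}
```

This is why requiring `ssl.client.truststore.password` for a client that only ever reads the truststore is stricter than the underlying JDK API demands.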
[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723265#comment-15723265 ] Hadoop QA commented on HADOOP-13866: - *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| +1 | mvninstall | 7m 15s | trunk passed |
| +1 | compile | 0m 8s | trunk passed |
| +1 | mvnsite | 0m 11s | trunk passed |
| +1 | mvneclipse | 0m 10s | trunk passed |
| +1 | javadoc | 0m 9s | trunk passed |
| +1 | mvninstall | 0m 7s | the patch passed |
| +1 | compile | 0m 6s | the patch passed |
| +1 | javac | 0m 6s | the patch passed |
| +1 | mvnsite | 0m 8s | the patch passed |
| +1 | mvneclipse | 0m 6s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 2s | The patch has no ill-formed XML file. |
| +1 | javadoc | 0m 7s | the patch passed |
| +1 | unit | 0m 6s | hadoop-project in the patch passed. |
| +1 | asflicense | 0m 16s | The patch does not generate ASF License warnings. |
| | | 9m 39s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13866 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841801/HADOOP-13866.v1.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml |
| uname | Linux 5ce3c2c160da 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 15dd1f3 |
| Default Java | 1.8.0_111 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11199/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11199/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated. 
> Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch > > > netty-all 4.1.1.Final is stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13793) s3guard: add inconsistency injection, integration tests
[ https://issues.apache.org/jira/browse/HADOOP-13793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723261#comment-15723261 ] Mingliang Liu commented on HADOOP-13793: Thanks for this excellent work here, [~fabbri]. I think the patch looks good overall. I'll run the integration tests shortly and commit it after that. Thanks, > s3guard: add inconsistency injection, integration tests > --- > > Key: HADOOP-13793 > URL: https://issues.apache.org/jira/browse/HADOOP-13793 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Aaron Fabbri >Assignee: Aaron Fabbri > Attachments: HADOOP-13793-HADOOP-13345.001.patch, > HADOOP-13793-HADOOP-13345.002.patch > > > Many of us share concerns that testing the consistency features of S3Guard > will be difficult if we depend on the rare and unpredictable occurrence of > actual inconsistency in S3 to exercise those code paths. > I think we should have a mechanism for injecting failure to force exercising > of the consistency codepaths in S3Guard. > Requirements: > - Integration tests that cause S3A to see the types of inconsistency we > address with S3Guard. > - These are deterministic integration tests. > Unit tests are possible as well, if we were to stub out the S3Client. That > may be less bang for the buck, though. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
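The inconsistency-injection idea in the issue description can be sketched with plain JDK types. This is a hypothetical illustration, not the S3Guard test code: a wrapper store deterministically hides newly written keys from reads for a fixed number of calls, simulating S3's read-after-write inconsistency so the consistency code paths can be exercised on demand.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of deterministic inconsistency injection: newly
// written keys are invisible for a configurable number of subsequent
// reads, mimicking eventual consistency without relying on real S3.
public class InconsistentStoreSketch {
    private final Map<String, String> store = new HashMap<>();
    private final Map<String, Integer> hideFor = new HashMap<>();
    private final int delayCalls;

    InconsistentStoreSketch(int delayCalls) { this.delayCalls = delayCalls; }

    void put(String key, String value) {
        store.put(key, value);
        hideFor.put(key, delayCalls); // invisible for the next delayCalls reads
    }

    String get(String key) {
        int remaining = hideFor.getOrDefault(key, 0);
        if (remaining > 0) {
            hideFor.put(key, remaining - 1);
            return null; // injected read-after-write inconsistency
        }
        return store.get(key);
    }

    public static void main(String[] args) {
        InconsistentStoreSketch s = new InconsistentStoreSketch(2);
        s.put("a/key", "v1");
        System.out.println(s.get("a/key")); // null (1st injected miss)
        System.out.println(s.get("a/key")); // null (2nd injected miss)
        System.out.println(s.get("a/key")); // v1  (now visible)
    }
}
```

Because the delay is counted in calls rather than wall-clock time, tests built on this pattern stay deterministic, which is the key requirement the issue lists.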
[jira] [Updated] (HADOOP-13675) Bug in return value for delete() calls in WASB
[ https://issues.apache.org/jira/browse/HADOOP-13675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13675: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 Status: Resolved (was: Patch Available) Committed to {{trunk}} and {{branch-2}} branches. Thanks for your contribution [~dchickabasapa] and [~jameeln]. Thanks [~cnauroth] for the review. > Bug in return value for delete() calls in WASB > -- > > Key: HADOOP-13675 > URL: https://issues.apache.org/jira/browse/HADOOP-13675 > Project: Hadoop Common > Issue Type: Bug > Components: azure, fs/azure >Affects Versions: 2.8.0 >Reporter: Dushyanth >Assignee: Dushyanth > Fix For: 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13675.001.patch, HADOOP-13675.002.patch, > HADOOP-13675.003.patch, HADOOP-13675.004.patch > > > The current implementation of WASB does not correctly handle multiple > threads/clients calling delete on the same file. The expected behavior in > such scenarios is that only one of the threads should delete the file and return > true, while all other threads should receive false. However, in the current > implementation, even though only one thread deletes the file, multiple clients > incorrectly get "true" as the return from the delete() call. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-13866: Status: Patch Available (was: Open) > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch > > > netty-all 4.1.1.Final is stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
[ https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-13866: Attachment: HADOOP-13866.v1.patch > Upgrade netty-all to 4.1.1.Final > > > Key: HADOOP-13866 > URL: https://issues.apache.org/jira/browse/HADOOP-13866 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu > Attachments: HADOOP-13866.v1.patch > > > netty-all 4.1.1.Final is stable release which we should upgrade to. > See bottom of HADOOP-12927 for related discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12927) Update netty-all to 4.0.34.Final
[ https://issues.apache.org/jira/browse/HADOOP-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723193#comment-15723193 ] Ted Yu commented on HADOOP-12927: - Logged HADOOP-13866 > Update netty-all to 4.0.34.Final > > > Key: HADOOP-12927 > URL: https://issues.apache.org/jira/browse/HADOOP-12927 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.7.2 >Reporter: Hendy Irawan > > Pull request: https://github.com/apache/hadoop/pull/84 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final
Ted Yu created HADOOP-13866: --- Summary: Upgrade netty-all to 4.1.1.Final Key: HADOOP-13866 URL: https://issues.apache.org/jira/browse/HADOOP-13866 Project: Hadoop Common Issue Type: Improvement Reporter: Ted Yu netty-all 4.1.1.Final is a stable release which we should upgrade to. See the bottom of HADOOP-12927 for related discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12927) Update netty-all to 4.0.34.Final
[ https://issues.apache.org/jira/browse/HADOOP-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723185#comment-15723185 ] Haibo Chen commented on HADOOP-12927: - [~ted_yu] I think we can close this as not applicable and create a new jira. > Update netty-all to 4.0.34.Final > > > Key: HADOOP-12927 > URL: https://issues.apache.org/jira/browse/HADOOP-12927 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.7.2 >Reporter: Hendy Irawan > > Pull request: https://github.com/apache/hadoop/pull/84 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-12927) Update netty-all to 4.0.34.Final
[ https://issues.apache.org/jira/browse/HADOOP-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723073#comment-15723073 ] Ted Yu edited comment on HADOOP-12927 at 12/5/16 7:28 PM: -- We can upgrade to 4.1.1.Final version. Let me know whether a new JIRA is needed for 4.1.1.Final version was (Author: yuzhih...@gmail.com): We can upgrade to 4.1.1.Final version. > Update netty-all to 4.0.34.Final > > > Key: HADOOP-12927 > URL: https://issues.apache.org/jira/browse/HADOOP-12927 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.7.2 >Reporter: Hendy Irawan > > Pull request: https://github.com/apache/hadoop/pull/84 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12927) Update netty-all to 4.0.34.Final
[ https://issues.apache.org/jira/browse/HADOOP-12927?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723073#comment-15723073 ] Ted Yu commented on HADOOP-12927: - We can upgrade to 4.1.1.Final version. > Update netty-all to 4.0.34.Final > > > Key: HADOOP-12927 > URL: https://issues.apache.org/jira/browse/HADOOP-12927 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 2.7.2 >Reporter: Hendy Irawan > > Pull request: https://github.com/apache/hadoop/pull/84 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723071#comment-15723071 ] John Zhuge commented on HADOOP-13847: - Thanks [~xiaochen], [~yzhangal], and [~anthony.young-gar...@cloudera.com]. > KMSWebApp should close KeyProviderCryptoExtension > - > > Key: HADOOP-13847 > URL: https://issues.apache.org/jira/browse/HADOOP-13847 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Anthony Young-Garner >Assignee: John Zhuge > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13847.001.patch, HADOOP-13847.002.patch > > > KeyProviderCryptoExtension should be closed in KMSWebApp.contextDestroyed so > that all KeyProviders are also closed. See related HADOOP-13838. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13865) add tools to classpath by default in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15723025#comment-15723025 ] Steve Loughran commented on HADOOP-13865: - Really? That's very unexpected. Which version of Hive are you using? Because I'm running spark code with Hive 1.2.1 on the CP, and I can confirm, there is no DistCP in there. Which means either it was in a much older version, or someone has gone and added it as a dependency. > add tools to classpath by default in branch-2 > - > > Key: HADOOP-13865 > URL: https://issues.apache.org/jira/browse/HADOOP-13865 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 2.8.0, 2.7.3 >Reporter: Fei Hui >Assignee: Fei Hui > Attachments: HADOOP-13865-branch-2.001.patch > > > When I run hive queries, I get errors as follows: > java.lang.NoClassDefFoundError: org/apache/hadoop/tools/DistCpOptions > ... > Other hadoop apps that use hadoop tools classes may get > similar errors -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722972#comment-15722972 ] Akira Ajisaka commented on HADOOP-13835: +1, thanks [~vvasudev] for updating the patch. > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch, HADOOP-13835.007.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722952#comment-15722952 ] Hadoop QA commented on HADOOP-13835: - *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| 0 | mvndep | 0m 16s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 52s | trunk passed |
| +1 | compile | 9m 34s | trunk passed |
| +1 | mvnsite | 9m 41s | trunk passed |
| +1 | mvneclipse | 1m 15s | trunk passed |
| +1 | javadoc | 4m 21s | trunk passed |
| 0 | mvndep | 0m 15s | Maven dependency ordering for patch |
| +1 | mvninstall | 7m 35s | the patch passed |
| +1 | compile | 9m 13s | the patch passed |
| +1 | cc | 9m 13s | the patch passed |
| +1 | javac | 9m 13s | the patch passed |
| +1 | mvnsite | 9m 29s | the patch passed |
| +1 | mvneclipse | 1m 5s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 3s | The patch has no ill-formed XML file. |
| +1 | javadoc | 4m 22s | the patch passed |
| -1 | unit | 108m 16s | root in the patch failed. |
| -1 | asflicense | 0m 34s | The patch generated 2 ASF License warnings. |
| | | 173m 58s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.yarn.server.timeline.webapp.TestTimelineWebServices |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13835 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841761/HADOOP-13835.007.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml cc |
| uname | Linux c8d75a4ea6ea 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f885160 |
| Default Java | 1.8.0_111 |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/11198/artifact/patchprocess/patch-unit-root.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/11198/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/11198/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hadoop-common-project/hadoop-common hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-nativetask . U: . |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/11198/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated. > Move Google Test Framework code from mapreduce to hadoop-common >
[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722930#comment-15722930 ] Aaron Fabbri commented on HADOOP-13075: --- I'm interested in getting this stuff in. We can probably create a new patch based on latest branch-2 code, and do some testing, if that helps. > Add support for SSE-KMS and SSE-C in s3a filesystem > --- > > Key: HADOOP-13075 > URL: https://issues.apache.org/jira/browse/HADOOP-13075 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Andrew Olson >Assignee: Federico Czerwinski > > S3 provides 3 types of server-side encryption [1], > * SSE-S3 (Amazon S3-Managed Keys) [2] > * SSE-KMS (AWS KMS-Managed Keys) [3] > * SSE-C (Customer-Provided Keys) [4] > Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 > (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. > With native support in aws-java-sdk already available it should be fairly > straightforward [6],[7] to support the other two types of SSE with some > additional fs.s3a configuration properties. > [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html > [2] > http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html > [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html > [4] > http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html > [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html > [6] > http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java > [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
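Once implemented, opting into the additional SSE modes would plausibly mirror the existing SSE-S3 switch. The following core-site.xml fragment is a hypothetical sketch: only `fs.s3a.server-side-encryption-algorithm` (introduced by HADOOP-10568, value `AES256` for SSE-S3) is an existing property; the `SSE-KMS` value and the key-carrying property name below are illustrative assumptions, not the final API.

```xml
<!-- Existing property (HADOOP-10568); today only AES256 (SSE-S3) is valid. -->
<property>
  <name>fs.s3a.server-side-encryption-algorithm</name>
  <value>SSE-KMS</value> <!-- hypothetical new value alongside AES256 -->
</property>

<!-- Hypothetical property: SSE-KMS needs a KMS key ARN (and SSE-C would
     need customer key material) carried in some such configuration slot. -->
<property>
  <name>fs.s3a.server-side-encryption.key</name>
  <value>arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID</value>
</property>
```

Keeping both modes behind the one algorithm property, with a separate key property, matches how the aws-java-sdk distinguishes the request: SSE-KMS and SSE-C differ only in which key parameters accompany the request, per the AWS documentation linked in the issue.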
[jira] [Updated] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13847: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.8.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-2 and branch-2.8. Thanks [~anthony.young-gar...@cloudera.com] for reporting this issue, [~jzhuge] for the fix and [~yzhangal] for the review! > KMSWebApp should close KeyProviderCryptoExtension > - > > Key: HADOOP-13847 > URL: https://issues.apache.org/jira/browse/HADOOP-13847 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Anthony Young-Garner >Assignee: John Zhuge > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13847.001.patch, HADOOP-13847.002.patch > > > KeyProviderCryptoExtension should be closed in KMSWebApp.contextDestroyed so > that all KeyProviders are also closed. See related HADOOP-13838. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13847) KMSWebApp should close KeyProviderCryptoExtension
[ https://issues.apache.org/jira/browse/HADOOP-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722814#comment-15722814 ] Xiao Chen commented on HADOOP-13847: +1, committing this. > KMSWebApp should close KeyProviderCryptoExtension > - > > Key: HADOOP-13847 > URL: https://issues.apache.org/jira/browse/HADOOP-13847 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Anthony Young-Garner >Assignee: John Zhuge > Attachments: HADOOP-13847.001.patch, HADOOP-13847.002.patch > > > KeyProviderCryptoExtension should be closed in KMSWebApp.contextDestroyed so > that all KeyProviders are also closed. See related HADOOP-13838. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
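The shape of the fix can be sketched without the real KMS classes. In this minimal, self-contained illustration the stub names are ours, not Hadoop's: the webapp holds a Closeable provider and closes it during context teardown, so that the KeyProviders it wraps get closed as well.

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch only: stand-ins for KMSWebApp and KeyProviderCryptoExtension.
public class KmsShutdownSketch {

    // Stand-in for KeyProviderCryptoExtension, which is Closeable and
    // propagates close() to the KeyProviders it wraps.
    static final class StubCryptoExtension implements Closeable {
        final AtomicBoolean closed = new AtomicBoolean(false);
        @Override
        public void close() throws IOException {
            closed.set(true);  // real code: close the underlying KeyProviders
        }
    }

    static final StubCryptoExtension PROVIDER = new StubCryptoExtension();

    // Mirrors what KMSWebApp.contextDestroyed should do on shutdown.
    static void contextDestroyed() {
        try {
            PROVIDER.close();
        } catch (IOException e) {
            // Log and keep shutting down; close() must not abort teardown.
        }
    }

    public static void main(String[] args) {
        contextDestroyed();
        System.out.println("provider closed: " + PROVIDER.closed.get());
    }
}
```

The point of routing shutdown through close() rather than letting the objects be garbage-collected is that KeyProviders may hold caches or connections that need an explicit release (see the related HADOOP-13838).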
[jira] [Commented] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies
[ https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722734#comment-15722734 ] Sean Busbey commented on HADOOP-11804: -- this weekend I found an issue while setting up some integration tests. should have an update later today. > POC Hadoop Client w/o transitive dependencies > - > > Key: HADOOP-11804 > URL: https://issues.apache.org/jira/browse/HADOOP-11804 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Sean Busbey >Assignee: Sean Busbey > Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, > HADOOP-11804.2.patch, HADOOP-11804.3.patch, HADOOP-11804.4.patch, > HADOOP-11804.5.patch, HADOOP-11804.6.patch, HADOOP-11804.7.patch, > HADOOP-11804.8.patch, HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz > > > make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to > talk with a Hadoop cluster without seeing any of the implementation > dependencies. > see proposal on parent for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13865) add tools to classpath by default in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722602#comment-15722602 ]

Allen Wittenauer commented on HADOOP-13865:
-------------------------------------------

bq. Will it cause any problems for adding tool jars to classpath?

Yes, it does. There are reasons why this isn't being done already. This is why in Hadoop 3.x the way the shell scripts handle the hadoop tools was completely revamped (see HADOOP_OPTIONAL_TOOLS and associated code). There are no "add the entire directory to the classpath" bits anymore. In branch-2, users can add whatever jars they want to the default classpath by modifying various environment variables in hadoop-env.sh. Rather than having us force this upon them, they can opt into whatever level of pain they can tolerate.

> add tools to classpath by default in branch-2
> ---------------------------------------------
>
> Key: HADOOP-13865
> URL: https://issues.apache.org/jira/browse/HADOOP-13865
> Project: Hadoop Common
> Issue Type: Bug
> Components: scripts
> Affects Versions: 2.8.0, 2.7.3
> Reporter: Fei Hui
> Assignee: Fei Hui
> Attachments: HADOOP-13865-branch-2.001.patch
>
> When I run hive queries, I get errors as follows:
> java.lang.NoClassDefFoundError: org/apache/hadoop/tools/DistCpOptions
> ...
> Other hadoop apps which use the hadoop tools classes may get similar errors.
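The two mechanisms described above can be sketched side by side as hadoop-env.sh fragments; the paths and tool names are illustrative, not a recommendation:

```shell
# hadoop-env.sh sketch (paths and tool names illustrative).

# Hadoop 3.x: opt individual tools onto the classpath instead of adding a
# whole directory (see HADOOP_OPTIONAL_TOOLS and the revamped shell scripts).
export HADOOP_OPTIONAL_TOOLS="hadoop-aws,hadoop-azure"

# branch-2: users append exactly the tools jars they want themselves.
export HADOOP_CLASSPATH="${HADOOP_HOME}/share/hadoop/tools/lib/*:${HADOOP_CLASSPATH}"
```

The design difference is that the 3.x form names the tools to enable and lets the scripts resolve the jars, while the branch-2 form puts the burden (and the blast radius) of classpath construction entirely on the user.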
[jira] [Updated] (HADOOP-13865) add tools to classpath by default in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13865: -- Resolution: Duplicate Status: Resolved (was: Patch Available) Closing this as a duplicate of HADOOP-12721. > add tools to classpath by default in branch-2 > - > > Key: HADOOP-13865 > URL: https://issues.apache.org/jira/browse/HADOOP-13865 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 2.8.0, 2.7.3 >Reporter: Fei Hui >Assignee: Fei Hui > Attachments: HADOOP-13865-branch-2.001.patch > > > when i run hive queries, i get errors as follow > java.lang.NoClassDefFoundError: org/apache/hadoop/tools/DistCpOptions > ... > Maybe run other hadoop apps which using hadoop tools classes, will get > similar erros -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11804) POC Hadoop Client w/o transitive dependencies
[ https://issues.apache.org/jira/browse/HADOOP-11804?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Busbey updated HADOOP-11804: - Status: In Progress (was: Patch Available) > POC Hadoop Client w/o transitive dependencies > - > > Key: HADOOP-11804 > URL: https://issues.apache.org/jira/browse/HADOOP-11804 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Sean Busbey >Assignee: Sean Busbey > Attachments: HADOOP-11804.1.patch, HADOOP-11804.10.patch, > HADOOP-11804.2.patch, HADOOP-11804.3.patch, HADOOP-11804.4.patch, > HADOOP-11804.5.patch, HADOOP-11804.6.patch, HADOOP-11804.7.patch, > HADOOP-11804.8.patch, HADOOP-11804.9.patch, hadoop-11804-client-test.tar.gz > > > make a hadoop-client-api and hadoop-client-runtime that i.e. HBase can use to > talk with a Hadoop cluster without seeing any of the implementation > dependencies. > see proposal on parent for details. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Varun Vasudev updated HADOOP-13835: --- Attachment: HADOOP-13835.007.patch Fix path for system dirs in CMakeLists.txt > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch, HADOOP-13835.007.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13786) add output committer which uses s3guard for consistent commits to S3
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722345#comment-15722345 ] Steve Loughran commented on HADOOP-13786: - SPARK-18512 highlights a consistency problem surfacing during the mergePaths treewalk > add output committer which uses s3guard for consistent commits to S3 > > > Key: HADOOP-13786 > URL: https://issues.apache.org/jira/browse/HADOOP-13786 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran > > A goal of this code is "support O(1) commits to S3 repositories in the > presence of failures". Implement it, including whatever is needed to > demonstrate the correctness of the algorithm. (that is, assuming that s3guard > provides a consistent view of the presence/absence of blobs, show that we can > commit directly). > I consider ourselves free to expose the blobstore-ness of the s3 output > streams (ie. not visible until the close()), if we need to use that to allow > us to abort commit operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13786) add output committer which uses s3guard for consistent commits to S3
[ https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13786: Summary: add output committer which uses s3guard for consistent commits to S3 (was: add output committer which uses s3guard for consistent O(1) commits to S3) > add output committer which uses s3guard for consistent commits to S3 > > > Key: HADOOP-13786 > URL: https://issues.apache.org/jira/browse/HADOOP-13786 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0-alpha2 >Reporter: Steve Loughran >Assignee: Steve Loughran > > A goal of this code is "support O(1) commits to S3 repositories in the > presence of failures". Implement it, including whatever is needed to > demonstrate the correctness of the algorithm. (that is, assuming that s3guard > provides a consistent view of the presence/absence of blobs, show that we can > commit directly). > I consider ourselves free to expose the blobstore-ness of the s3 output > streams (ie. not visible until the close()), if we need to use that to allow > us to abort commit operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13865) add tools to classpath by default in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722238#comment-15722238 ]

Fei Hui commented on HADOOP-13865:
----------------------------------

Hi [~ste...@apache.org]. It's not only the CLI; user applications also use the hadoop tools classes from their own source code. Hadoop-aws, hadoop-azure and the other tools modules ship with hadoop, so making them easy to use is worthwhile. If we add tools to the classpath, then CLASSPATH=${HADOOP_HOME}/share/hadoop/tools/*:share/hadoop/tools/lib/*:$CLASSPATH. Maybe the CP is not so long.

> add tools to classpath by default in branch-2
> ---------------------------------------------
>
> Key: HADOOP-13865
> URL: https://issues.apache.org/jira/browse/HADOOP-13865
> Project: Hadoop Common
> Issue Type: Bug
> Components: scripts
> Affects Versions: 2.8.0, 2.7.3
> Reporter: Fei Hui
> Assignee: Fei Hui
> Attachments: HADOOP-13865-branch-2.001.patch
>
> When I run hive queries, I get errors as follows:
> java.lang.NoClassDefFoundError: org/apache/hadoop/tools/DistCpOptions
> ...
> Other hadoop apps which use the hadoop tools classes may get similar errors.
[jira] [Comment Edited] (HADOOP-13865) add tools to classpath by default in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722238#comment-15722238 ]

Fei Hui edited comment on HADOOP-13865 at 12/5/16 1:14 PM:
-----------------------------------------------------------

Hi [~ste...@apache.org]. It's not only the CLI; user applications also use the hadoop tools classes from their own source code. Hadoop-aws, hadoop-azure and the other tools modules ship with hadoop, so making them easy to use is worthwhile. If we add tools to the classpath, then CLASSPATH=${HADOOP_HOME}/share/hadoop/tools/*:${HADOOP_HOME}/share/hadoop/tools/lib/*:$CLASSPATH. Maybe the CP is not so long.

was (Author: ferhui): hi [~ste...@apache.org] Not only on the CLI, but also user applications using hadoop tools in their source code. Hadoop-aws ,hadoop-azure and other tools have been involved in hadoop, its meaningful for users using hadoop easily If add tools to classpath, then CLASSPATH=${HADOOP_HOME}/share/hadoop/tools/*:share/hadoop/tools/lib/*:$CLASSPATH. Maybe CP is not so long.

> add tools to classpath by default in branch-2
> ---------------------------------------------
>
> Key: HADOOP-13865
> URL: https://issues.apache.org/jira/browse/HADOOP-13865
> Project: Hadoop Common
> Issue Type: Bug
> Components: scripts
> Affects Versions: 2.8.0, 2.7.3
> Reporter: Fei Hui
> Assignee: Fei Hui
> Attachments: HADOOP-13865-branch-2.001.patch
>
> When I run hive queries, I get errors as follows:
> java.lang.NoClassDefFoundError: org/apache/hadoop/tools/DistCpOptions
> ...
> Other hadoop apps which use the hadoop tools classes may get similar errors.
[jira] [Commented] (HADOOP-13865) add tools to classpath by default in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15722174#comment-15722174 ]

Fei Hui commented on HADOOP-13865:
----------------------------------

Hi [~ste...@apache.org]. In the Hive source code, Hadoop23Shims.java calls DistCp like this:

{code}
public boolean runDistCp(Path src, Path dst, Configuration conf) throws IOException {
  DistCpOptions options = new DistCpOptions(Collections.singletonList(src), dst);
  options.setSyncFolder(true);
  options.setSkipCRC(true);
  options.preserve(FileAttribute.BLOCKSIZE);
  try {
    conf.setBoolean("mapred.mapper.new-api", true);
    DistCp distcp = new DistCp(conf, options);
    distcp.execute();
    return true;
  } catch (Exception e) {
    throw new IOException("Cannot execute DistCp process: " + e, e);
  } finally {
    conf.setBoolean("mapred.mapper.new-api", false);
  }
}
{code}

So I encounter the error 'java.lang.NoClassDefFoundError: org/apache/hadoop/tools/DistCpOptions'. I can solve the problem by setting HADOOP_CLASS. Because many users may encounter this problem and spend much time solving it, I opened this issue and submitted a patch.

> add tools to classpath by default in branch-2
> ---------------------------------------------
>
> Key: HADOOP-13865
> URL: https://issues.apache.org/jira/browse/HADOOP-13865
> Project: Hadoop Common
> Issue Type: Bug
> Components: scripts
> Affects Versions: 2.8.0, 2.7.3
> Reporter: Fei Hui
> Assignee: Fei Hui
> Attachments: HADOOP-13865-branch-2.001.patch
>
> When I run hive queries, I get errors as follows:
> java.lang.NoClassDefFoundError: org/apache/hadoop/tools/DistCpOptions
> ...
> Other hadoop apps which use the hadoop tools classes may get similar errors.
[jira] [Commented] (HADOOP-13863) Hadoop - Azure: Add a new SAS key mode for WASB.
[ https://issues.apache.org/jira/browse/HADOOP-13863?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721980#comment-15721980 ]

Steve Loughran commented on HADOOP-13863:
-----------------------------------------

+ [~lmccay]

> Hadoop - Azure: Add a new SAS key mode for WASB.
> ------------------------------------------------
>
> Key: HADOOP-13863
> URL: https://issues.apache.org/jira/browse/HADOOP-13863
> Project: Hadoop Common
> Issue Type: Improvement
> Components: azure, fs/azure
> Affects Versions: 2.8.0
> Reporter: Dushyanth
> Assignee: Dushyanth
> Attachments: HADOOP-13863.001.patch, WASB-SAS Key Mode-Design Proposal.pdf
>
> The current implementation of WASB only supports Azure storage keys and SAS keys being provided via org.apache.hadoop.conf.Configuration, which results in these secrets residing in the same address space as the WASB process and granting complete access to the Azure storage account and its containers. Added to the fact that WASB does not inherently support ACLs, WASB in its current implementation cannot be securely used in environments like a secure hadoop cluster. This JIRA is created to add a new mode in WASB which operates on Azure Storage SAS keys, which can provide fine-grained, time-limited access to containers and blobs, providing a segue into supporting WASB for secure hadoop clusters.
> More details about the issue and the proposal are provided in the design proposal document.
[jira] [Commented] (HADOOP-13865) add tools to classpath by default in branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721974#comment-15721974 ]

Steve Loughran commented on HADOOP-13865:
-----------------------------------------

I have mixed feelings here. I like the bits on the CLI, and indeed we (Hortonworks) stick some more of the hadoop-aws and hadoop-azure stuff on our CP, albeit by copying the jars to somewhere on that path.

At the same time, the fact that we bleed so much of our CP into downstream programs makes updating anything a dangerous minefield; the size of that CP means it's inevitable that we break things whenever we do, so we are trapped into shipping out-of-date stuff (Guava, Jackson) to minimise the pain (see HADOOP-9991).

Looking at your specific problem: distcp runs via the `hadoop distcp` command. Why exactly were you trying to use it from Hive?

> add tools to classpath by default in branch-2
> ---------------------------------------------
>
> Key: HADOOP-13865
> URL: https://issues.apache.org/jira/browse/HADOOP-13865
> Project: Hadoop Common
> Issue Type: Bug
> Components: scripts
> Affects Versions: 2.8.0, 2.7.3
> Reporter: Fei Hui
> Assignee: Fei Hui
> Attachments: HADOOP-13865-branch-2.001.patch
>
> When I run hive queries, I get errors as follows:
> java.lang.NoClassDefFoundError: org/apache/hadoop/tools/DistCpOptions
> ...
> Other hadoop apps which use the hadoop tools classes may get similar errors.
[jira] [Commented] (HADOOP-13835) Move Google Test Framework code from mapreduce to hadoop-common
[ https://issues.apache.org/jira/browse/HADOOP-13835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15721866#comment-15721866 ] Akira Ajisaka commented on HADOOP-13835: {code} # add gtest as system library to suppress gcc warnings include_directories(SYSTEM ${GTEST_SRC_DIR}/gtest/include) {code} {{$\{GTEST_SRC_DIR\}/gtest/include}} should be {{$\{GTEST_SRC_DIR\}/include}}? > Move Google Test Framework code from mapreduce to hadoop-common > --- > > Key: HADOOP-13835 > URL: https://issues.apache.org/jira/browse/HADOOP-13835 > Project: Hadoop Common > Issue Type: Task >Reporter: Varun Vasudev >Assignee: Varun Vasudev > Attachments: HADOOP-13835.001.patch, HADOOP-13835.002.patch, > HADOOP-13835.003.patch, HADOOP-13835.004.patch, HADOOP-13835.005.patch, > HADOOP-13835.006.patch > > > The mapreduce project has Google Test Framework code to allow testing of > native libraries. This should be moved to hadoop-common so that other > projects can use it as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org