[jira] [Updated] (HADOOP-13218) Migrate other Hadoop side tests to prepare for removing WritableRPCEngine
[ https://issues.apache.org/jira/browse/HADOOP-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-13218: --- Assignee: Wei Zhou (was: Kai Zheng) > Migrate other Hadoop side tests to prepare for removing WritableRPCEngine > - > > Key: HADOOP-13218 > URL: https://issues.apache.org/jira/browse/HADOOP-13218 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Wei Zhou > Attachments: HADOOP-13218-v01.patch > > > The patch for HADOOP-12579 contains a lot of work migrating the remaining Hadoop-side > tests to the new RPC engine, plus some nice cleanups. HADOOP-12579 will be > reverted to allow some time for the related YARN/MapReduce side changes; this issue is > opened to recommit most of the test-related work from HADOOP-12579 for easier > tracking and maintenance, as the other sub-tasks did. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13570) Hadoop Swift driver should use new Apache httpclient
[ https://issues.apache.org/jira/browse/HADOOP-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen He updated HADOOP-13570: - Summary: Hadoop Swift driver should use new Apache httpclient (was: Hadoop swift Driver should use new Apache httpclient) > Hadoop Swift driver should use new Apache httpclient > > > Key: HADOOP-13570 > URL: https://issues.apache.org/jira/browse/HADOOP-13570 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/swift >Affects Versions: 2.7.3, 2.6.4 >Reporter: Chen He > > Current Hadoop openstack module is still using apache httpclient v1.x. It is > too old. We need to update it to a higher version to catch up in performance.
[jira] [Created] (HADOOP-13570) Hadoop swift Driver should use new Apache httpclient
Chen He created HADOOP-13570: Summary: Hadoop swift Driver should use new Apache httpclient Key: HADOOP-13570 URL: https://issues.apache.org/jira/browse/HADOOP-13570 Project: Hadoop Common Issue Type: New Feature Affects Versions: 2.6.4, 2.7.3 Reporter: Chen He Current Hadoop openstack module is still using apache httpclient v1.x. It is too old. We need to update it to a higher version to catch up in performance.
[jira] [Updated] (HADOOP-13570) Hadoop swift Driver should use new Apache httpclient
[ https://issues.apache.org/jira/browse/HADOOP-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen He updated HADOOP-13570: - Component/s: fs/swift > Hadoop swift Driver should use new Apache httpclient > > > Key: HADOOP-13570 > URL: https://issues.apache.org/jira/browse/HADOOP-13570 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/swift >Affects Versions: 2.7.3, 2.6.4 >Reporter: Chen He > > Current Hadoop openstack module is still using apache httpclient v1.x. It is > too old. We need to update it to a higher version to catch up in performance.
[jira] [Updated] (HADOOP-13570) Hadoop swift Driver should use new Apache httpclient
[ https://issues.apache.org/jira/browse/HADOOP-13570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chen He updated HADOOP-13570: - Description: Current Hadoop openstack module is still using apache httpclient v1.x. It is too old. We need to update it to a higher version to catch up in performance. (was: Current Hadoop openstack module is still using apache httpclient v1.x. It is too old. We need to update it to a higher version to catch up the performance.) > Hadoop swift Driver should use new Apache httpclient > > > Key: HADOOP-13570 > URL: https://issues.apache.org/jira/browse/HADOOP-13570 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/swift >Affects Versions: 2.7.3, 2.6.4 >Reporter: Chen He > > Current Hadoop openstack module is still using apache httpclient v1.x. It is > too old. We need to update it to a higher version to catch up in performance.
[jira] [Commented] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
[ https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454289#comment-15454289 ] Hadoop QA commented on HADOOP-13375: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 13s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 33s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13375 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826548/HADOOP-13375.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e1604d91ea49 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6f4b0d3 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10434/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10434/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky > -- > > Key: HADOOP-13375 > URL: https://issues.apache.org/jira/browse/HADOOP-13375 > Project: Hadoop Common > Issue Type: Bug > Components: security, test >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Weiwei Yang > Attachments: HADOOP-13375.001.patch, HADOOP-13375.002.patch, > HADOOP-13375.003.patch, HADOOP-13375.004.patch, HADOOP-13375.005.patch,
[jira] [Updated] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
[ https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-13375: - Attachment: HADOOP-13375.006.patch > o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky > -- > > Key: HADOOP-13375 > URL: https://issues.apache.org/jira/browse/HADOOP-13375 > Project: Hadoop Common > Issue Type: Bug > Components: security, test >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Weiwei Yang > Attachments: HADOOP-13375.001.patch, HADOOP-13375.002.patch, > HADOOP-13375.003.patch, HADOOP-13375.004.patch, HADOOP-13375.005.patch, > HADOOP-13375.006.patch > > > h5. Error Message > bq. expected:<1> but was:<0> > h5. Stacktrace > {quote} > java.lang.AssertionError: expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.security.TestGroupsCaching.testBackgroundRefreshCounters(TestGroupsCaching.java:638) > {quote}
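The {{expected:<1> but was:<0>}} failure quoted above is the classic symptom of asserting on a counter that a background refresh thread updates asynchronously: the assertion sometimes runs before the refresh completes. A common remedy for this class of flakiness is to poll the condition until a timeout instead of asserting once. The sketch below uses a hypothetical generic helper (not Hadoop's actual test utilities or the actual patch) to illustrate the pattern:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

public class WaitForCondition {

    // Poll `check` every `intervalMs` until it returns true, or fail once
    // `timeoutMs` elapses. Unchecked exceptions keep call sites simple.
    public static void waitFor(BooleanSupplier check, long intervalMs, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!check.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new IllegalStateException(
                    "condition not met within " + timeoutMs + " ms");
            }
            try {
                Thread.sleep(intervalMs);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IllegalStateException("interrupted while waiting", e);
            }
        }
    }

    public static void main(String[] args) {
        AtomicInteger backgroundRefreshSuccess = new AtomicInteger(0);
        // Simulate a background refresh that completes after a short delay.
        Thread refresher = new Thread(() -> {
            try {
                Thread.sleep(200);
            } catch (InterruptedException ignored) {
            }
            backgroundRefreshSuccess.incrementAndGet();
        });
        refresher.start();
        // An immediate assertEquals(1, backgroundRefreshSuccess.get()) here
        // would be flaky; polling makes the check deterministic.
        waitFor(() -> backgroundRefreshSuccess.get() == 1, 20, 5000);
        System.out.println("counter=" + backgroundRefreshSuccess.get());
    }
}
```

The trade-off is that a genuinely broken counter now fails after the timeout rather than instantly, but the test no longer depends on thread scheduling.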
[jira] [Commented] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
[ https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454217#comment-15454217 ] Weiwei Yang commented on HADOOP-13375: -- The error in mvninstall is (from component hadoop-yarn-server-timelineservice) {code} Failed to execute goal org.apache.maven.plugins:maven-remote-resources-plugin:1.5:process (default) on project hadoop-yarn-server-timelineservice: Error resolving project artifact: Could not transfer artifact sqlline:sqlline:pom:1.1.8 from/to repository.jboss.org {code} Is this a Jenkins issue? Let me submit a new (but identical) patch to trigger a new Jenkins job and see whether it happens again. > o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky > -- > > Key: HADOOP-13375 > URL: https://issues.apache.org/jira/browse/HADOOP-13375 > Project: Hadoop Common > Issue Type: Bug > Components: security, test >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Weiwei Yang > Attachments: HADOOP-13375.001.patch, HADOOP-13375.002.patch, > HADOOP-13375.003.patch, HADOOP-13375.004.patch, HADOOP-13375.005.patch > > > h5. Error Message > bq. expected:<1> but was:<0> > h5. Stacktrace > {quote} > java.lang.AssertionError: expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.security.TestGroupsCaching.testBackgroundRefreshCounters(TestGroupsCaching.java:638) > {quote}
[jira] [Commented] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
[ https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15454155#comment-15454155 ] Hadoop QA commented on HADOOP-13375: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 5m 40s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s{color} | {color:green} 
the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 13s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 50s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13375 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826542/HADOOP-13375.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 275a8f119c3c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6f4b0d3 | | Default Java | 1.8.0_101 | | mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/10433/artifact/patchprocess/branch-mvninstall-root.txt | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10433/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10433/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky > -- > > Key: HADOOP-13375 > URL: https://issues.apache.org/jira/browse/HADOOP-13375 > Project: Hadoop Common > Issue Type: Bug > Components: security, test >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Weiwei Yang >
[jira] [Updated] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
[ https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-13375: - Attachment: HADOOP-13375.005.patch > o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky > -- > > Key: HADOOP-13375 > URL: https://issues.apache.org/jira/browse/HADOOP-13375 > Project: Hadoop Common > Issue Type: Bug > Components: security, test >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Weiwei Yang > Attachments: HADOOP-13375.001.patch, HADOOP-13375.002.patch, > HADOOP-13375.003.patch, HADOOP-13375.004.patch, HADOOP-13375.005.patch > > > h5. Error Message > bq. expected:<1> but was:<0> > h5. Stacktrace > {quote} > java.lang.AssertionError: expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.security.TestGroupsCaching.testBackgroundRefreshCounters(TestGroupsCaching.java:638) > {quote}
[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453734#comment-15453734 ] Aaron Fabbri commented on HADOOP-13345: --- Thank you for the feedback [~eddyxu]. Let me give an example why I think the {{fullyCached}} or {{isAuthoritative}} flag is required for return value from {{MetadataStore.listChildren()}}. Assume we have an existing s3 bucket that contains these files: /a/b/file0 /a/b/file1 Now, assume we start up a Hadoop cluster, with s3guard configured for the {{MetadataStore}} to be authoritative, and do the following operations: create(/a/b/file2) listStatus(/a/b) In this case we have to query both the MetadataStore, and the s3 backend, as /a/b/file2 visibility may be subject to eventual consistency. Also the MetadataStore only knows about /a/b/file2, so the client has to consult s3 to learn about file0 and file1. In the listStatus() above, {{MetadataStore.listChildren(/a/b)}} will return {{(("/a/b/file2"), isAuthoritative=false)}}, since the MetadataStore did not get a {{put()}} with {{isAuthoritative=true}}, nor did it see a {{mkdir(/a/b)}} happen. Two examples where {{MetadataStore.listChildren()}} would return a result with {{isAuthoritative=true}}: 1. mkdir(/a/b/c) create(/a/b/c/fileA) listStatus(/a/b/c) Here, since the metadata store saw the creation of /a/b/c, it knows that it has observed all creations and deletions inside the /a/b/c directory. 2. Extending the original example: Existed before cluster startup: /a/b/file0 /a/b/file1 Then with cluster, we see: create(/a/b/file2) listStatus(/a/b) listStatus(/a/b) The first call to listStatus(/a/b) will have to fetch the full directory contents from s3 since {{MetadataStore.listChildren(/a/b)}} will return {{isAuthoritative=false}}. Once the client gets the full listing from s3, it can call {{MetadataStore.put(('a/b/file0', '/a/b/file1', '/a/b/file2'), isAuthoritative=true)}}. 
*Now*, the MetadataStore has been told it has full contents of /a/b, and the second call to listStatus(/a/b) above will see the MetadataStore return {{('a/b/file0', '/a/b/file1', '/a/b/file2'), isAuthoritative=true)}} > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model.
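The per-listing {{isAuthoritative}} flag argued for in the comment above can be sketched as client-side merge logic: trust the MetadataStore alone only when it claims full knowledge of the directory, otherwise union its entries with the (possibly eventually consistent) S3 listing. This is an illustrative sketch with hypothetical types, not Hadoop's actual MetadataStore API, which was still under design in this discussion:

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.Set;
import java.util.TreeSet;

public class ListStatusSketch {

    // Hypothetical listing result: child paths plus the per-listing
    // authoritative flag discussed in the comment.
    static class DirListing {
        final Set<String> children;
        final boolean isAuthoritative;

        DirListing(Set<String> children, boolean isAuthoritative) {
            this.children = children;
            this.isAuthoritative = isAuthoritative;
        }
    }

    // If the store saw every create/delete under this directory, its answer
    // is complete; otherwise merge with S3, which may lag on recent writes.
    static Set<String> listStatus(DirListing fromStore, Set<String> fromS3) {
        if (fromStore.isAuthoritative) {
            return fromStore.children;
        }
        Set<String> merged = new TreeSet<>(fromS3);  // baseline from S3
        merged.addAll(fromStore.children);           // plus recent writes the store knows
        return merged;
    }

    public static void main(String[] args) {
        // The example from the comment: file0/file1 pre-existed in S3,
        // file2 was just created through the MetadataStore.
        Set<String> s3 = new TreeSet<>(Arrays.asList("/a/b/file0", "/a/b/file1"));
        DirListing store = new DirListing(
            new TreeSet<>(Collections.singleton("/a/b/file2")), false);
        System.out.println(listStatus(store, s3));
        // prints [/a/b/file0, /a/b/file1, /a/b/file2]

        // After the full listing is put() back with isAuthoritative=true,
        // the store can answer alone, skipping the S3 round trip.
        DirListing full = new DirListing(
            new TreeSet<>(Arrays.asList("/a/b/file0", "/a/b/file1", "/a/b/file2")), true);
        System.out.println(listStatus(full, Collections.<String>emptySet()));
        // prints [/a/b/file0, /a/b/file1, /a/b/file2]
    }
}
```

Note how a store-wide flag could not express the first case: the store is authoritative for /a/b/c (whose mkdir it observed) while non-authoritative for /a/b, so the flag has to travel with each listing result.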
[jira] [Comment Edited] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453539#comment-15453539 ] Lei (Eddy) Xu edited comment on HADOOP-13345 at 8/31/16 11:14 PM: -- Hi, [~fabbri] thanks for these great suggestions here. One question here is: * Can we consider {{fullycache.directories == true iff metadatastore.allow.authoritative == true}}? If we combine them together, case 2 of {{fullycache.directories}} should not happen. bq. as the MetadataStore will always return results marked as non-authoritative. If we have this flag, we might not need to mark results as well. So I think the code like following can make the things simpler: {code} List subFiles = metadataStore.get(path); if (!metadataStore.isAuthoritive()) { List s3Files = s3.listDir(path); // merge subfile and s3Files... } {code} What do you think? was (Author: eddyxu): Hi, [~fabbri] thanks for these great suggestions here. One question here is: * Can we consider {{fullycache.directories == true iff metadatastore.allow.authoritative == true}}? If we combine them together, case 2 of {{fullycache.directories}} should not happen. bq. as the MetadataStore will always return results marked as non-authoritative. If we have this flag, we might not need to mark results as well. So I think the code like following can make the things simpler: {code} List subFiles = metadataStore.get(path); if (metadataStore.isAuthoritive()) { List s3Files = s3.listDir(path); // merge subfile and s3Files... } {code} What do you think? 
> S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model.
[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453539#comment-15453539 ] Lei (Eddy) Xu commented on HADOOP-13345: Hi, [~fabbri] thanks for these great suggestions here. One question here is: * Can we consider {{fullycache.directories == true iff metadatastore.allow.authoritative == true}}? If we combine them together, case 2 of {{fullycache.directories}} should not happen. bq. as the MetadataStore will always return results marked as non-authoritative. If we have this flag, we might not need to mark results as well. So I think the code like following can make the things simpler: {code} List subFiles = metadataStore.get(path); if (metadataStore.isAuthoritive()) { List s3Files = s3.listDir(path); // merge subfile and s3Files... } {code} What do you think? > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model.
[jira] [Commented] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request
[ https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453521#comment-15453521 ] Hadoop QA commented on HADOOP-13565: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 12s{color} | {color:orange} hadoop-common-project/hadoop-auth: The patch generated 1 new + 28 unchanged - 0 fixed = 29 total (was 28) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 3m 15s{color} | {color:red} hadoop-auth in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 28m 38s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.authentication.util.TestZKSignerSecretProvider | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13565 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826502/HADOOP-13565.00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 67d7f472045a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 85bab5f | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10432/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-auth.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10432/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-auth.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10432/testReport/ | | modules | C: hadoop-common-project/hadoop-auth U: hadoop-common-project/hadoop-auth | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10432/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453498#comment-15453498 ] Hadoop QA commented on HADOOP-13535: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 28s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 33s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 42s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13535 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826499/HADOOP-13535.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 66fbe7ec2d0a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 01721dd | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10431/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10431/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add jetty6 acceptor startup issue workaround to branch-2 > > > Key: HADOOP-13535 > URL: https://issues.apache.org/jira/browse/HADOOP-13535 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Wei-Chiu Chuang >Assignee: Min Shen > Attachments: HADOOP-13535.001.patch, HADOOP-13535.002.patch, >
[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request
[ https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-13565: Status: Patch Available (was: Open) > KerberosAuthenticationHandler#authenticate should not rebuild SPN based on > client request > - > > Key: HADOOP-13565 > URL: https://issues.apache.org/jira/browse/HADOOP-13565 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HADOOP-13565.00.patch > > > In KerberosAuthenticationHandler#authenticate, we use the canonicalized server > name derived from the HTTP request to build the server SPN and authenticate the client. > This can be problematic if the HTTP client/server are running from a > non-local Kerberos realm that the local realm has trust with (e.g., NN UI). > For example, > the server is running its HTTP endpoint using an SPN from the client realm: > hadoop.http.authentication.kerberos.principal > HTTP/_HOST@TEST.COM > When the client sends a request to the namenode at example@example.com with > http://NN.example.com:50070 from somehost.test@test.com, > the client talks to the KDC first and gets a service ticket > HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO > negotiation. > The authentication will end up with either a "no valid credential" error or a > checksum failure, depending on the HTTP client's name resolution or the HTTP > Host header specified by the browser. > The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} will > return an SPN with the local realm (HTTP/nn.example@example.com) regardless of > whether the server login SPN is from that domain. > The proposed fix is to use the default server login principal instead (by > passing null as the 1st parameter to gssManager.createCredential()). > This way we avoid any dependency on HTTP client behavior (Host header or name > resolution such as CNAME) or assumptions about the local realm. 
[jira] [Updated] (HADOOP-13565) KerberosAuthenticationHandler#authenticate should not rebuild SPN based on client request
[ https://issues.apache.org/jira/browse/HADOOP-13565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-13565: Attachment: HADOOP-13565.00.patch Attach an initial patch for discussion. > KerberosAuthenticationHandler#authenticate should not rebuild SPN based on > client request > - > > Key: HADOOP-13565 > URL: https://issues.apache.org/jira/browse/HADOOP-13565 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HADOOP-13565.00.patch > > > In KerberosAuthenticationHandler#authenticate, we use the canonicalized server > name derived from the HTTP request to build the server SPN and authenticate the client. > This can be problematic if the HTTP client/server are running from a > non-local Kerberos realm that the local realm has trust with (e.g., NN UI). > For example, > the server is running its HTTP endpoint using an SPN from the client realm: > hadoop.http.authentication.kerberos.principal > HTTP/_HOST@TEST.COM > When the client sends a request to the namenode at example@example.com with > http://NN.example.com:50070 from somehost.test@test.com, > the client talks to the KDC first and gets a service ticket > HTTP/NN1.example.com@TEST.COM to authenticate with the server via SPNEGO > negotiation. > The authentication will end up with either a "no valid credential" error or a > checksum failure, depending on the HTTP client's name resolution or the HTTP > Host header specified by the browser. > The root cause is that {{KerberosUtil.getServicePrincipal("HTTP", serverName)}} will > return an SPN with the local realm (HTTP/nn.example@example.com) regardless of > whether the server login SPN is from that domain. > The proposed fix is to use the default server login principal instead (by > passing null as the 1st parameter to gssManager.createCredential()). > This way we avoid any dependency on HTTP client behavior (Host header or name > resolution such as CNAME) or assumptions about the local realm. 
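The failure mode described in HADOOP-13565 can be illustrated with a small, purely hypothetical sketch (this is not the Hadoop source; `spnFromRequest` is an invented helper). Rebuilding the SPN from the client-supplied host name always attaches the *local* default realm, so it can never match a server that actually logged in from a trusted foreign realm; the proposed fix sidesteps this by passing null as the name argument to `GSSManager.createCredential()`, which selects the default (login) credential.

```java
// Hypothetical illustration of why deriving the SPN from the request's Host
// header plus the local default realm is fragile. The real helper in Hadoop
// (KerberosUtil.getServicePrincipal) behaves analogously: the realm that gets
// attached is the local one, not the server's actual login realm.
public class SpnDemo {
    // Invented stand-in: build an SPN from the HTTP Host header and the
    // locally configured default realm -- the behavior the patch moves away from.
    static String spnFromRequest(String hostHeader, String localDefaultRealm) {
        return "HTTP/" + hostHeader.toLowerCase() + "@" + localDefaultRealm;
    }

    public static void main(String[] args) {
        // The server's real login principal lives in the trusted foreign realm.
        String loginSpn = "HTTP/nn.example.com@TEST.COM";
        // But the handler derives the SPN using the local realm instead.
        String derived = spnFromRequest("NN.example.com", "EXAMPLE.COM");
        System.out.println(derived);                  // HTTP/nn.example.com@EXAMPLE.COM
        System.out.println(derived.equals(loginSpn)); // false -> authentication fails
    }
}
```

With the fix, no SPN is derived at all: `gssManager.createCredential(null, …)` acquires whatever principal the server logged in as, so the realm mismatch above cannot arise.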
[jira] [Updated] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Min Shen updated HADOOP-13535: -- Attachment: HADOOP-13535.003.patch > Add jetty6 acceptor startup issue workaround to branch-2 > > > Key: HADOOP-13535 > URL: https://issues.apache.org/jira/browse/HADOOP-13535 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Wei-Chiu Chuang >Assignee: Min Shen > Attachments: HADOOP-13535.001.patch, HADOOP-13535.002.patch, > HADOOP-13535.003.patch > > > After HADOOP-12765 is committed to branch-2, the handling of SSL connection > by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We > should consider adding the same workaround for SSL connection. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-13535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453356#comment-15453356 ] Wei-Chiu Chuang commented on HADOOP-13535: -- I don't think a unit test is required in this case. > Add jetty6 acceptor startup issue workaround to branch-2 > > > Key: HADOOP-13535 > URL: https://issues.apache.org/jira/browse/HADOOP-13535 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Wei-Chiu Chuang >Assignee: Min Shen > Attachments: HADOOP-13535.001.patch, HADOOP-13535.002.patch > > > After HADOOP-12765 is committed to branch-2, the handling of SSL connection > by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We > should consider adding the same workaround for SSL connection. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
[ https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453330#comment-15453330 ] Mingliang Liu commented on HADOOP-13375: The v4 patch looks good. +1 once the following minor comments are addressed/considered: # Should the latch be volatile? # {{if(latch != null && latch.getCount() > 0) {}} it seems we don't have to check {{getCount()}} before {{await()}} # {{// After 120ms all should have completed running}} can be deleted as it is no longer true in the current patch > o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky > -- > > Key: HADOOP-13375 > URL: https://issues.apache.org/jira/browse/HADOOP-13375 > Project: Hadoop Common > Issue Type: Bug > Components: security, test >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Weiwei Yang > Attachments: HADOOP-13375.001.patch, HADOOP-13375.002.patch, > HADOOP-13375.003.patch, HADOOP-13375.004.patch > > > h5. Error Message > bq. expected:<1> but was:<0> > h5. Stacktrace > {quote} > java.lang.AssertionError: expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.security.TestGroupsCaching.testBackgroundRefreshCounters(TestGroupsCaching.java:638) > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
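The second review comment above rests on a documented property of `java.util.concurrent.CountDownLatch`: `await()` on a latch whose count has already reached zero returns immediately, so a `getCount() > 0` guard adds nothing. A minimal sketch (names invented, not the patch's actual code):

```java
import java.util.concurrent.CountDownLatch;

// Demonstrates that guarding await() with getCount() > 0 is redundant:
// await() simply returns at once when the count is already zero.
public class LatchGuardDemo {
    static boolean awaitIfSet(CountDownLatch latch) throws InterruptedException {
        if (latch != null) {
            latch.await(); // no getCount() check needed; non-blocking when count == 0
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);
        done.countDown();                 // count drops to zero
        awaitIfSet(done);                 // returns immediately, does not block
        awaitIfSet(null);                 // null-safe as well
        System.out.println("count after await: " + done.getCount()); // 0
    }
}
```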
[jira] [Updated] (HADOOP-13365) Convert _OPTS to arrays to enable spaces in file paths
[ https://issues.apache.org/jira/browse/HADOOP-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13365: -- Summary: Convert _OPTS to arrays to enable spaces in file paths (was: Convert _OPTS to arrays) > Convert _OPTS to arrays to enable spaces in file paths > -- > > Key: HADOOP-13365 > URL: https://issues.apache.org/jira/browse/HADOOP-13365 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-13365-HADOOP-13341.00.patch > > > While we are mucking with all of the _OPTS variables, this is a good time to > convert them to arrays so that filesystems with spaces in them can be used. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13475) Adding Append Blob support for WASB
[ https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15453168#comment-15453168 ] Raul da Silva Martins commented on HADOOP-13475: [~cnauroth] gentle ping for a review and merge. Thank you! > Adding Append Blob support for WASB > --- > > Key: HADOOP-13475 > URL: https://issues.apache.org/jira/browse/HADOOP-13475 > Project: Hadoop Common > Issue Type: New Feature > Components: azure >Affects Versions: 2.7.1 >Reporter: Raul da Silva Martins >Assignee: Raul da Silva Martins >Priority: Critical > Attachments: 0001-Added-Support-for-Azure-AppendBlobs.patch, > HADOOP-13475.001.patch, HADOOP-13475.002.patch > > > Currently the WASB implementation of the HDFS interface does not support the > utilization of Azure AppendBlobs underneath. As owners of a large scale > service who intend to start writing to Append blobs, we need this support in > order to be able to keep using our HDI capabilities. > This JIRA is added to implement Azure AppendBlob support to WASB. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
[ https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452707#comment-15452707 ] Hadoop QA commented on HADOOP-13375: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 50s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 5s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 18s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13375 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826451/HADOOP-13375.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux e80962b71875 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 20ae1fa | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10430/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10430/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10430/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky > -- > > Key: HADOOP-13375 > URL: https://issues.apache.org/jira/browse/HADOOP-13375 > Project: Hadoop Common > Issue Type: Bug > Components: security, test >
[jira] [Updated] (HADOOP-13567) S3AFileSystem to override getStorageStatistics() and so serve up its statistics
[ https://issues.apache.org/jira/browse/HADOOP-13567?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13567: Summary: S3AFileSystem to override getStorageStatistics() and so serve up its statistics (was: S3AFileSystem to override getStoragetStatistics() and so serve up its statistics) > S3AFileSystem to override getStorageStatistics() and so serve up its > statistics > --- > > Key: HADOOP-13567 > URL: https://issues.apache.org/jira/browse/HADOOP-13567 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > Although S3AFileSystem collects lots of statistics, these aren't available > programmatically as {{getStorageStatistics()}} isn't overridden. > It must be overridden and serve up the local FS stats. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
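The shape of the fix the issue asks for can be sketched with deliberately simplified types (this is not the Hadoop `FileSystem` API; `BaseFs`, `S3LikeFs`, and `recordRead` are invented for illustration): a base class whose statistics accessor serves nothing until a subclass overrides it to expose counters it already collects internally.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a filesystem that collects counters internally but
// only exposes them once it overrides the base class's statistics accessor.
public class StatsOverrideDemo {
    static class BaseFs {
        // Base behavior: no statistics are served unless overridden.
        Map<String, Long> getStorageStatistics() { return new HashMap<>(); }
    }

    static class S3LikeFs extends BaseFs {
        private final Map<String, Long> counters = new HashMap<>();
        void recordRead(long bytes) { counters.merge("bytesRead", bytes, Long::sum); }
        @Override
        Map<String, Long> getStorageStatistics() { return counters; } // serve local stats
    }

    public static void main(String[] args) {
        S3LikeFs fs = new S3LikeFs();
        fs.recordRead(4096);
        System.out.println(fs.getStorageStatistics()); // {bytesRead=4096}
    }
}
```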
[jira] [Commented] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452609#comment-15452609 ] Hadoop QA commented on HADOOP-13341: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 4 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 6s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 12s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 9s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 0s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 48s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 41s{color} | {color:green} hadoop-yarn in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-streaming in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s{color} | {color:green} hadoop-distcp in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 18s{color} | {color:green} hadoop-archive-logs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-rumen in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 21s{color} | {color:green} hadoop-extras in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-sls in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 6s{color} | {color:green} hadoop-mapreduce-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 28s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13341 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826443/HADOOP-13341.00.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs | | uname | Linux f05c5a65d9d5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 20ae1fa | | shellcheck | v0.4.4 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10429/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn hadoop-tools/hadoop-streaming hadoop-tools/hadoop-distcp hadoop-tools/hadoop-archive-logs hadoop-tools/hadoop-rumen hadoop-tools/hadoop-extras hadoop-tools/hadoop-sls hadoop-mapreduce-project U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10429/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS >
[jira] [Commented] (HADOOP-13556) Change Configuration.getPropsWithPrefix to use getProps instead of iterator
[ https://issues.apache.org/jira/browse/HADOOP-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452574#comment-15452574 ] Larry McCay commented on HADOOP-13556: -- Thanks, [~cnauroth] - that is helpful! I know that I used to be able to do that - something must have changed. Manual options is great though! > Change Configuration.getPropsWithPrefix to use getProps instead of iterator > --- > > Key: HADOOP-13556 > URL: https://issues.apache.org/jira/browse/HADOOP-13556 > Project: Hadoop Common > Issue Type: Bug >Reporter: Larry McCay >Assignee: Larry McCay > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-13556-001.patch, HADOOP-13556-002.patch > > > Current implementation of getPropsWithPrefix uses the > Configuration.iterator() method. This method is not threadsafe. > This patch will reimplement the gathering of properties that begin with a > prefix by using the safe getProps() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13556) Change Configuration.getPropsWithPrefix to use getProps instead of iterator
[ https://issues.apache.org/jira/browse/HADOOP-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452574#comment-15452574 ] Larry McCay edited comment on HADOOP-13556 at 8/31/16 3:44 PM: --- Thanks, [~cnauroth] - that is helpful! I know that I used to be able to do that - something must have changed. Manual option is great though! was (Author: lmccay): Thanks, [~cnauroth] - that is helpful! I know that I used to be able to do that - something must have changed. Manual options is great though! > Change Configuration.getPropsWithPrefix to use getProps instead of iterator > --- > > Key: HADOOP-13556 > URL: https://issues.apache.org/jira/browse/HADOOP-13556 > Project: Hadoop Common > Issue Type: Bug >Reporter: Larry McCay >Assignee: Larry McCay > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-13556-001.patch, HADOOP-13556-002.patch > > > Current implementation of getPropsWithPrefix uses the > Configuration.iterator() method. This method is not threadsafe. > This patch will reimplement the gathering of properties that begin with a > prefix by using the safe getProps() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
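The HADOOP-13556 description above says the fix replaces iteration over `Configuration.iterator()` (not thread-safe) with a pass over the snapshot returned by `getProps()`. A sketch of that shape, assuming the method strips the prefix from returned keys (simplified here to plain `java.util.Properties`, not the actual Hadoop `Configuration` source):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Gathers properties with a given prefix from a Properties snapshot.
// stringPropertyNames() returns a copied key set, so concurrent writers to
// the live configuration cannot invalidate the iteration mid-flight.
public class PrefixProps {
    static Map<String, String> getPropsWithPrefix(Properties snapshot, String prefix) {
        Map<String, String> out = new HashMap<>();
        for (String name : snapshot.stringPropertyNames()) {
            if (name.startsWith(prefix)) {
                out.put(name.substring(prefix.length()), snapshot.getProperty(name));
            }
        }
        return out;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("hadoop.proxyuser.alice.hosts", "*");
        p.setProperty("io.file.buffer.size", "4096");
        System.out.println(getPropsWithPrefix(p, "hadoop.proxyuser."));
    }
}
```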
[jira] [Updated] (HADOOP-13375) o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky
[ https://issues.apache.org/jira/browse/HADOOP-13375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HADOOP-13375: - Attachment: HADOOP-13375.004.patch Hello [~liuml07] Thanks for the comments. I can't say that the case you're concerned about won't happen; it may fail that way in rare cases. So I modified the {{delayIfNecessary}} method a bit so it can {{pause}} and wait for notification (using a latch). This way we no longer need to sleep at all: we just pause the thread, verify, resume the thread, then verify again. Please help review the v4 patch and let me know whether it looks good. Thanks a lot. > o.a.h.security.TestGroupsCaching.testBackgroundRefreshCounters seems flaky > -- > > Key: HADOOP-13375 > URL: https://issues.apache.org/jira/browse/HADOOP-13375 > Project: Hadoop Common > Issue Type: Bug > Components: security, test >Affects Versions: 2.8.0 >Reporter: Mingliang Liu >Assignee: Weiwei Yang > Attachments: HADOOP-13375.001.patch, HADOOP-13375.002.patch, > HADOOP-13375.003.patch, HADOOP-13375.004.patch > > > h5. Error Message > bq. expected:<1> but was:<0> > h5. Stacktrace > {quote} > java.lang.AssertionError: expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.security.TestGroupsCaching.testBackgroundRefreshCounters(TestGroupsCaching.java:638) > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
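The pause/verify/resume idea described in the comment above can be sketched with two `CountDownLatch` handshakes (names are invented for illustration; this is not the patch's actual code). The worker parks deterministically instead of relying on timed sleeps, which is what removes the flakiness:

```java
import java.util.concurrent.CountDownLatch;

// Latch handshake: the background task signals it has reached the pause
// point, the test verifies the "before" state, then releases it and
// verifies the "after" state -- no fixed-delay sleeps anywhere.
public class PauseResumeDemo {
    final CountDownLatch paused = new CountDownLatch(1); // worker -> test: "I'm waiting"
    final CountDownLatch resume = new CountDownLatch(1); // test -> worker: "carry on"
    volatile boolean refreshed = false;

    void backgroundRefresh() throws InterruptedException {
        paused.countDown();  // tell the test we reached the pause point
        resume.await();      // block deterministically until the test resumes us
        refreshed = true;
    }

    public static void main(String[] args) throws InterruptedException {
        PauseResumeDemo d = new PauseResumeDemo();
        Thread worker = new Thread(() -> {
            try { d.backgroundRefresh(); } catch (InterruptedException ignored) { }
        });
        worker.start();
        d.paused.await();                                    // worker is now parked
        System.out.println("before resume: " + d.refreshed); // false
        d.resume.countDown();                                // let the refresh complete
        worker.join();
        System.out.println("after resume: " + d.refreshed);  // true
    }
}
```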
[jira] [Commented] (HADOOP-13569) S3AFastOutputStream to take ProgressListener in file create()
[ https://issues.apache.org/jira/browse/HADOOP-13569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452550#comment-15452550 ] Steve Loughran commented on HADOOP-13569:
-
What the output would look like
{code}
2016-08-31 16:22:15,353 [JUnit] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:test_010_CreateHugeFile(142)) - [100%] Written 256.00 MB out of 256.00 MB; PUT = 209715200 bytes in 2 operations
2016-08-31 16:22:15,353 [JUnit] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:test_010_CreateHugeFile(154)) - Closing file and completing write operation
2016-08-31 16:22:15,372 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event TRANSFER_PART_STARTED_EVENT, bytes: 0
2016-08-31 16:22:15,372 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event CLIENT_REQUEST_STARTED_EVENT, bytes: 0
2016-08-31 16:22:15,373 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_REQUEST_STARTED_EVENT, bytes: 0
2016-08-31 16:23:40,083 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_REQUEST_COMPLETED_EVENT, bytes: 0
2016-08-31 16:23:40,083 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_RESPONSE_STARTED_EVENT, bytes: 0
2016-08-31 16:23:40,084 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_RESPONSE_COMPLETED_EVENT, bytes: 0
2016-08-31 16:23:40,084 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event CLIENT_REQUEST_SUCCESS_EVENT, bytes: 0
2016-08-31 16:23:40,084 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event TRANSFER_PART_COMPLETED_EVENT, bytes: 0
2016-08-31 16:24:42,754 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_REQUEST_COMPLETED_EVENT, bytes: 0
2016-08-31 16:24:42,754 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_RESPONSE_STARTED_EVENT, bytes: 0
2016-08-31 16:24:42,755 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_RESPONSE_COMPLETED_EVENT, bytes: 0
2016-08-31 16:24:42,755 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event CLIENT_REQUEST_SUCCESS_EVENT, bytes: 0
2016-08-31 16:24:42,755 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event TRANSFER_PART_COMPLETED_EVENT, bytes: 0
2016-08-31 16:25:08,954 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_REQUEST_COMPLETED_EVENT, bytes: 0
2016-08-31 16:25:08,954 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_RESPONSE_STARTED_EVENT, bytes: 0
2016-08-31 16:25:08,954 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event HTTP_RESPONSE_COMPLETED_EVENT, bytes: 0
2016-08-31 16:25:08,955 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event CLIENT_REQUEST_SUCCESS_EVENT, bytes: 0
2016-08-31 16:25:08,955 [java-sdk-progress-listener-callback-thread] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:progressChanged(199)) - Event TRANSFER_PART_COMPLETED_EVENT, bytes: 0
2016-08-31 16:25:09,327 [JUnit] INFO contract.ContractTestUtils (ContractTestUtils.java:end(1365)) - Duration of Time to close() output stream: 173,972,783,472 nS
2016-08-31 16:25:09,327 [JUnit] INFO contract.ContractTestUtils (ContractTestUtils.java:end(1365)) - Duration of Time to write 256 MB in blocks of 65536: 174,530,704,251 nS
2016-08-31 16:25:09,328 [JUnit] INFO scale.STestS3AHugeFileCreate (STestS3AHugeFileCreate.java:logFSState(232)) - File System state after operation: S3AFileSystem{ inputPolicy=normal, partSize=104857600, enableMultiObjectsDelete=true, maxKeys=5000, readAhead=65536, blockSize=33554432,
[jira] [Created] (HADOOP-13569) S3AFastOutputStream to take ProgressListener in file create()
Steve Loughran created HADOOP-13569: --- Summary: S3AFastOutputStream to take ProgressListener in file create() Key: HADOOP-13569 URL: https://issues.apache.org/jira/browse/HADOOP-13569 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 2.8.0 Reporter: Steve Loughran Assignee: Steve Loughran Priority: Minor For scale testing I'd like more meaningful progress than the Hadoop {{Progressable}} offers. Proposed: having {{S3AFastOutputStream}} check to see if the progressable passed in is also an instance of {{com.amazonaws.event.ProgressListener}}, and if so, wire it up directly. This allows tests to directly track the state of the upload, log it, and perhaps even assert on it. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
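The proposal above can be sketched roughly as follows. Note this is an illustrative assumption, not the actual S3AFastOutputStream code: the Hadoop and AWS SDK interfaces are replaced with minimal stand-ins so the instanceof wiring is visible on its own.

```java
// Stand-ins for org.apache.hadoop.util.Progressable and
// com.amazonaws.event.ProgressListener (names only, for illustration).
interface Progressable { void progress(); }
interface ProgressListener { void progressChanged(String eventName); }

// A test callback implementing both interfaces, as the proposal assumes.
class TestProgress implements Progressable, ProgressListener {
    int events;
    public void progress() { }
    public void progressChanged(String eventName) { events++; }
}

// Hypothetical output-stream constructor showing the proposed check:
// if the Progressable is also a ProgressListener, wire it up directly.
class OutputStreamSketch {
    ProgressListener listener;  // would be handed to the SDK upload machinery

    OutputStreamSketch(Progressable progress) {
        if (progress instanceof ProgressListener) {
            listener = (ProgressListener) progress;
        }
    }
}
```

A test passing a TestProgress as the Progressable would then receive SDK events directly and could log or assert on them.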
[jira] [Commented] (HADOOP-13541) explicitly declare the Joda time version S3A depends on
[ https://issues.apache.org/jira/browse/HADOOP-13541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452527#comment-15452527 ] Steve Loughran commented on HADOOP-13541: - The -1 on tests is a false alarm: this is a build-time fix > explicitly declare the Joda time version S3A depends on > --- > > Key: HADOOP-13541 > URL: https://issues.apache.org/jira/browse/HADOOP-13541 > Project: Hadoop Common > Issue Type: Sub-task > Components: build, fs/s3 >Affects Versions: 2.8.0, 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13541-branch-2.8-001.patch > > > Different builds of Hadoop are pulling in wildly different versions of Joda > time, depending on what other transitive dependencies are involved. Example: > 2.7.3 is somehow picking up Joda time 2.9.4; branch-2.8 is actually behind on > 2.8.1. That's going to cause confusion when people upgrade from 2.7.x to 2.8 > and find a dependency has got older > I propose explicitly declaring a dependency on joda-time in s3a, then set the > version to 2.9.4; upgrades are things we can manage
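The build-time fix described in this issue amounts to pinning joda-time explicitly in the s3a module's POM instead of relying on transitive resolution. A sketch of what such a declaration might look like; the placement and surrounding POM structure are assumptions, not the committed patch (2.9.4 is the version proposed in the issue):

```xml
<!-- Hypothetical fragment for hadoop-aws/pom.xml: declare joda-time -->
<!-- directly rather than inheriting whatever version a transitive -->
<!-- dependency drags in, so all builds resolve the same version. -->
<dependency>
  <groupId>joda-time</groupId>
  <artifactId>joda-time</artifactId>
  <version>2.9.4</version>
</dependency>
```

With the dependency declared at the top level, Maven's nearest-wins mediation picks this version over any transitively supplied one, which is what keeps 2.7.x and 2.8 builds from diverging.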
[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()
[ https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452519#comment-15452519 ] Hadoop QA commented on HADOOP-11223: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 57s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 57s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 0s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 43s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.net.TestDNS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-11223 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12689236/HADOOP-11223.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 885b3a04a883 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 20ae1fa | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10427/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10427/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10427/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10427/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Offer a read-only conf
[jira] [Updated] (HADOOP-13360) Documentation for HADOOP_subcommand_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13360: -- Resolution: Fixed Fix Version/s: HADOOP-13341 Status: Resolved (was: Patch Available) > Documentation for HADOOP_subcommand_OPTS > > > Key: HADOOP-13360 > URL: https://issues.apache.org/jira/browse/HADOOP-13360 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Fix For: HADOOP-13341 > > Attachments: HADOOP-13360-HADOOP-13341.00.patch > >
[jira] [Updated] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13341: -- Status: Patch Available (was: Open) > Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS > -- > > Key: HADOOP-13341 > URL: https://issues.apache.org/jira/browse/HADOOP-13341 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-13341.00.patch > > > Big features like YARN-2928 demonstrate that even senior level Hadoop > developers forget that daemons need a custom _OPTS env var. We can replace > all of the custom vars with generic handling just like we do for the username > check. > For example, with generic handling in place: > || Old Var || New Var || > | HADOOP_NAMENODE_OPTS | HDFS_NAMENODE_OPTS | > | YARN_RESOURCEMANAGER_OPTS | YARN_RESOURCEMANAGER_OPTS | > | n/a | YARN_TIMELINEREADER_OPTS | > | n/a | HADOOP_DISTCP_OPTS | > | n/a | MAPRED_DISTCP_OPTS | > | HADOOP_DN_SECURE_EXTRA_OPTS | HDFS_DATANODE_SECURE_EXTRA_OPTS | > | HADOOP_NFS3_SECURE_EXTRA_OPTS | HDFS_NFS3_SECURE_EXTRA_OPTS | > | HADOOP_JOB_HISTORYSERVER_OPTS | MAPRED_HISTORYSERVER_OPTS | > This makes it: > a) consistent across the entire project > b) consistent for every subcommand > c) eliminates almost all of the custom appending in the case statements > It's worth pointing out that subcommands like distcp that sometimes need a > higher than normal client-side heapsize or custom options are a huge win. > Combined with .hadooprc and/or dynamic subcommands, it means users can easily > do customizations based upon their needs without a lot of weirdo shell > aliasing or one line shell scripts off to the side. 
[jira] [Updated] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13341: -- Attachment: HADOOP-13341.00.patch -00: * test merge of branch against trunk > Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS > -- > > Key: HADOOP-13341 > URL: https://issues.apache.org/jira/browse/HADOOP-13341 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-13341.00.patch > > > Big features like YARN-2928 demonstrate that even senior level Hadoop > developers forget that daemons need a custom _OPTS env var. We can replace > all of the custom vars with generic handling just like we do for the username > check. > For example, with generic handling in place: > || Old Var || New Var || > | HADOOP_NAMENODE_OPTS | HDFS_NAMENODE_OPTS | > | YARN_RESOURCEMANAGER_OPTS | YARN_RESOURCEMANAGER_OPTS | > | n/a | YARN_TIMELINEREADER_OPTS | > | n/a | HADOOP_DISTCP_OPTS | > | n/a | MAPRED_DISTCP_OPTS | > | HADOOP_DN_SECURE_EXTRA_OPTS | HDFS_DATANODE_SECURE_EXTRA_OPTS | > | HADOOP_NFS3_SECURE_EXTRA_OPTS | HDFS_NFS3_SECURE_EXTRA_OPTS | > | HADOOP_JOB_HISTORYSERVER_OPTS | MAPRED_HISTORYSERVER_OPTS | > This makes it: > a) consistent across the entire project > b) consistent for every subcommand > c) eliminates almost all of the custom appending in the case statements > It's worth pointing out that subcommands like distcp that sometimes need a > higher than normal client-side heapsize or custom options are a huge win. > Combined with .hadooprc and/or dynamic subcommands, it means users can easily > do customizations based upon their needs without a lot of weirdo shell > aliasing or one line shell scripts off to the side. 
[jira] [Commented] (HADOOP-13556) Change Configuration.getPropsWithPrefix to use getProps instead of iterator
[ https://issues.apache.org/jira/browse/HADOOP-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452486#comment-15452486 ] Hadoop QA commented on HADOOP-13556: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 23s{color} | {color:green} 
the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 285 unchanged - 0 fixed = 287 total (was 285) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 21s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 42m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13556 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825999/HADOOP-13556-002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 5601d3f799ee 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 20ae1fa | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10426/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10426/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10426/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Change Configuration.getPropsWithPrefix to use getProps instead of iterator > --- > > Key: HADOOP-13556 > URL: https://issues.apache.org/jira/browse/HADOOP-13556 > Project: Hadoop Common > Issue Type: Bug >Reporter:
[jira] [Commented] (HADOOP-13360) Documentation for HADOOP_subcommand_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452466#comment-15452466 ] Hadoop QA commented on HADOOP-13360: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 5s{color} | {color:green} HADOOP-13341 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} HADOOP-13341 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13360 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826419/HADOOP-13360-HADOOP-13341.00.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 7d616343fc64 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-13341 / af9f0e7 | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10428/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Documentation for HADOOP_subcommand_OPTS > > > Key: HADOOP-13360 > URL: https://issues.apache.org/jira/browse/HADOOP-13360 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-13360-HADOOP-13341.00.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13361) Modify hadoop_verify_user to be consistent with hadoop_subcommand_opts (ie more granularity)
[ https://issues.apache.org/jira/browse/HADOOP-13361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13361: -- Release Note: Users: In Apache Hadoop 3.0.0-alpha1, verification required environment variables with the format of HADOOP_(subcommand)_USER where subcommand was lowercase applied globally. This changes the format to be (command)_(subcommand)_USER where all are uppercase to be consistent with the _OPTS functionality as well as being able to set per-command options. Additionally, the check is now happening sooner, which should make it faster to fail. Developers: This changes hadoop_verify_user to require the program's name as part of the function call. This is incompatible with Apache Hadoop 3.0.0-alpha1. was: Users: In Apache Hadoop 3.0.0-alpha1, verification required environment variables with the format of HADOOP_(subcommand)_USER where subcommand was lowercase applied globally. This changes the format to be (command)_(subcommand)_USER where all are uppercase to be consistent with the _OPTS functionality as well as being able to set per-command options. Developers: This changes hadoop_verify_user to require the program's name as part of the function call. This is incompatible with Apache Hadoop 3.0.0-alpha1. > Modify hadoop_verify_user to be consistent with hadoop_subcommand_opts (ie > more granularity) > > > Key: HADOOP-13361 > URL: https://issues.apache.org/jira/browse/HADOOP-13361 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Fix For: HADOOP-13341 > > Attachments: HADOOP-13361-HADOOP-13341.00.patch > > > hadoop_verify_user should be consistent with hadoop_subcommand_opts so that > it looks/feels the same to end users. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13341: -- Hadoop Flags: Incompatible change Release Note: Users: * Ability to set per-command+sub-command options from the command line. * Makes daemon options consistent across the project. (See deprecation list below) * HADOOP\_CLIENT\_OPTS is now honored for every non-daemon sub-command. Prior to this change, many sub-commands did not use it. Developers: * No longer need to do custom handling for options in the case section of the shell scripts. * Consolidates all \_OPTS handling into hadoop-functions.sh to enable future projects. * All daemons running with secure mode features now get \_SECURE\_EXTRA\_OPTS support. \_OPTS Changes: | Old | New | |: |: | | HADOOP\_BALANCER\_OPTS | HDFS\_BALANCER\_OPTS | | HADOOP\_DATANODE\_OPTS | HDFS\_DATANODE\_OPTS | | HADOOP\_DN\_SECURE_EXTRA_OPTS | HDFS\_DATANODE\_SECURE\_EXTRA\_OPTS | | HADOOP\_JOB\_HISTORYSERVER\_OPTS | MAPRED\_HISTORYSERVER\_OPTS | | HADOOP\_JOURNALNODE\_OPTS | HDFS\_JOURNALNODE\_OPTS | | HADOOP\_MOVER\_OPTS | HDFS\_MOVER\_OPTS | | HADOOP\_NAMENODE\_OPTS | HDFS\_NAMENODE\_OPTS | | HADOOP\_NFS3\_OPTS | HDFS\_NFS3\_OPTS | | HADOOP\_NFS3\_SECURE\_EXTRA\_OPTS | HDFS\_NFS3\_SECURE\_EXTRA\_OPTS | | HADOOP\_PORTMAP\_OPTS | HDFS\_PORTMAP\_OPTS | | HADOOP\_SECONDARYNAMENODE\_OPTS | HDFS\_SECONDARYNAMENODE\_OPTS | | HADOOP\_ZKFC\_OPTS | HDFS\_ZKFC\_OPTS | > Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS > -- > > Key: HADOOP-13341 > URL: https://issues.apache.org/jira/browse/HADOOP-13341 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > > Big features like YARN-2928 demonstrate that even senior level Hadoop > developers forget that daemons need a custom _OPTS env var. 
We can replace > all of the custom vars with generic handling just like we do for the username > check. > For example, with generic handling in place: > || Old Var || New Var || > | HADOOP_NAMENODE_OPTS | HDFS_NAMENODE_OPTS | > | YARN_RESOURCEMANAGER_OPTS | YARN_RESOURCEMANAGER_OPTS | > | n/a | YARN_TIMELINEREADER_OPTS | > | n/a | HADOOP_DISTCP_OPTS | > | n/a | MAPRED_DISTCP_OPTS | > | HADOOP_DN_SECURE_EXTRA_OPTS | HDFS_DATANODE_SECURE_EXTRA_OPTS | > | HADOOP_NFS3_SECURE_EXTRA_OPTS | HDFS_NFS3_SECURE_EXTRA_OPTS | > | HADOOP_JOB_HISTORYSERVER_OPTS | MAPRED_HISTORYSERVER_OPTS | > This makes it: > a) consistent across the entire project > b) consistent for every subcommand > c) eliminates almost all of the custom appending in the case statements > It's worth pointing out that subcommands like distcp that sometimes need a > higher than normal client-side heapsize or custom options are a huge win. > Combined with .hadooprc and/or dynamic subcommands, it means users can easily > do customizations based upon their needs without a lot of weirdo shell > aliasing or one line shell scripts off to the side. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
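The generic handling described above can be sketched as one small shell function that derives the (command)_(subcommand)_OPTS variable name and appends its value. This is a simplified assumption of the approach, not the hadoop-functions.sh implementation:

```shell
#!/usr/bin/env bash
# Build (COMMAND)_(SUBCOMMAND)_OPTS from the invocation, e.g.
# "hdfs namenode" -> HDFS_NAMENODE_OPTS, and append its value
# to HADOOP_OPTS if the user has set it.
hadoop_subcommand_opts() {
  local program=$1      # e.g. hdfs, yarn, mapred, hadoop
  local subcommand=$2   # e.g. namenode, resourcemanager, distcp
  local uvar
  uvar=$(echo "${program}_${subcommand}_OPTS" | tr '[:lower:]' '[:upper:]' | tr '-' '_')
  if [[ -n "${!uvar:-}" ]]; then
    HADOOP_OPTS="${HADOOP_OPTS} ${!uvar}"
  fi
}
```

Because the variable name is computed, every subcommand gets an _OPTS hook for free, which is what eliminates the per-daemon case-statement appending.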
[jira] [Commented] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket
[ https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452457#comment-15452457 ] Alejandro Abdelnur commented on HADOOP-13558: - [~lmccay], [~ste...@apache.org], thanks for looking into this. [~xiaochen], thanks for putting up a patch. Regarding the new constructor, do we really need it? or we could retrofit the existing one (which is package private). > UserGroupInformation created from a Subject incorrectly tries to renew the > Kerberos ticket > -- > > Key: HADOOP-13558 > URL: https://issues.apache.org/jira/browse/HADOOP-13558 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2 >Reporter: Alejandro Abdelnur > Attachments: HADOOP-13558.01.patch > > > The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions > and if they are met it invokes the {{reloginFromKeytab()}}. The > {{reloginFromKeytab()}} method then fails with an {{IOException}} > "loginUserFromKeyTab must be done first" because there is no keytab > associated with the UGI. > The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab > ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one > it triggers a call to {{reloginFromKeytab()}}. The problem is that the > {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned > {{IOException}}. > The root of the problem seems to be when creating a UGI via the > {{UGI.loginUserFromSubject(Subject)}} method, this method uses the > {{UserGroupInformation(Subject)}} constructor, and this constructor does the > following to determine if there is a keytab or not. > {code} > this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject); > {code} > If the {{Subject}} given had a keytab, then the UGI instance will have the > {{isKeytab}} set to TRUE. > It sets the UGI instance as it would have a keytab because the Subject has a > keytab. 
This has 2 problems: > First, it does not set the keytab file (and this, having the {{isKeytab}} set > to TRUE and the {{keytabFile}} set to NULL) is what triggers the > {{IOException}} in the method {{reloginFromKeytab()}}. > Second (and even if the first problem is fixed, this still is a problem), it > assumes that because the subject has a keytab it is up to UGI to do the > relogin using the keytab. This is incorrect if the UGI was created using the > {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the > Subject is not the UGI, but the caller, so the caller is responsible for > renewing the Kerberos tickets and the UGI should not try to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
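The failure mode described above can be illustrated with a stripped-down model. The field and method names mirror UGI's for readability, but this is a hedged simplification for illustration, not the real class:

```java
import java.io.IOException;

// Simplified model of the UGI state described in this report: isKeytab is
// true because the Subject carried keytab credentials, but keytabFile was
// never set because the UGI came from loginUserFromSubject(Subject).
class UgiSketch {
    boolean isKeytab;
    String keytabFile;

    void checkTGTAndReloginFromKeytab() throws IOException {
        // The check keys off isKeytab alone...
        if (!isKeytab) {
            return;
        }
        reloginFromKeytab();
    }

    void reloginFromKeytab() throws IOException {
        // ...but relogin needs a keytab file, which is null here.
        if (keytabFile == null) {
            throw new IOException("loginUserFromKeyTab must be done first");
        }
        // (the real code would re-acquire the TGT from the keytab here)
    }
}
```

With this state, any code path that reaches checkTGTAndReloginFromKeytab() fails, even though the caller, as owner of the Subject, intended to manage ticket renewal itself.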
[jira] [Updated] (HADOOP-13360) Documentation for HADOOP_subcommand_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13360: -- Attachment: HADOOP-13360-HADOOP-13341.00.patch > Documentation for HADOOP_subcommand_OPTS > > > Key: HADOOP-13360 > URL: https://issues.apache.org/jira/browse/HADOOP-13360 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-13360-HADOOP-13341.00.patch > >
[jira] [Updated] (HADOOP-13360) Documentation for HADOOP_subcommand_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-13360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-13360: -- Status: Patch Available (was: Open) > Documentation for HADOOP_subcommand_OPTS > > > Key: HADOOP-13360 > URL: https://issues.apache.org/jira/browse/HADOOP-13360 > Project: Hadoop Common > Issue Type: Sub-task > Components: scripts >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-13360-HADOOP-13341.00.patch > >
[jira] [Commented] (HADOOP-11223) Offer a read-only conf alternative to new Configuration()
[ https://issues.apache.org/jira/browse/HADOOP-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452379#comment-15452379 ] Jason Lowe commented on HADOOP-11223:
-
I recently ran across this same issue with a user complaining about unnecessarily slow localization. It takes us at least 3 seconds to localize anything because that's the minimum startup time of the localizer program. Some of this slowness is from loading almost 4000 classes (!!) during localization and needing to look up every known FileSystem before any FileSystem can be instantiated, but the largest problem is redundant processing in Configuration.

The problem with copying a cached configuration object is the add-default-resource behavior. That invalidates every configuration object and causes each of them to reload the XML files _separately_. So if we have a cached object that never gets directly used but only has copies, we'll always copy a conf that hasn't loaded the XML, and each copy will then load the XML the first time a property is accessed in it.

Given that addDefaultResource updates every instantiated conf object whether people like it or not, I don't see why we can't reuse a conf object for the straightforward cases like static code blocks, record readers, codec factories, etc. We can wrap the singleton instance with a subclass that throws UnsupportedOperationException on set methods. As for addDefaultResource, IMHO we can block it or not, since even if we block it on the singleton, other conf instances that have it called will update the others. And that's the way it used to work, so arguably we're preserving the old behavior by allowing addDefaultResource to update the shared singleton instance.

The only downside I see once we lock down the set methods is that we theoretically can break some use-case where the XML files were updated on disk and the code really needed to see the fresh copy. However I don't think that will be the case for the instances within the Hadoop core code that will be updated to use this new singleton instance. Not sure if there are other scenarios that could break where one plays games with the classloader to switch which XML files will be seen. I'm assuming that if classloading games are being played then we're going to get multiple Configuration classes loaded, each with their own singleton instance.

Bonus points if we can eliminate the requirement that each conf object has to separately load all resources after addDefaultResource. It'd be nice if one conf object did the load and we poked it into the others, like we poke the others when addDefaultResource is called. It would also be nice if we only loaded the newly added resource rather than reloading all resources. However that's really orthogonal to the shared, read-only instance idea and more appropriate for a separate JIRA.

> Offer a read-only conf alternative to new Configuration() > - > > Key: HADOOP-11223 > URL: https://issues.apache.org/jira/browse/HADOOP-11223 > Project: Hadoop Common > Issue Type: Bug > Components: conf >Reporter: Gopal V >Assignee: Varun Saxena > Labels: Performance > Attachments: HADOOP-11223.001.patch > > > new Configuration() is called from several static blocks across Hadoop. > This is incredibly inefficient, since each one of those involves primarily > XML parsing at a point where the JIT won't be triggered & interpreter mode is > essentially forced on the JVM. > The alternate solution would be to offer a {{Configuration::getDefault()}} > alternative which disallows any modifications.
> At the very least, such a method would need to be called from > # org.apache.hadoop.io.nativeio.NativeIO::() > # org.apache.hadoop.security.SecurityUtil::() > # org.apache.hadoop.yarn.factory.providers.RecordFactoryProvider:: -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
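The read-only singleton pattern Jason describes (a shared instance whose set methods throw UnsupportedOperationException) can be sketched as follows. This is a minimal illustration using java.util.Properties as a stand-in for Hadoop's Configuration; the names ReadOnlyProps and getDefault() are hypothetical, not Hadoop APIs.

```java
import java.util.Properties;

// Stand-in for a read-only Configuration.getDefault(): defaults are loaded
// once, and all mutators throw. Names here are illustrative only.
final class ReadOnlyProps extends Properties {
    private static final ReadOnlyProps INSTANCE = new ReadOnlyProps(load());

    private static Properties load() {
        Properties p = new Properties();
        // Stands in for the one-time XML resource parsing a real
        // Configuration would do.
        p.setProperty("fs.defaultFS", "file:///");
        return p;
    }

    private ReadOnlyProps(Properties defaults) {
        for (String name : defaults.stringPropertyNames()) {
            // Bypass the throwing override below while populating.
            super.put(name, defaults.getProperty(name));
        }
    }

    /** Shared instance: read-only callers avoid re-parsing defaults. */
    static Properties getDefault() {
        return INSTANCE;
    }

    @Override
    public synchronized Object setProperty(String key, String value) {
        throw new UnsupportedOperationException("read-only configuration");
    }

    @Override
    public synchronized Object put(Object key, Object value) {
        throw new UnsupportedOperationException("read-only configuration");
    }
}

public class ReadOnlyConfDemo {
    public static void main(String[] args) {
        Properties conf = ReadOnlyProps.getDefault();
        System.out.println(conf.getProperty("fs.defaultFS"));
        try {
            conf.setProperty("fs.defaultFS", "hdfs://nn/");
        } catch (UnsupportedOperationException e) {
            System.out.println("rejected");
        }
    }
}
```

Static blocks, record readers, and codec factories that only ever read would call the shared getter instead of `new Configuration()`, skipping the repeated parse entirely.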
[jira] [Commented] (HADOOP-13556) Change Configuration.getPropsWithPrefix to use getProps instead of iterator
[ https://issues.apache.org/jira/browse/HADOOP-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452374#comment-15452374 ] Chris Nauroth commented on HADOOP-13556: [~lmccay], if I remember correctly, that won't be sufficient to re-trigger pre-commit, because it is aware that the timestamp hasn't changed on the attachment. I think it works if you attach a new file with the same contents. However, an even easier option for committers is to login to Jenkins and submit it manually. https://builds.apache.org/job/PreCommit-HADOOP-Build/ Login with your standard Apache credentials. Then, click Build with Parameters. Then, enter the JIRA ID (just the number, not the "HADOOP-" prefix.) I just submitted one for you here: https://builds.apache.org/job/PreCommit-HADOOP-Build/10426/ I think as a committer you have all the necessary rights to do this yourself too. There are different jobs for each JIRA project (PreCommit-HDFS-Build, PreCommit-YARN-Build, etc.), so it needs to be submitted under the job that matches the JIRA project ID. > Change Configuration.getPropsWithPrefix to use getProps instead of iterator > --- > > Key: HADOOP-13556 > URL: https://issues.apache.org/jira/browse/HADOOP-13556 > Project: Hadoop Common > Issue Type: Bug >Reporter: Larry McCay >Assignee: Larry McCay > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-13556-001.patch, HADOOP-13556-002.patch > > > Current implementation of getPropsWithPrefix uses the > Configuration.iterator() method. This method is not threadsafe. > This patch will reimplement the gathering of properties that begin with a > prefix by using the safe getProps() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
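The fix described in this issue — filtering a snapshot of the properties rather than walking a live iterator — can be sketched like so. This uses java.util.Properties in place of Configuration and is only an illustration of the pattern, not the actual patch.

```java
import java.util.Map;
import java.util.Properties;
import java.util.TreeMap;

// Sketch of getPropsWithPrefix built on a snapshot instead of an iterator.
// Properties stands in for Configuration; the method name mirrors the one
// being fixed but this is not the real Hadoop code.
public class PrefixPropsDemo {

    /** Collect entries whose key starts with confPrefix, keyed by the remainder. */
    static Map<String, String> getPropsWithPrefix(Properties props, String confPrefix) {
        Map<String, String> configMap = new TreeMap<>();
        // stringPropertyNames() returns a snapshot, so a concurrent writer
        // cannot invalidate this loop the way it can a live iterator.
        for (String name : props.stringPropertyNames()) {
            if (name.startsWith(confPrefix)) {
                configMap.put(name.substring(confPrefix.length()),
                        props.getProperty(name));
            }
        }
        return configMap;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("gateway.host", "gw1");
        p.setProperty("gateway.port", "8443");
        p.setProperty("other.key", "x");
        System.out.println(getPropsWithPrefix(p, "gateway."));
    }
}
```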
[jira] [Created] (HADOOP-13568) S3AFastOutputStream to implement flush()
Steve Loughran created HADOOP-13568: --- Summary: S3AFastOutputStream to implement flush() Key: HADOOP-13568 URL: https://issues.apache.org/jira/browse/HADOOP-13568 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Steve Loughran Priority: Minor {{S3AFastOutputStream}} doesn't implement {{flush()}}, so it's a no-op. Really it should trigger a multipart upload of the current buffer. Note that simply calling {{uploadBuffer()}} isn't enough: do that and things fail. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13567) S3AFileSystem to override getStoragetStatistics() and so serve up its statistics
Steve Loughran created HADOOP-13567: --- Summary: S3AFileSystem to override getStoragetStatistics() and so serve up its statistics Key: HADOOP-13567 URL: https://issues.apache.org/jira/browse/HADOOP-13567 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Reporter: Steve Loughran Assignee: Steve Loughran Priority: Minor Although S3AFileSystem collects lots of statistics, these aren't available programmatically as {{getStoragetStatistics()}} isn't overridden. It must be overridden and serve up the local FS stats. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13566) NPE in S3AFastOutputStream.write
[ https://issues.apache.org/jira/browse/HADOOP-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13566: Priority: Minor (was: Major) > NPE in S3AFastOutputStream.write > > > Key: HADOOP-13566 > URL: https://issues.apache.org/jira/browse/HADOOP-13566 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > During scale tests, managed to create an NPE > {code} > test_001_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate) > Time elapsed: 2.258 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.fs.s3a.S3AFastOutputStream.write(S3AFastOutputStream.java:191) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) > at java.io.DataOutputStream.write(DataOutputStream.java:107) > at java.io.FilterOutputStream.write(FilterOutputStream.java:97) > at > org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate.test_001_CreateHugeFile(ITestS3AHugeFileCreate.java:132) > {code} > trace implies that {{buffer == null}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13566) NPE in S3AFastOutputStream.write
[ https://issues.apache.org/jira/browse/HADOOP-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452171#comment-15452171 ] Steve Loughran commented on HADOOP-13566: - Patch will convert this to a meaningful error; also added to the slow output stream {code} test_001_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate) Time elapsed: 1.97 sec <<< ERROR! java.io.IOException: Filesystem closed at org.apache.hadoop.fs.s3a.S3AFastOutputStream.checkOpen(S3AFastOutputStream.java:162) at org.apache.hadoop.fs.s3a.S3AFastOutputStream.write(S3AFastOutputStream.java:194) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate.test_001_CreateHugeFile(ITestS3AHugeFileCreate.java:132) {code} > NPE in S3AFastOutputStream.write > > > Key: HADOOP-13566 > URL: https://issues.apache.org/jira/browse/HADOOP-13566 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran > > During scale tests, managed to create an NPE > {code} > test_001_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate) > Time elapsed: 2.258 sec <<< ERROR! 
> java.lang.NullPointerException: null > at > org.apache.hadoop.fs.s3a.S3AFastOutputStream.write(S3AFastOutputStream.java:191) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) > at java.io.DataOutputStream.write(DataOutputStream.java:107) > at java.io.FilterOutputStream.write(FilterOutputStream.java:97) > at > org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate.test_001_CreateHugeFile(ITestS3AHugeFileCreate.java:132) > {code} > trace implies that {{buffer == null}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13566) NPE in S3AFastOutputStream.write
[ https://issues.apache.org/jira/browse/HADOOP-13566?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452160#comment-15452160 ] Steve Loughran commented on HADOOP-13566: - Cause is attempting to write if the file is closed. Need to add a check there > NPE in S3AFastOutputStream.write > > > Key: HADOOP-13566 > URL: https://issues.apache.org/jira/browse/HADOOP-13566 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran > > During scale tests, managed to create an NPE > {code} > test_001_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate) > Time elapsed: 2.258 sec <<< ERROR! > java.lang.NullPointerException: null > at > org.apache.hadoop.fs.s3a.S3AFastOutputStream.write(S3AFastOutputStream.java:191) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) > at java.io.DataOutputStream.write(DataOutputStream.java:107) > at java.io.FilterOutputStream.write(FilterOutputStream.java:97) > at > org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate.test_001_CreateHugeFile(ITestS3AHugeFileCreate.java:132) > {code} > trace implies that {{buffer == null}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
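The check Steve describes — failing fast with a meaningful IOException when write() is called after close(), instead of letting the nulled buffer NPE — can be sketched as below. The class and field names are illustrative stand-ins, not the real S3AFastOutputStream code.

```java
import java.io.IOException;
import java.io.OutputStream;

// Sketch of guarding write() with a checkOpen() so a closed stream raises
// "Filesystem closed" rather than a NullPointerException on a null buffer.
public class GuardedOutputStream extends OutputStream {
    private byte[] buffer = new byte[16];
    private int pos;
    private volatile boolean closed;

    private void checkOpen() throws IOException {
        if (closed) {
            throw new IOException("Filesystem closed");
        }
    }

    @Override
    public void write(int b) throws IOException {
        checkOpen(); // fail fast with a clear message instead of an NPE
        buffer[pos++] = (byte) b;
    }

    @Override
    public void close() {
        closed = true;
        buffer = null; // releasing the buffer is what made the NPE possible
    }

    public static void main(String[] args) throws IOException {
        GuardedOutputStream out = new GuardedOutputStream();
        out.write('a');
        out.close();
        try {
            out.write('b'); // write-after-close, the failing scale-test case
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```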
[jira] [Created] (HADOOP-13566) NPE in S3AFastOutputStream.write
Steve Loughran created HADOOP-13566: --- Summary: NPE in S3AFastOutputStream.write Key: HADOOP-13566 URL: https://issues.apache.org/jira/browse/HADOOP-13566 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 2.7.3 Reporter: Steve Loughran Assignee: Steve Loughran During scale tests, managed to create an NPE {code} test_001_CreateHugeFile(org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate) Time elapsed: 2.258 sec <<< ERROR! java.lang.NullPointerException: null at org.apache.hadoop.fs.s3a.S3AFastOutputStream.write(S3AFastOutputStream.java:191) at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58) at java.io.DataOutputStream.write(DataOutputStream.java:107) at java.io.FilterOutputStream.write(FilterOutputStream.java:97) at org.apache.hadoop.fs.s3a.scale.ITestS3AHugeFileCreate.test_001_CreateHugeFile(ITestS3AHugeFileCreate.java:132) {code} trace implies that {{buffer == null}} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket
[ https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15452079#comment-15452079 ] Larry McCay commented on HADOOP-13558: -- Hi [~tucu00]! I agree with [~ste...@apache.org]'s point regarding the double instantiation of the realUser. In fact, it seems to me that the setting of the loginContext and authenticationMethod get lost when you create the second one. This really looks like a bug; if not, it needs to be made clear why it is doing that. Beyond that, the new ctor should call the existing one and override the setting of isKeytab afterward, in order to pick up any changes to the behavior of the other ctor going forward. Tricking UGI into not renewing/relogging by overriding this field is simple, but really should be done in an OO way instead. UGI being what it is, this simple change is probably the safest way forward. Let's comment this clearly in the new ctor, so when someone stepping through sees the keytab credential they don't see this as a bug. > UserGroupInformation created from a Subject incorrectly tries to renew the > Kerberos ticket > -- > > Key: HADOOP-13558 > URL: https://issues.apache.org/jira/browse/HADOOP-13558 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2 >Reporter: Alejandro Abdelnur > Attachments: HADOOP-13558.01.patch > > > The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions > and if they are met it invokes the {{reloginFromKeytab()}}. The > {{reloginFromKeytab()}} method then fails with an {{IOException}} > "loginUserFromKeyTab must be done first" because there is no keytab > associated with the UGI. > The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab > ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one > it triggers a call to {{reloginFromKeytab()}}. 
The problem is that the > {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned > {{IOException}}. > The root of the problem seems to be when creating a UGI via the > {{UGI.loginUserFromSubject(Subject)}} method, this method uses the > {{UserGroupInformation(Subject)}} constructor, and this constructor does the > following to determine if there is a keytab or not. > {code} > this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject); > {code} > If the {{Subject}} given had a keytab, then the UGI instance will have the > {{isKeytab}} set to TRUE. > It sets the UGI instance as it would have a keytab because the Subject has a > keytab. This has 2 problems: > First, it does not set the keytab file (and this, having the {{isKeytab}} set > to TRUE and the {{keytabFile}} set to NULL) is what triggers the > {{IOException}} in the method {{reloginFromKeytab()}}. > Second (and even if the first problem is fixed, this still is a problem), it > assumes that because the subject has a keytab it is up to UGI to do the > relogin using the keytab. This is incorrect if the UGI was created using the > {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the > Subject is not the UGI, but the caller, so the caller is responsible for > renewing the Kerberos tickets and the UGI should not try to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
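The ownership rule the description lays out — a UGI built from a caller-supplied Subject should never attempt relogin, even when that Subject carries a keytab — can be reduced to a small decision sketch. The names below (Ugi, externalSubject, shouldRelogin) are hypothetical illustrations, not the real UserGroupInformation fields.

```java
// Sketch of the relogin-ownership rule from HADOOP-13558: relogin only when
// we both hold a keytab and own the login; a caller-owned Subject means the
// caller renews. Names are illustrative, not the real UGI code.
public class ExternalSubjectDemo {

    static final class Ugi {
        final boolean hasKeytab;
        final boolean externalSubject; // true => caller owns credential renewal

        Ugi(boolean hasKeytab, boolean externalSubject) {
            this.hasKeytab = hasKeytab;
            this.externalSubject = externalSubject;
        }

        boolean shouldRelogin() {
            // Keytab presence alone is not enough; ownership matters.
            return hasKeytab && !externalSubject;
        }
    }

    public static void main(String[] args) {
        Ugi fromKeytabLogin = new Ugi(true, false);   // loginUserFromKeytab path
        Ugi fromCallerSubject = new Ugi(true, true);  // loginUserFromSubject path
        System.out.println(fromKeytabLogin.shouldRelogin());
        System.out.println(fromCallerSubject.shouldRelogin());
    }
}
```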
[jira] [Updated] (HADOOP-13556) Change Configuration.getPropsWithPrefix to use getProps instead of iterator
[ https://issues.apache.org/jira/browse/HADOOP-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Larry McCay updated HADOOP-13556: - Status: Patch Available (was: Open) Trying to trigger a precommit jenkins job by resubmitting the patch. > Change Configuration.getPropsWithPrefix to use getProps instead of iterator > --- > > Key: HADOOP-13556 > URL: https://issues.apache.org/jira/browse/HADOOP-13556 > Project: Hadoop Common > Issue Type: Bug >Reporter: Larry McCay >Assignee: Larry McCay > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-13556-001.patch, HADOOP-13556-002.patch > > > Current implementation of getPropsWithPrefix uses the > Configuration.iterator() method. This method is not threadsafe. > This patch will reimplement the gathering of properties that begin with a > prefix by using the safe getProps() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13556) Change Configuration.getPropsWithPrefix to use getProps instead of iterator
[ https://issues.apache.org/jira/browse/HADOOP-13556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Larry McCay updated HADOOP-13556: - Status: Open (was: Patch Available) > Change Configuration.getPropsWithPrefix to use getProps instead of iterator > --- > > Key: HADOOP-13556 > URL: https://issues.apache.org/jira/browse/HADOOP-13556 > Project: Hadoop Common > Issue Type: Bug >Reporter: Larry McCay >Assignee: Larry McCay > Fix For: 2.8.0, 3.0.0-alpha1 > > Attachments: HADOOP-13556-001.patch, HADOOP-13556-002.patch > > > Current implementation of getPropsWithPrefix uses the > Configuration.iterator() method. This method is not threadsafe. > This patch will reimplement the gathering of properties that begin with a > prefix by using the safe getProps() method. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket
[ https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451663#comment-15451663 ] Steve Loughran commented on HADOOP-13558: - I deny all understanding of how Kerberos works, except for a deep fear of the UGI class. Maybe [~lmc...@apache.org] could look at it from the perspective of somebody who understands this stuff. # I don't understand why the current/new code reuses the {{realuser}} field. Unless the act of creating the UGI has side effects, the first assignment is a no-op. If it is done for side effects, comments should declare this and a separate field should be used; now is the time to do this. # Test-wise, before the patch to UGI goes in: did the new test case fail? That's a good sign that the test can recreate the problem and show it is fixed. # If possible, it'd be good to extend KDiag with more info here. # This really ought to go through a full build and test run against a kerberized cluster. For example: can a version of Hadoop built with this patch authenticate with HDFS using keytabs as well as tickets? Volunteers? > UserGroupInformation created from a Subject incorrectly tries to renew the > Kerberos ticket > -- > > Key: HADOOP-13558 > URL: https://issues.apache.org/jira/browse/HADOOP-13558 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2 >Reporter: Alejandro Abdelnur > Attachments: HADOOP-13558.01.patch > > > The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions > and if they are met it invokes the {{reloginFromKeytab()}}. The > {{reloginFromKeytab()}} method then fails with an {{IOException}} > "loginUserFromKeyTab must be done first" because there is no keytab > associated with the UGI. > The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab > ({{isKeytab}} UGI instance variable) associated with the UGI, if there is one > it triggers a call to {{reloginFromKeytab()}}. 
The problem is that the > {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned > {{IOException}}. > The root of the problem seems to be when creating a UGI via the > {{UGI.loginUserFromSubject(Subject)}} method, this method uses the > {{UserGroupInformation(Subject)}} constructor, and this constructor does the > following to determine if there is a keytab or not. > {code} > this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject); > {code} > If the {{Subject}} given had a keytab, then the UGI instance will have the > {{isKeytab}} set to TRUE. > It sets the UGI instance as it would have a keytab because the Subject has a > keytab. This has 2 problems: > First, it does not set the keytab file (and this, having the {{isKeytab}} set > to TRUE and the {{keytabFile}} set to NULL) is what triggers the > {{IOException}} in the method {{reloginFromKeytab()}}. > Second (and even if the first problem is fixed, this still is a problem), it > assumes that because the subject has a keytab it is up to UGI to do the > relogin using the keytab. This is incorrect if the UGI was created using the > {{UGI.loginUserFromSubject(Subject)}} method. In such case, the owner of the > Subject is not the UGI, but the caller, so the caller is responsible for > renewing the Kerberos tickets and the UGI should not try to do so. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket
[ https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451387#comment-15451387 ] Hadoop QA commented on HADOOP-13558: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 51s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 148 unchanged - 0 fixed = 149 total (was 148) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 40s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.security.ssl.TestSSLFactory | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13558 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826335/HADOOP-13558.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux d65236200199 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 20ae1fa | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10425/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10425/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10425/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10425/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > UserGroupInformation created from a Subject incorrectly tries to renew the > Kerberos ticket >
[jira] [Commented] (HADOOP-13365) Convert _OPTS to arrays
[ https://issues.apache.org/jira/browse/HADOOP-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451330#comment-15451330 ] Hadoop QA commented on HADOOP-13365: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 55s{color} | {color:green} HADOOP-13341 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 30s{color} | {color:green} HADOOP-13341 passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 29s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 12s{color} | {color:red} The patch generated 19 new + 76 unchanged - 0 fixed = 95 total (was 76) {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 8s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. 
Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 0s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 48s{color} | {color:green} hadoop-yarn in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13365 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12826334/HADOOP-13365-HADOOP-13341.00.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs | | uname | Linux a5ea1c861714 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-13341 / af9f0e7 | | shellcheck | v0.4.4 | | shellcheck | https://builds.apache.org/job/PreCommit-HADOOP-Build/10424/artifact/patchprocess/diff-patch-shellcheck.txt | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/10424/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10424/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10424/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Convert _OPTS to arrays > --- > > Key: HADOOP-13365 > URL: https://issues.apache.org/jira/browse/HADOOP-13365 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-13365-HADOOP-13341.00.patch > > > While we are mucking with all of the _OPTS variables, this is a good time to > convert them to arrays so that filesystems with spaces in them can be used. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13365) Convert _OPTS to arrays
[ https://issues.apache.org/jira/browse/HADOOP-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451270#comment-15451270 ] Allen Wittenauer edited comment on HADOOP-13365 at 8/31/16 6:22 AM: -00: * first pass There's a lot happening here, so let's go through it: * adding some helper routines in hadoop-functions to: ** convert strings to arrays if the array doesn't already exist ** add to arrays based upon a key to dedupe * convert almost all internal users of HADOOP_OPTS and xyz_OPTS to use the array form * update some pre-existing doc references * add several unit tests * rewrite existing unit tests to use the array form To do: * more/better docs * figure out what to do about catalina? * get HADOOP-13341 committed, since this code is several times larger without it * more testing was (Author: aw): -00: * first pass > Convert _OPTS to arrays > --- > > Key: HADOOP-13365 > URL: https://issues.apache.org/jira/browse/HADOOP-13365 > Project: Hadoop Common > Issue Type: Improvement > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer > Attachments: HADOOP-13365-HADOOP-13341.00.patch > > > While we are mucking with all of the _OPTS variables, this is a good time to > convert them to arrays so that filesystems with spaces in them can be used. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket
[ https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HADOOP-13558:
-------------------------------
    Status: Patch Available  (was: Open)

> UserGroupInformation created from a Subject incorrectly tries to renew the
> Kerberos ticket
> --------------------------------------------------------------------------
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 2.6.4, 2.7.2, 3.0.0-alpha2
> Reporter: Alejandro Abdelnur
> Attachments: HADOOP-13558.01.patch
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions
> and, if they are met, it invokes {{reloginFromKeytab()}}. The
> {{reloginFromKeytab()}} method then fails with an {{IOException}}
> "loginUserFromKeyTab must be done first" because there is no keytab
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab
> ({{isKeytab}} UGI instance variable) associated with the UGI; if there is one,
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned
> {{IOException}}.
> The root of the problem seems to be that when creating a UGI via the
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the
> following to determine if there is a keytab or not.
> {code}
> this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have
> {{isKeytab}} set to TRUE.
> In other words, it treats the UGI instance as if it had a keytab because the
> Subject has a keytab. This has two problems:
> First, it does not set the keytab file; thus, having {{isKeytab}} set
> to TRUE and {{keytabFile}} set to NULL is what triggers the
> {{IOException}} in the {{reloginFromKeytab()}} method.
> Second (and even if the first problem is fixed, this still is a problem), it
> assumes that because the subject has a keytab it is up to the UGI to do the
> relogin using the keytab. This is incorrect if the UGI was created using the
> {{UGI.loginUserFromSubject(Subject)}} method. In such a case, the owner of the
> Subject is not the UGI but the caller, so the caller is responsible for
> renewing the Kerberos tickets and the UGI should not try to do so.
[jira] [Updated] (HADOOP-13558) UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket
[ https://issues.apache.org/jira/browse/HADOOP-13558?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiao Chen updated HADOOP-13558:
-------------------------------
    Attachment: HADOOP-13558.01.patch

Thanks again Tucu. I think this makes sense for the case described, but I lack the knowledge to verify that {{isKeytab == false}} is expected in all other cases. I'm attaching a patch which minimally sets this to false in {{loginUserFromSubject}}, to trigger a pre-commit. (I imagine coverage of these cases isn't great though...) I'm also not sure what the expected return values are for helper functions such as {{hasKerberosCredentials}}, {{isFromKeytab}} and {{isLoginKeytabBased}}. Could you review? Also would love to get comments from [~daryn] and [~ste...@apache.org]. Thanks in advance.

> UserGroupInformation created from a Subject incorrectly tries to renew the
> Kerberos ticket
> --------------------------------------------------------------------------
>
> Key: HADOOP-13558
> URL: https://issues.apache.org/jira/browse/HADOOP-13558
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 2.7.2, 2.6.4, 3.0.0-alpha2
> Reporter: Alejandro Abdelnur
> Attachments: HADOOP-13558.01.patch
>
> The UGI {{checkTGTAndReloginFromKeytab()}} method checks certain conditions
> and, if they are met, it invokes {{reloginFromKeytab()}}. The
> {{reloginFromKeytab()}} method then fails with an {{IOException}}
> "loginUserFromKeyTab must be done first" because there is no keytab
> associated with the UGI.
> The {{checkTGTAndReloginFromKeytab()}} method checks if there is a keytab
> ({{isKeytab}} UGI instance variable) associated with the UGI; if there is one,
> it triggers a call to {{reloginFromKeytab()}}. The problem is that the
> {{keytabFile}} UGI instance variable is NULL, and that triggers the mentioned
> {{IOException}}.
> The root of the problem seems to be that when creating a UGI via the
> {{UGI.loginUserFromSubject(Subject)}} method, this method uses the
> {{UserGroupInformation(Subject)}} constructor, and this constructor does the
> following to determine if there is a keytab or not.
> {code}
> this.isKeytab = KerberosUtil.hasKerberosKeyTab(subject);
> {code}
> If the {{Subject}} given had a keytab, then the UGI instance will have
> {{isKeytab}} set to TRUE.
> In other words, it treats the UGI instance as if it had a keytab because the
> Subject has a keytab. This has two problems:
> First, it does not set the keytab file; thus, having {{isKeytab}} set
> to TRUE and {{keytabFile}} set to NULL is what triggers the
> {{IOException}} in the {{reloginFromKeytab()}} method.
> Second (and even if the first problem is fixed, this still is a problem), it
> assumes that because the subject has a keytab it is up to the UGI to do the
> relogin using the keytab. This is incorrect if the UGI was created using the
> {{UGI.loginUserFromSubject(Subject)}} method. In such a case, the owner of the
> Subject is not the UGI but the caller, so the caller is responsible for
> renewing the Kerberos tickets and the UGI should not try to do so.
[jira] [Commented] (HADOOP-13341) Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
[ https://issues.apache.org/jira/browse/HADOOP-13341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15451277#comment-15451277 ]

Allen Wittenauer commented on HADOOP-13341:
-------------------------------------------

FWIW: this is basically down to documentation now. yay!

> Deprecate HADOOP_SERVERNAME_OPTS; replace with (command)_(subcommand)_OPTS
> --------------------------------------------------------------------------
>
> Key: HADOOP-13341
> URL: https://issues.apache.org/jira/browse/HADOOP-13341
> Project: Hadoop Common
> Issue Type: Improvement
> Components: scripts
> Affects Versions: 3.0.0-alpha1
> Reporter: Allen Wittenauer
> Assignee: Allen Wittenauer
>
> Big features like YARN-2928 demonstrate that even senior-level Hadoop
> developers forget that daemons need a custom _OPTS env var. We can replace
> all of the custom vars with generic handling, just like we do for the username
> check.
> For example, with generic handling in place:
> || Old Var || New Var ||
> | HADOOP_NAMENODE_OPTS | HDFS_NAMENODE_OPTS |
> | YARN_RESOURCEMANAGER_OPTS | YARN_RESOURCEMANAGER_OPTS |
> | n/a | YARN_TIMELINEREADER_OPTS |
> | n/a | HADOOP_DISTCP_OPTS |
> | n/a | MAPRED_DISTCP_OPTS |
> | HADOOP_DN_SECURE_EXTRA_OPTS | HDFS_DATANODE_SECURE_EXTRA_OPTS |
> | HADOOP_NFS3_SECURE_EXTRA_OPTS | HDFS_NFS3_SECURE_EXTRA_OPTS |
> | HADOOP_JOB_HISTORYSERVER_OPTS | MAPRED_HISTORYSERVER_OPTS |
> This makes it:
> a) consistent across the entire project
> b) consistent for every subcommand
> c) eliminates almost all of the custom appending in the case statements
> It's worth pointing out that subcommands like distcp, which sometimes need a
> higher-than-normal client-side heap size or custom options, are a huge win.
> Combined with .hadooprc and/or dynamic subcommands, it means users can easily
> do customizations based upon their needs without a lot of weirdo shell
> aliasing or one-line shell scripts off to the side.
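The generic handling implied by the table above can be sketched with bash variable indirection; `hadoop_get_subcommand_opts` is a hypothetical name for illustration, not the real hadoop-functions.sh entry point:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of generic (command)_(subcommand)_OPTS resolution;
# the real hadoop-functions.sh implementation may differ.
hadoop_get_subcommand_opts() {
  local command=$1 subcommand=$2
  # e.g. "hdfs namenode" -> HDFS_NAMENODE_OPTS
  local varname
  varname="$(echo "${command}_${subcommand}_OPTS" | tr '[:lower:]' '[:upper:]')"
  # Indirect expansion reads the computed variable; empty if unset.
  printf '%s' "${!varname:-}"
}

# Any daemon or client subcommand now gets an _OPTS var for free,
# with no per-daemon case-statement appending:
HDFS_NAMENODE_OPTS="-Xmx4g"
MAPRED_DISTCP_OPTS="-Xmx2g"
nn_opts="$(hadoop_get_subcommand_opts hdfs namenode)"
distcp_opts="$(hadoop_get_subcommand_opts mapred distcp)"
```

One lookup rule covers every row of the table, which is what makes the scheme consistent across the project and across subcommands.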
[jira] [Updated] (HADOOP-13563) hadoop_subcommand_opts should print name not actual content during debug
[ https://issues.apache.org/jira/browse/HADOOP-13563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-13563:
--------------------------------------
    Resolution: Fixed
    Fix Version/s: HADOOP-13341
    Status: Resolved  (was: Patch Available)

> hadoop_subcommand_opts should print name not actual content during debug
> -------------------------------------------------------------------------
>
> Key: HADOOP-13563
> URL: https://issues.apache.org/jira/browse/HADOOP-13563
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: scripts
> Reporter: Allen Wittenauer
> Assignee: Allen Wittenauer
> Fix For: HADOOP-13341
>
> Attachments: HADOOP-13563-HADOOP-13341.00.patch
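The change resolved above amounts to logging the variable's name rather than expanding its (potentially very long) contents. A minimal sketch; `hadoop_debug` here is a simplified stand-in for the real helper, which only prints when shell debugging is enabled:

```shell
#!/usr/bin/env bash
# Simplified stand-in for the real hadoop_debug helper.
HADOOP_SHELL_SCRIPT_DEBUG=true
hadoop_debug() {
  if [[ -n "${HADOOP_SHELL_SCRIPT_DEBUG:-}" ]]; then
    echo "DEBUG: $*" 1>&2
  fi
}

varname="HDFS_NAMENODE_OPTS"
HDFS_NAMENODE_OPTS="-Xmx4g -Dhadoop.security.logger=INFO,RFAS"  # imagine many flags

# Before the fix (noisy): hadoop_debug "Appending ${!varname} onto HADOOP_OPTS"
# After the fix: print just the variable name, keeping debug output readable.
msg="$(hadoop_debug "Appending ${varname} onto HADOOP_OPTS" 2>&1)"
```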
[jira] [Updated] (HADOOP-13365) Convert _OPTS to arrays
[ https://issues.apache.org/jira/browse/HADOOP-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-13365:
--------------------------------------
    Status: Patch Available  (was: Open)

> Convert _OPTS to arrays
> -----------------------
>
> Key: HADOOP-13365
> URL: https://issues.apache.org/jira/browse/HADOOP-13365
> Project: Hadoop Common
> Issue Type: Improvement
> Components: scripts
> Affects Versions: 3.0.0-alpha1
> Reporter: Allen Wittenauer
> Assignee: Allen Wittenauer
> Attachments: HADOOP-13365-HADOOP-13341.00.patch
>
> While we are mucking with all of the _OPTS variables, this is a good time to
> convert them to arrays so that filesystems with spaces in them can be used.
[jira] [Updated] (HADOOP-13365) Convert _OPTS to arrays
[ https://issues.apache.org/jira/browse/HADOOP-13365?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer updated HADOOP-13365:
--------------------------------------
    Attachment: HADOOP-13365-HADOOP-13341.00.patch

-00:
* first pass

> Convert _OPTS to arrays
> -----------------------
>
> Key: HADOOP-13365
> URL: https://issues.apache.org/jira/browse/HADOOP-13365
> Project: Hadoop Common
> Issue Type: Improvement
> Components: scripts
> Affects Versions: 3.0.0-alpha1
> Reporter: Allen Wittenauer
> Assignee: Allen Wittenauer
> Attachments: HADOOP-13365-HADOOP-13341.00.patch
>
> While we are mucking with all of the _OPTS variables, this is a good time to
> convert them to arrays so that filesystems with spaces in them can be used.