[jira] [Updated] (HADOOP-13707) If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed
[ https://issues.apache.org/jira/browse/HADOOP-13707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Yang updated HADOOP-13707: --- Resolution: Fixed Fix Version/s: 3.0.0-alpha2 2.9.0 2.8.0 Target Version/s: 2.8.0, 2.9.0, 3.0.0-alpha2 Status: Resolved (was: Patch Available) This is an intermediate step required to provide the ability to expand security options for securing the web interface. Look forward to HADOOP-13119. I just committed this. Thank you, Yuanbo. > If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot > be accessed > - > > Key: HADOOP-13707 > URL: https://issues.apache.org/jira/browse/HADOOP-13707 > Project: Hadoop Common > Issue Type: Bug >Reporter: Yuanbo Liu >Assignee: Yuanbo Liu > Labels: security > Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-13707.001.patch, HADOOP-13707.002.patch, > HADOOP-13707.003.patch, HADOOP-13707.004.patch > > > In {{HttpServer2#hasAdministratorAccess}}, it uses > {{hadoop.security.authorization}} to detect whether HTTP is authenticated. > That's not correct, because enabling Kerberos and enabling HTTP SPNEGO are two > separate steps. If Kerberos is enabled while HTTP SPNEGO is not, some links > cannot be accessed, such as "/logs", and it will return an error message as below: > {quote} > HTTP ERROR 403 > Problem accessing /logs/. Reason: > User dr.who is unauthorized to access this page. > {quote} > We should make sure {{HttpServletRequest#getAuthType}} is not null before we > invoke {{HttpServer2#hasAdministratorAccess}}. > {{getAuthType}} returns the authentication scheme of this request -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
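The guard described in the issue above can be sketched in plain Java. This is an illustrative sketch, not the committed HADOOP-13707 patch: the {{Request}} interface below is a hypothetical stand-in for {{javax.servlet.http.HttpServletRequest}}, and the ACL check is reduced to a single hard-coded user.

```java
// Sketch of the guard discussed in HADOOP-13707 (illustration only). The
// idea: only consult the administrator-access check when the servlet
// container actually authenticated the request, i.e. when getAuthType()
// returns a non-null scheme such as "SPNEGO". Otherwise the caller is the
// anonymous pseudo-user "dr.who", and pages like /logs would be rejected
// with HTTP 403 even though HTTP authentication was never configured.
interface Request { // hypothetical stand-in for HttpServletRequest
    String getAuthType();   // null when no HTTP authentication was performed
    String getRemoteUser();
}

public class AdminAccessGuard {
    // Stand-in for HttpServer2#hasAdministratorAccess; the real method
    // checks the request user against the configured admin ACLs.
    static boolean hasAdministratorAccess(Request req) {
        return "admin".equals(req.getRemoteUser());
    }

    static boolean isAccessAllowed(Request req) {
        // No auth scheme means HTTP SPNEGO is not configured: do not apply
        // the admin ACL to the anonymous pseudo-user.
        if (req.getAuthType() == null) {
            return true;
        }
        return hasAdministratorAccess(req);
    }
}
```

With this ordering, an unauthenticated "dr.who" request passes through, while an authenticated non-admin user is still rejected by the ACL check.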
[jira] [Commented] (HADOOP-13724) Fix a few typos in site markdown documents
[ https://issues.apache.org/jira/browse/HADOOP-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577177#comment-15577177 ] Ding Fei commented on HADOOP-13724: --- Patch updated! > Fix a few typos in site markdown documents > -- > > Key: HADOOP-13724 > URL: https://issues.apache.org/jira/browse/HADOOP-13724 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-4.patch, HADOOP-13708-5.patch > > > New JIRA for HADOOP-13708 since precommit bot is confused by the combination > of PRs and patches.
[jira] [Commented] (HADOOP-13724) Fix a few typos in site markdown documents
[ https://issues.apache.org/jira/browse/HADOOP-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577169#comment-15577169 ] Hadoop QA commented on HADOOP-13724: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13724 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833482/HADOOP-13708-5.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 4d242b110fef 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 391ce53 | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-archives U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10802/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix a few typos in site markdown documents > -- > > Key: HADOOP-13724 > URL: https://issues.apache.org/jira/browse/HADOOP-13724 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-4.patch, HADOOP-13708-5.patch > > > New JIRA for HADOOP-13708 since precommit bot is confused by the combination > of PRs and patches.
[jira] [Updated] (HADOOP-13724) Fix a few typos in site markdown documents
[ https://issues.apache.org/jira/browse/HADOOP-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ding Fei updated HADOOP-13724: -- Attachment: HADOOP-13708-5.patch > Fix a few typos in site markdown documents > -- > > Key: HADOOP-13724 > URL: https://issues.apache.org/jira/browse/HADOOP-13724 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-4.patch, HADOOP-13708-5.patch > > > New JIRA for HADOOP-13708 since precommit bot is confused by the combination > of PRs and patches.
[jira] [Commented] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577066#comment-15577066 ] Xiao Chen commented on HADOOP-13693: Test failures look unrelated. I plan to commit this on Tuesday PDT if no objections. > Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HADOOP-13693.01.patch, HADOOP-13693.02.patch > > > For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED > ErrorMsg:'Authentication required' message before the OK messages. This is > expected, and due to the spnego authentication sequence. (Notice method == > {{OPTIONS}}) > {noformat} > 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS > URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt > ErrorMsg:'Authentication required' > 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=0ms] > 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=10193ms] > {noformat} > However, admins/auditors see this and can easily get confused/alerted. We > should make it obvious this is benign.
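The kind of change discussed in this issue can be sketched as follows. This is a hedged illustration, not the actual HADOOP-13693 patch: the class and method names ({{KmsAuditFormat}}, {{unauthenticatedLine}}) are assumptions for the example, and the log format is simplified.

```java
// Illustrative sketch only (not the HADOOP-13693 patch): annotate the
// unauthenticated OPTIONS audit line so admins can tell the SPNEGO
// negotiation preamble apart from a genuine authentication failure.
public class KmsAuditFormat {
    // The first leg of the SPNEGO handshake is a request sent without
    // credentials that is rejected with "Authentication required" before
    // the client retries with a Negotiate token; for KMS operations this
    // shows up as method OPTIONS.
    static boolean isSpnegoPreamble(String method, String errorMsg) {
        return "OPTIONS".equals(method)
            && "Authentication required".equals(errorMsg);
    }

    static String unauthenticatedLine(String method, String url,
                                      String errorMsg) {
        String line = "UNAUTHENTICATED Method:" + method + " URL:" + url
            + " ErrorMsg:'" + errorMsg + "'";
        // Append a marker so the benign handshake line is self-explaining.
        return isSpnegoPreamble(method, errorMsg)
            ? line + " (expected during SPNEGO negotiation; benign)"
            : line;
    }
}
```

A real unauthenticated GET (credentials actually rejected) would stay unmarked, so the annotation never hides a true failure.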
[jira] [Commented] (HADOOP-13032) Refactor FileSystem$Statistics to use StorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13032?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577050#comment-15577050 ] Hadoop QA commented on HADOOP-13032: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 13 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 2m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 8m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 14s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 
25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s{color} | {color:green} root generated 0 new + 694 unchanged - 8 fixed = 694 total (was 702) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 51s{color} | {color:orange} root: The patch generated 79 new + 1558 unchanged - 41 fixed = 1637 total (was 1599) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 5m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 3m 7s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 10m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 48s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 60m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 34s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 32s{color} | {color:red} hadoop-mapreduce-client-core in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 54s{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 19s{color} | {color:red} hadoop-openstack in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 26s{color} | {color:red} hadoop-aws in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 20s{color} | {color:red} hadoop-azure in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 22s{color} | {color:red} hadoop-aliyun in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 21s{color} | {color:red} hadoop-azure-datalake in the patch failed.
[jira] [Commented] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577034#comment-15577034 ] Hadoop QA commented on HADOOP-13693: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} hadoop-hdfs-project: The patch generated 0 new + 94 unchanged - 3 fixed = 94 total (was 97) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 28s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 92m 27s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport | | | hadoop.hdfs.web.TestWebHDFS | | | hadoop.hdfs.server.namenode.TestDiskspaceQuotaUpdate | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13693 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833471/HDFS-11009.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 0d2e6fc90b55 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 76cc84e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10798/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10798/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs-client
[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9
[ https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577031#comment-15577031 ] Hadoop QA commented on HADOOP-10075: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 75 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 13m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 7m 44s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client hadoop-mapreduce-project {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common-project/hadoop-kms in trunk has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 9m 25s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 8s{color} | {color:orange} root: The patch generated 56 new + 2528 unchanged - 43 fixed = 2584 total (was 2571) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 15m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 8m 31s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 582 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 19s{color} | {color:red} The patch has 4277 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 28s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-client hadoop-mapreduce-project {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 20s{color} | {color:red} hadoop-maven-plugins generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 11m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 13s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 8s{color} | {color:green} hadoop-auth in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s{color} | {color:green} hadoop-auth-examples in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 27s{color} | {color:green}
[jira] [Commented] (HADOOP-8065) distcp should have an option to compress data while copying.
[ https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577027#comment-15577027 ] Hadoop QA commented on HADOOP-8065: --- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} 
the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 13s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 13 new + 61 unchanged - 1 fixed = 74 total (was 62) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 2s{color} | {color:green} hadoop-distcp in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-8065 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12801507/HADOOP-8065-trunk_2016-04-29-4.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 50d7a1f4c7ca 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 30bb197 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10801/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-distcp.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10801/testReport/ | | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10801/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > distcp should have an option to compress data while copying. > > > Key: HADOOP-8065 > URL: https://issues.apache.org/jira/browse/HADOOP-8065 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 0.20.2 >Reporter: Suresh
[jira] [Commented] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577020#comment-15577020 ] Andrew Wang commented on HADOOP-13693: -- +1 change looks good for 3.0. Let's wait a bit to commit though in case others have comments. > Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HADOOP-13693.01.patch, HADOOP-13693.02.patch > > > For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED > ErrorMsg:'Authentication required' message before the OK messages. This is > expected, and due to the spnego authentication sequence. (Notice method == > {{OPTIONS}}) > {noformat} > 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS > URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt > ErrorMsg:'Authentication required' > 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=0ms] > 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=10193ms] > {noformat} > However, admins/auditors see this and can easily get confused/alerted. We > should make it obvious this is benign.
[jira] [Commented] (HADOOP-13722) Code cleanup -- ViewFileSystem and InodeTree
[ https://issues.apache.org/jira/browse/HADOOP-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15577015#comment-15577015 ] Andrew Wang commented on HADOOP-13722: -- I bet the precommit bot is going to have a field day with the VFS code :) You can tell it predates checkstyle being turned on in precommit. I see a few other small nits we could address: * Unnecessary "static" on ResultKind; I think we should also have a newline after the closing brace of this enum. * Erratic indentation still on the getTargetFileSystem overrides. * TestViewFsConfig: indentation of the "new InodeTree" is off; maybe run the auto-formatter on this entire file. * ViewFileSystem#MountPoint: want to turn those slash comments into javadoc comments? > Code cleanup -- ViewFileSystem and InodeTree > > > Key: HADOOP-13722 > URL: https://issues.apache.org/jira/browse/HADOOP-13722 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Minor > Attachments: HADOOP-13722.01.patch > > > ViewFileSystem is the FileSystem for viewfs:// and it uses InodeTree to > manage the mount points. These files being very old, they don't quite adhere to > the styling and coding standards. Will do code cleanup of these files as part of > this jira. No new functionality or tests will be added as part of this > jira.
[jira] [Commented] (HADOOP-13708) Fix a few typos in site *.md documents
[ https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576984#comment-15576984 ] Allen Wittenauer commented on HADOOP-13708: --- Two notes: * Once precommit picks up that github is being used, it will not go back to doing a patch download. * The PR likely started to fail once it got a bunch more commits in it. (Looking at it now, there are 17 commits in it!) Github PRs should be rebased and have their own commits squashed into one for absolute best results. > Fix a few typos in site *.md documents > -- > > Key: HADOOP-13708 > URL: https://issues.apache.org/jira/browse/HADOOP-13708 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.8.0 >Reporter: Ding Fei >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-1.patch, HADOOP-13708-2.patch, > HADOOP-13708-3.patch, HADOOP-13708-4.patch, HADOOP-13708.patch > > > Fix several typos in site *.md documents. > Touched documents listed: > * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md > * > hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md > * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md > * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: 
common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13722) Code cleanup -- ViewFileSystem and InodeTree
[ https://issues.apache.org/jira/browse/HADOOP-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576980#comment-15576980 ] Hadoop QA commented on HADOOP-13722: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 9s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 80 unchanged - 22 fixed = 80 total (was 102) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 15s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13722 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833473/HADOOP-13722.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c98f5df27f75 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 76cc84e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10799/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10799/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Code cleanup -- ViewFileSystem and InodeTree > > > Key: HADOOP-13722 > URL: https://issues.apache.org/jira/browse/HADOOP-13722 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Minor > Attachments: HADOOP-13722.01.patch > > > ViewFileSystem is the FileSystem for viewfs:// and
[jira] [Commented] (HADOOP-13724) Fix a few typos in site markdown documents
[ https://issues.apache.org/jira/browse/HADOOP-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576973#comment-15576973 ] Allen Wittenauer commented on HADOOP-13724: --- bq. precommit bot is confused by the combination of PRs and patches As soon as github is invoked, it won't go back. Contributors need to pick one and stick with it. > Fix a few typos in site markdown documents > -- > > Key: HADOOP-13724 > URL: https://issues.apache.org/jira/browse/HADOOP-13724 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-4.patch > > > New JIRA for HADOOP-13708 since precommit bot is confused by the combination > of PRs and patches. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576969#comment-15576969 ] Hadoop QA commented on HADOOP-13693: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common-project/hadoop-kms in trunk has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 41s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13693 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833475/HADOOP-13693.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 74244fa3276f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 76cc84e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10800/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10800/testReport/ | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10800/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 >
[jira] [Commented] (HADOOP-8065) distcp should have an option to compress data while copying.
[ https://issues.apache.org/jira/browse/HADOOP-8065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576966#comment-15576966 ] Yongjun Zhang commented on HADOOP-8065: --- Hi [~snayakm], Thanks for your work here and thanks [~raviprak] for the review so far. I quickly browsed the patch, and have a couple of comments: * {{mapreduce.output.fileoutputformat.compress}} and {{mapreduce.output.fileoutputformat.compress.codec}} are defined in FileOutputFormat.java; we should use the constants defined there in the multiple places this patch touches. * May I know what kind of tests you have done for the patch? Thanks. > distcp should have an option to compress data while copying. > > > Key: HADOOP-8065 > URL: https://issues.apache.org/jira/browse/HADOOP-8065 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 0.20.2 >Reporter: Suresh Antony >Assignee: Suraj Nayak >Priority: Minor > Labels: distcp > Fix For: 0.20.2 > > Attachments: HADOOP-8065-trunk_2015-11-03.patch, > HADOOP-8065-trunk_2015-11-04.patch, HADOOP-8065-trunk_2016-04-29-4.patch, > patch.distcp.2012-02-10 > > > We would like to compress the data while transferring it from our source system to > the target system. One way to do this is to write a map/reduce job to compress > it before/after being transferred. This looks inefficient. > Since distcp is already reading and writing the data, it would be better if it could > accomplish the compression while doing so. > The flip side of this is that the distcp -update option cannot check the file size > before copying data. It can only check for the existence of the file. > So I propose that if the -compress option is given, the file size is not checked. > Also, when we copy a file, an appropriate extension needs to be added to the file > depending on the compression type. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
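The first review comment above can be sketched as follows. The key values are the ones FileOutputFormat defines as its COMPRESS and COMPRESS_CODEC constants; they are mirrored locally here as an assumption-free way to let the sketch compile without the Hadoop jars on the classpath:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the review point: reference configuration keys through named
// constants (FileOutputFormat.COMPRESS / COMPRESS_CODEC in Hadoop) rather
// than repeating the literal strings at every call site the patch touches.
public class CompressKeysSketch {
  // Values as defined by o.a.h.mapreduce.lib.output.FileOutputFormat.
  static final String COMPRESS = "mapreduce.output.fileoutputformat.compress";
  static final String COMPRESS_CODEC =
      "mapreduce.output.fileoutputformat.compress.codec";

  public static void main(String[] args) {
    // Stand-in for a Hadoop Configuration object.
    Map<String, String> conf = new HashMap<>();
    // Instead of conf.put("mapreduce.output.fileoutputformat.compress", ...):
    conf.put(COMPRESS, "true");
    conf.put(COMPRESS_CODEC, "org.apache.hadoop.io.compress.GzipCodec");
    System.out.println(conf.get(COMPRESS));
  }
}
```

A typo in a literal key silently produces an unused setting, while a typo in a constant name fails at compile time, which is the point of the suggestion.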
[jira] [Updated] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13693: --- Attachment: HADOOP-13693.02.patch Oops, there you go... Thanks [~xyao] for the quick response! > Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HADOOP-13693.01.patch, HADOOP-13693.02.patch > > > For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED > ErrorMsg:'Authentication required' message before the OK messages. This is > expected, and due to the spnego authentication sequence. (Notice method == > {{OPTIONS}}) > {noformat} > 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS > URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt > ErrorMsg:'Authentication required' > 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=0ms] > 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=10193ms] > {noformat} > However, admins/auditors see this and can easily get confused/alerted. We > should make it obvious this is benign. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13693: --- Attachment: (was: HDFS-11009.02.patch) > Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HADOOP-13693.01.patch > > > For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED > ErrorMsg:'Authentication required' message before the OK messages. This is > expected, and due to the spnego authentication sequence. (Notice method == > {{OPTIONS}}) > {noformat} > 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS > URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt > ErrorMsg:'Authentication required' > 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=0ms] > 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=10193ms] > {noformat} > However, admins/auditors see this and can easily get confused/alerted. We > should make it obvious this is benign. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage
[ https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576900#comment-15576900 ] Jing Zhao commented on HADOOP-13546: Yes, I think we can backport it to 2.7 > Override equals and hashCode to avoid connection leakage > > > Key: HADOOP-13546 > URL: https://issues.apache.org/jira/browse/HADOOP-13546 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 2.8.0 > > Attachments: HADOOP-13546-HADOOP-13436.000.patch, > HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, > HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch, > HADOOP-13546-HADOOP-13436.005.patch, HADOOP-13546-HADOOP-13436.006.patch, > HADOOP-13546-HADOOP-13436.007.patch > > > Override #equals and #hashcode to ensure multiple instances are equivalent. > They eventually > share the same RPC connection given the other arguments of constructing > ConnectionId are > the same. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
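The fix this ticket describes -- making ConnectionId instances built from the same parameters compare equal so a connection cache reuses one RPC connection instead of leaking new ones -- can be sketched with a simplified stand-in class (not the actual org.apache.hadoop.ipc.Client.ConnectionId, whose fields are assumed here):

```java
import java.util.Objects;

// Hypothetical simplified ConnectionId: two instances constructed from the
// same (address, protocol, ticket) tuple must be equal, so that a
// HashMap-keyed connection cache returns the existing RPC connection
// rather than opening (and leaking) a new one.
public class ConnectionId {
  private final String address;
  private final String protocol;
  private final String ticket;

  public ConnectionId(String address, String protocol, String ticket) {
    this.address = address;
    this.protocol = protocol;
    this.ticket = ticket;
  }

  @Override
  public boolean equals(Object o) {
    if (this == o) return true;
    if (!(o instanceof ConnectionId)) return false;
    ConnectionId other = (ConnectionId) o;
    return Objects.equals(address, other.address)
        && Objects.equals(protocol, other.protocol)
        && Objects.equals(ticket, other.ticket);
  }

  @Override
  public int hashCode() {
    // Must be consistent with equals(), or cache lookups still miss.
    return Objects.hash(address, protocol, ticket);
  }
}
```

Without both overrides, each lookup with a freshly constructed key falls back to Object identity, misses the cache, and creates another connection.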
[jira] [Commented] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576879#comment-15576879 ] Xiaoyu Yao commented on HADOOP-13693: - Thanks [~xiaochen] for working on this and [~andrew.wang] for the discussion. Removing UNAUTHENTICATED from the audit log sounds reasonable to me. The 2nd patch attached seems not to be for this ticket, though. Can you update it? > Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HADOOP-13693.01.patch, HDFS-11009.02.patch > > > For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED > ErrorMsg:'Authentication required' message before the OK messages. This is > expected, and due to the spnego authentication sequence. (Notice method == > {{OPTIONS}}) > {noformat} > 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS > URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt > ErrorMsg:'Authentication required' > 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=0ms] > 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=10193ms] > {noformat} > However, admins/auditors see this and can easily get confused/alerted. We > should make it obvious this is benign. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13546) Override equals and hashCode to avoid connection leakage
[ https://issues.apache.org/jira/browse/HADOOP-13546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576877#comment-15576877 ] Xiaobing Zhou commented on HADOOP-13546: Ping [~jingzhao] for input for it. > Override equals and hashCode to avoid connection leakage > > > Key: HADOOP-13546 > URL: https://issues.apache.org/jira/browse/HADOOP-13546 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc >Affects Versions: 2.7.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Fix For: 2.8.0 > > Attachments: HADOOP-13546-HADOOP-13436.000.patch, > HADOOP-13546-HADOOP-13436.001.patch, HADOOP-13546-HADOOP-13436.002.patch, > HADOOP-13546-HADOOP-13436.003.patch, HADOOP-13546-HADOOP-13436.004.patch, > HADOOP-13546-HADOOP-13436.005.patch, HADOOP-13546-HADOOP-13436.006.patch, > HADOOP-13546-HADOOP-13436.007.patch > > > Override #equals and #hashcode to ensure multiple instances are equivalent. > They eventually > share the same RPC connection given the other arguments of constructing > ConnectionId are > the same. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13722) Code cleanup -- ViewFileSystem and InodeTree
[ https://issues.apache.org/jira/browse/HADOOP-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HADOOP-13722: Status: Patch Available (was: Open) [~andrew.wang], please take a look at the patch whenever you find time. > Code cleanup -- ViewFileSystem and InodeTree > > > Key: HADOOP-13722 > URL: https://issues.apache.org/jira/browse/HADOOP-13722 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Minor > Attachments: HADOOP-13722.01.patch > > > ViewFileSystem is the FileSystem for viewfs:// and it uses InodeTree to > manage the mount points. These files being very old, don't quite adhere to the > styling and coding standards. Will do code cleanup of these files as part of > this jira. No new functionalities or tests will be added as part of this > jira. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13722) Code cleanup -- ViewFileSystem and InodeTree
[ https://issues.apache.org/jira/browse/HADOOP-13722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HADOOP-13722: Attachment: HADOOP-13722.01.patch Attaching v01 patch to address various code cleanliness issues in {{ViewFileSystem}} and {{InodeTree}} > Code cleanup -- ViewFileSystem and InodeTree > > > Key: HADOOP-13722 > URL: https://issues.apache.org/jira/browse/HADOOP-13722 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Minor > Attachments: HADOOP-13722.01.patch > > > ViewFileSystem is the FileSystem for viewfs:// and it uses InodeTree to > manage the mount points. These files being very old, don't quite adhere to the > styling and coding standards. Will do code cleanup of these files as part of > this jira. No new functionalities or tests will be added as part of this > jira. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13693: --- Attachment: HDFS-11009.02.patch Thanks [~andrew.wang] for the comment! That makes sense too, since that audit line isn't helpful in auditing KMS anyway... Attaching a patch 2 for this. Would love to hear [~asuresh] and [~xyao]'s opinions as well. > Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HADOOP-13693.01.patch, HDFS-11009.02.patch > > > For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED > ErrorMsg:'Authentication required' message before the OK messages. This is > expected, and due to the spnego authentication sequence. (Notice method == > {{OPTIONS}}) > {noformat} > 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS > URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt > ErrorMsg:'Authentication required' > 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=0ms] > 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=10193ms] > {noformat} > However, admins/auditors see this and can easily get confused/alerted. We > should make it obvious this is benign. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13717) Shell scripts call hadoop_verify_logdir even when command is not started as daemon
[ https://issues.apache.org/jira/browse/HADOOP-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576821#comment-15576821 ] Allen Wittenauer commented on HADOOP-13717: --- bq. Man, the balancer is a can of worms. Not a daemon, but runs awkwardly longer than other normal commands. This is the root of potential differences in expectations. Yup. Completely agree. :) bq. What do you propose we do? Minimally, we should make sure all the members of the balancer family have the same daemonization behavior, which is currently untrue. Yes, we probably should make sure they are treated the same in the hdfs script if they aren't. We should definitely avoid adding more sbin scripts. My hope is that in 4.x we can wipe out most of sbin and reduce our code footprint. bq. If part of the answer is that the balancer family are daemons and need a HADOOP_LOG_DIR and a log4j.properties, that's fine with me. Not a hard change on our side. I was thinking about what kind of interfaces/guarantees we provide 3rd parties. We make no promises about the content of log4j that I could find, so that's an easy one. But if a non-ASF jar gets added to the classpath via shellprofile, what would the expectations on HADOOP_LOG_DIR and -Dhadoop.log.dir be? The key might be hadoop-env.sh: {code} # Where (primarily) daemon log files are stored. # ${HADOOP_HOME}/logs by default. # Java property: hadoop.log.dir # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs {code} It's pretty clear that HADOOP_LOG_DIR is expected to point somewhere valid when we run as a daemon, so HADOOP_LOG_DIR needs to work for daemons. That leads me to conclude that we basically have three choices: 1. If there is a general agreement amongst the community that balancer and friends should run with HADOOP_SUBCMD_SUPPORTDAEMONIZATION=true, then HADOOP_LOG_DIR should be set to something writable (e.g., /tmp) when it is being executed. 2. 
If balancer should be run with HADOOP_SUBCMD_SUPPORTDAEMONIZATION=false, it now becomes a normal client command and sbin/start-balancer goes away. HADOOP_LOG_DIR, etc, now become irrelevant. 3. Some third state needs to get introduced and all of the accompanying support code added so that we can support it in all of the user-executable scripts. At this point, yes, I think the easiest path forward really is #1: HADOOP_LOG_DIR must point somewhere writable. All of the other options have a lot more pain involved, for us and/or the end users. > Shell scripts call hadoop_verify_logdir even when command is not started as > daemon > -- > > Key: HADOOP-13717 > URL: https://issues.apache.org/jira/browse/HADOOP-13717 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang > > Issue found when working with the HDFS balancer. > In {{hadoop_daemon_handler}}, it calls {{hadoop_verify_logdir}} even for the > "default" case which calls {{hadoop_start_daemon}}. {{daemon_outfile}} which > specifies the log location isn't even used here, since the command is being > started in the foreground. > I think we can push the {{hadoop_verify_logdir}} call down into > {{hadoop_start_daemon_wrapper}} instead, which does use the outfile. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13702) Add a new instrumented read-write lock
[ https://issues.apache.org/jira/browse/HADOOP-13702?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576815#comment-15576815 ] Xiao Chen commented on HADOOP-13702: Thanks [~jingcheng...@intel.com] for revving. The separate readlock / writelock looks good. I feel the various code duplication should still be improved. IMHO we could just have an instrumentation class - what type of lock it's instrumenting should be separated. Also, if I read [~chris.douglas]'s comment correctly: bq. The HDFS InstrumentedLock class contains nearly identical functionality. If Common will introduce other instrumented locks, then this should replace what's in HDFS. The proposal was, if we put this in common, we should replace the existing {{InstrumentedLock}} in hdfs, instead of copy-pasting. So we don't have to 'align' with that class in HDFS. :) Would also like to hear what others think. > Add a new instrumented read-write lock > -- > > Key: HADOOP-13702 > URL: https://issues.apache.org/jira/browse/HADOOP-13702 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Jingcheng Du >Assignee: Jingcheng Du > Attachments: HADOOP-13702-V6.patch, HDFS-10924-2.patch, > HDFS-10924-3.patch, HDFS-10924-4.patch, HDFS-10924-5.patch, HDFS-10924.patch > > > Add a new instrumented read-write lock in hadoop common, so that the > HDFS-9668 can use this to improve the locking in FsDatasetImpl -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
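The shared-instrumentation idea suggested above could look roughly like this; all class and method names here are illustrative assumptions for the sketch, not the API proposed in the patch:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch: one shared instrumentation object serves both the
// read and the write facet, so the timing logic exists once instead of
// being duplicated per lock type -- the duplication concern raised above.
public class InstrumentedRWLockSketch {

  /** Shared measuring logic: records how long any facet held its lock. */
  static class LockInstrumentation {
    final String name;
    final long warnThresholdMs;
    volatile long lastHoldMs = -1;

    LockInstrumentation(String name, long warnThresholdMs) {
      this.name = name;
      this.warnThresholdMs = warnThresholdMs;
    }

    void recordHold(long heldMs) {
      lastHoldMs = heldMs;
      if (heldMs >= warnThresholdMs) {
        System.err.println(name + " lock held for " + heldMs + " ms");
      }
    }
  }

  /** Thin facet delegating to an underlying Lock while timing the hold. */
  static class InstrumentedFacet {
    private final Lock delegate;
    private final LockInstrumentation instr;
    private final ThreadLocal<Long> acquiredAt = new ThreadLocal<>();

    InstrumentedFacet(Lock delegate, LockInstrumentation instr) {
      this.delegate = delegate;
      this.instr = instr;
    }

    void lock() {
      delegate.lock();
      acquiredAt.set(System.currentTimeMillis());
    }

    void unlock() {
      long heldMs = System.currentTimeMillis() - acquiredAt.get();
      delegate.unlock();
      instr.recordHold(heldMs);
    }
  }

  public static void main(String[] args) throws InterruptedException {
    ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    LockInstrumentation instr = new LockInstrumentation("dataset", 50);
    InstrumentedFacet read = new InstrumentedFacet(rw.readLock(), instr);
    InstrumentedFacet write = new InstrumentedFacet(rw.writeLock(), instr);

    write.lock();
    Thread.sleep(60); // simulate a slow critical section
    write.unlock();   // at least 60 ms >= 50 ms threshold, so it is reported

    read.lock();
    read.unlock();    // fast hold, typically well below the threshold
  }
}
```

Keeping the facets thin also mirrors the suggestion that an instrumented lock in common should replace, not duplicate, the existing InstrumentedLock in HDFS.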
[jira] [Commented] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576773#comment-15576773 ] Andrew Wang commented on HADOOP-13693: -- I think since this OPTIONS call is unrelated to any actual KMS-level operation, it doesn't belong in the audit log. Especially since this UNAUTHENTICATED is part of the happy path of authenticating with the KMS. We can consider moving this information to kms.log instead, but it seems spammy even there. My 2c is to just remove it. > Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HADOOP-13693.01.patch > > > For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED > ErrorMsg:'Authentication required' message before the OK messages. This is > expected, and due to the spnego authentication sequence. (Notice method == > {{OPTIONS}}) > {noformat} > 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS > URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt > ErrorMsg:'Authentication required' > 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=0ms] > 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=10193ms] > {noformat} > However, admins/auditors see this and can easily get confused/alerted. We > should make it obvious this is benign. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576733#comment-15576733 ] Hadoop QA commented on HADOOP-13709: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 28s{color} | {color:green} the 
patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 55 unchanged - 0 fixed = 56 total (was 55) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 50s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13709 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833453/HADOOP-13709.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7fb245007915 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 76cc84e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10796/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10796/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10796/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10796/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits >
[jira] [Commented] (HADOOP-13724) Fix a few typos in site markdown documents
[ https://issues.apache.org/jira/browse/HADOOP-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576696#comment-15576696 ] Andrew Wang commented on HADOOP-13724: -- There's one tab caught by precommit, few more nits looking through the diff: * Typo "directores" -> "directories" * Typo "infrastrucutre" -> "infrastructure" * The Preconditions headers are still being removed, let's not do that That's it though, thanks for pushing on this [~danix800]. I'm otherwise +1. > Fix a few typos in site markdown documents > -- > > Key: HADOOP-13724 > URL: https://issues.apache.org/jira/browse/HADOOP-13724 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-4.patch > > > New JIRA for HADOOP-13708 since precommit bot is confused by the combination > of PRs and patches. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13693: --- Description: For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED ErrorMsg:'Authentication required' message before the OK messages. This is expected, and due to the spnego authentication sequence. (Notice method == {{OPTIONS}}) {noformat} 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt ErrorMsg:'Authentication required' 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, accessCount=1, interval=0ms] 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, accessCount=1, interval=10193ms] {noformat} However, admins/auditors see this and can easily get confused/alerted. We should make it obvious this is benign, and help them focus on the real errors. was: For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED ErrorMsg:'Authentication required' message before the OK messages. This is expected, and due to the spnego authentication sequence. {noformat} 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt ErrorMsg:'Authentication required' 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, accessCount=1, interval=0ms] 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, accessCount=1, interval=10193ms] {noformat} However, admins/auditors see this and can easily get confused/alerted. We should make it obvious this is benign, and help them focus on the real errors. 
> Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HADOOP-13693.01.patch > > > For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED > ErrorMsg:'Authentication required' message before the OK messages. This is > expected, and due to the spnego authentication sequence. (Notice method == > {{OPTIONS}}) > {noformat} > 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS > URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt > ErrorMsg:'Authentication required' > 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=0ms] > 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=10193ms] > {noformat} > However, admins/auditors see this and can easily get confused/alerted. We > should make it obvious this is benign, and help them focus on the real errors. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13693) Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly
[ https://issues.apache.org/jira/browse/HADOOP-13693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13693: --- Description: For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED ErrorMsg:'Authentication required' message before the OK messages. This is expected, and due to the spnego authentication sequence. (Notice method == {{OPTIONS}}) {noformat} 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt ErrorMsg:'Authentication required' 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, accessCount=1, interval=0ms] 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, accessCount=1, interval=10193ms] {noformat} However, admins/auditors see this and can easily get confused/alerted. We should make it obvious this is benign. was: For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED ErrorMsg:'Authentication required' message before the OK messages. This is expected, and due to the spnego authentication sequence. (Notice method == {{OPTIONS}}) {noformat} 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt ErrorMsg:'Authentication required' 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, accessCount=1, interval=0ms] 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, accessCount=1, interval=10193ms] {noformat} However, admins/auditors see this and can easily get confused/alerted. We should make it obvious this is benign, and help them focus on the real errors. 
> Make the SPNEGO initialization OPTIONS message in kms audit log admin-friendly > -- > > Key: HADOOP-13693 > URL: https://issues.apache.org/jira/browse/HADOOP-13693 > Project: Hadoop Common > Issue Type: Improvement > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Minor > Attachments: HADOOP-13693.01.patch > > > For a successful kms operation, kms-audit.log shows an UNAUTHENTICATED > ErrorMsg:'Authentication required' message before the OK messages. This is > expected, and due to the spnego authentication sequence. (Notice method == > {{OPTIONS}}) > {noformat} > 2016-01-31 21:07:04,671 UNAUTHENTICATED RemoteHost:10.0.2.15 Method:OPTIONS > URL:https://quickstart.cloudera:16000/kms/v1/keyversion/ZJfn4lfNXxy068gqEmhxRCFljzoKEKDDR9ZJLO32vqq/_eek?eek_op=decrypt > ErrorMsg:'Authentication required' > 2016-01-31 21:07:04,911 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=0ms] > 2016-01-31 21:07:15,104 OK[op=DECRYPT_EEK, key=cloudera, user=cloudera, > accessCount=1, interval=10193ms] > {noformat} > However, admins/auditors see this and can easily get confused/alerted. We > should make it obvious this is benign. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13724) Fix a few typos in site markdown documents
[ https://issues.apache.org/jira/browse/HADOOP-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576636#comment-15576636 ] Hadoop QA commented on HADOOP-13724: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) with tabs. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 11m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13724 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833455/HADOOP-13708-4.patch | | Optional Tests | asflicense mvnsite | | uname | Linux dc4f5a351bf3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 76cc84e | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/10795/artifact/patchprocess/whitespace-tabs.txt | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-archives U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10795/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix a few typos in site markdown documents > -- > > Key: HADOOP-13724 > URL: https://issues.apache.org/jira/browse/HADOOP-13724 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-4.patch > > > New JIRA for HADOOP-13708 since precommit bot is confused by the combination > of PRs and patches. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13032) Refactor FileSystem$Statistics to use StorageStatistics
[ https://issues.apache.org/jira/browse/HADOOP-13032?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-13032: --- Attachment: HADOOP-13032.003.patch > Refactor FileSystem$Statistics to use StorageStatistics > --- > > Key: HADOOP-13032 > URL: https://issues.apache.org/jira/browse/HADOOP-13032 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Reporter: Mingliang Liu >Assignee: Mingliang Liu > Attachments: HADOOP-13032.000.patch, HADOOP-13032.001.patch, > HADOOP-13032.002.patch, HADOOP-13032.003.patch > > > [HADOOP-13065] added a new interface for retrieving FS and FC Statistics. > This jira is to track the effort of moving the {{Statistics}} class out of > {{FileSystem}}, and make it use that new interface. > We should keep the thread local implementation. Benefits are: > # they could be used in both {{FileContext}} and {{FileSystem}} > # unified stats data structure > # shorter source code > Please note this will be an backwards-incompatible change. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13708) Fix a few typos in site *.md documents
[ https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13708: - Resolution: Duplicate Target Version/s: 3.0.0-alpha1, 2.8.0 (was: 2.8.0, 3.0.0-alpha1) Status: Resolved (was: Patch Available) I put the v4 patch up at a new JIRA HADOOP-13724 to make things simple for the precommit bot, closing this one as dupe. > Fix a few typos in site *.md documents > -- > > Key: HADOOP-13708 > URL: https://issues.apache.org/jira/browse/HADOOP-13708 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.8.0 >Reporter: Ding Fei >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-1.patch, HADOOP-13708-2.patch, > HADOOP-13708-3.patch, HADOOP-13708-4.patch, HADOOP-13708.patch > > > Fix several typos in site *.md documents. > Touched documents listed: > * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md > * > hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md > * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md > * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13724) Fix a few typos in site markdown documents
[ https://issues.apache.org/jira/browse/HADOOP-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13724: - Target Version/s: 3.0.0-alpha1, 2.8.0 (was: 2.8.0, 3.0.0-alpha1) Status: Patch Available (was: Open) > Fix a few typos in site markdown documents > -- > > Key: HADOOP-13724 > URL: https://issues.apache.org/jira/browse/HADOOP-13724 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 3.0.0-alpha1, 2.8.0 >Reporter: Andrew Wang >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-4.patch > > > New JIRA for HADOOP-13708 since precommit bot is confused by the combination > of PRs and patches. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13724) Fix a few typos in site markdown documents
Andrew Wang created HADOOP-13724: Summary: Fix a few typos in site markdown documents Key: HADOOP-13724 URL: https://issues.apache.org/jira/browse/HADOOP-13724 Project: Hadoop Common Issue Type: Improvement Components: documentation Affects Versions: 3.0.0-alpha1, 2.8.0 Reporter: Andrew Wang Assignee: Ding Fei Priority: Minor Attachments: HADOOP-13708-4.patch New JIRA for HADOOP-13708 since precommit bot is confused by the combination of PRs and patches. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13724) Fix a few typos in site markdown documents
[ https://issues.apache.org/jira/browse/HADOOP-13724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13724: - Attachment: HADOOP-13708-4.patch Attaching [~danix800]'s patch from HADOOP-13708 to get a precommit run. > Fix a few typos in site markdown documents > -- > > Key: HADOOP-13724 > URL: https://issues.apache.org/jira/browse/HADOOP-13724 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.8.0, 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-4.patch > > > New JIRA for HADOOP-13708 since precommit bot is confused by the combination > of PRs and patches. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13708) Fix a few typos in site *.md documents
[ https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576588#comment-15576588 ] Andrew Wang commented on HADOOP-13708: -- Seems like the github PRs are confusing the precommit bot :( I triggered the build manually, hopefully it picks up patch v4. > Fix a few typos in site *.md documents > -- > > Key: HADOOP-13708 > URL: https://issues.apache.org/jira/browse/HADOOP-13708 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.8.0 >Reporter: Ding Fei >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-1.patch, HADOOP-13708-2.patch, > HADOOP-13708-3.patch, HADOOP-13708-4.patch, HADOOP-13708.patch > > > Fix several typos in site *.md documents. > Touched documents listed: > * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md > * > hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md > * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md > * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13708) Fix a few typos in site *.md documents
[ https://issues.apache.org/jira/browse/HADOOP-13708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576586#comment-15576586 ] Hadoop QA commented on HADOOP-13708: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 7s{color} | {color:red} HADOOP-13708 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HADOOP-13708 | | GITHUB PR | https://github.com/apache/hadoop/pull/140 | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10794/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fix a few typos in site *.md documents > -- > > Key: HADOOP-13708 > URL: https://issues.apache.org/jira/browse/HADOOP-13708 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Affects Versions: 2.8.0 >Reporter: Ding Fei >Assignee: Ding Fei >Priority: Minor > Attachments: HADOOP-13708-1.patch, HADOOP-13708-2.patch, > HADOOP-13708-3.patch, HADOOP-13708-4.patch, HADOOP-13708.patch > > > Fix several typos in site *.md documents. 
> Touched documents listed: > * hadoop-tools/hadoop-archives/src/site/markdown/HadoopArchives.md.vm > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/testing.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/notation.md > * > hadoop-common-project/hadoop-common/src/site/markdown/filesystem/introduction.md > * hadoop-common-project/hadoop-common/src/site/markdown/filesystem/model.md > * > hadoop-common-project/hadoop-common/src/site/markdown/InterfaceClassification.md > * hadoop-common-project/hadoop-common/src/site/markdown/ClusterSetup.md > * hadoop-common-project/hadoop-common/src/site/markdown/Compatibility.md -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13709) Clean up subprocesses spawned by Shell.java:runCommand when the shell process exits
[ https://issues.apache.org/jira/browse/HADOOP-13709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eric Badger updated HADOOP-13709: - Attachment: HADOOP-13709.002.patch Attaching new patch that uses a shutdown hook to kill the child process that was spawned by shell. I tested this on a local cluster on the localizer use case from YARN-5641. I replaced the {{untar}} process with a {{sleep 100}} process and confirmed that the {{sleep}} was killed immediately after the localizer. Before this patch, the localizer would shutdown without killing the {{sleep}} process. [~jlowe], [~daryn], please review > Clean up subprocesses spawned by Shell.java:runCommand when the shell process > exits > --- > > Key: HADOOP-13709 > URL: https://issues.apache.org/jira/browse/HADOOP-13709 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.2.0 >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HADOOP-13709.001.patch, HADOOP-13709.002.patch > > > The runCommand code in Shell.java can get into a situation where it will > ignore InterruptedExceptions and refuse to shutdown due to being in I/O > waiting for the return value of the subprocess that was spawned. We need to > allow for the subprocess to be interrupted and killed when the shell process > gets killed. Currently the JVM will shutdown and all of the subprocesses will > be orphaned and not killed. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
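The shutdown-hook approach described in the patch can be sketched roughly as follows — hypothetical class and method names, not the actual Shell.java change, just an illustration of registering a JVM shutdown hook that destroys the spawned child so it is not orphaned when the JVM exits:

```java
import java.io.IOException;

public class ShellShutdownHookSketch {

  // Start a subprocess and register a hook so that, when the JVM shuts
  // down (normally or via SIGTERM), the child is killed rather than
  // orphaned. Simplified: a real implementation would also deregister
  // the hook once the child exits normally.
  public static Process runWithCleanup(ProcessBuilder builder) throws IOException {
    final Process child = builder.start();
    Runtime.getRuntime().addShutdownHook(new Thread(() -> {
      if (child.isAlive()) {
        child.destroy(); // or destroyForcibly() if the child ignores SIGTERM
      }
    }));
    return child;
  }

  public static void main(String[] args) throws Exception {
    // Mirrors the test scenario above: a long-running "sleep" stand-in
    // for the localizer's untar subprocess.
    Process p = runWithCleanup(new ProcessBuilder("sleep", "100"));
    System.out.println("child alive: " + p.isAlive());
    p.destroy(); // tidy up in this demo; the hook covers abnormal exits
  }
}
```

Without such a hook, the child survives the parent JVM's death, which is exactly the orphaned-subprocess behavior the JIRA describes.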
[jira] [Commented] (HADOOP-13721) Remove stale method ViewFileSystem#getTrashCanLocation
[ https://issues.apache.org/jira/browse/HADOOP-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576519#comment-15576519 ] Hudson commented on HADOOP-13721: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10616 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10616/]) HADOOP-13721. Remove stale method ViewFileSystem#getTrashCanLocation. (wang: rev aee538be6c2ab324de4d7834cd3347959272de01) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/viewfs/ViewFileSystem.java > Remove stale method ViewFileSystem#getTrashCanLocation > -- > > Key: HADOOP-13721 > URL: https://issues.apache.org/jira/browse/HADOOP-13721 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 3.0.0-alpha2 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13721.01.patch > > > {{ViewFileSystem}} which extends {{FileSystem}} has a public method > {{getTrashCanLocation}} which is neither overridden nor used by anybody. > Looks like it existed when the file was created, and also I see the > implementation returning homeDirectory which might not be the expected one in > cases of {{expunge}}. So, inclined to remove this stale and potentially > dangerous method unless anyone has any concerns. > {code} > public Path getTrashCanLocation(final Path f) throws FileNotFoundException { > final InodeTree.ResolveResult res = > fsState.resolve(getUriPath(f), true); > return res.isInternalDir() ? null : > res.targetFileSystem.getHomeDirectory(); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13061) Refactor erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576501#comment-15576501 ] Andrew Wang commented on HADOOP-13061: -- If it's fine with Kai, it's fine with me. One question (which we can address in a follow-on), do we need any doc or core-default.xml updates to go along with this change? If so, a release note would also be nice. > Refactor erasure coders > --- > > Key: HADOOP-13061 > URL: https://issues.apache.org/jira/browse/HADOOP-13061 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Kai Sasaki > Labels: hdfs-ec-3.0-must-do > Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, > HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, > HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, > HADOOP-13061.09.patch, HADOOP-13061.10.patch, HADOOP-13061.11.patch, > HADOOP-13061.12.patch, HADOOP-13061.13.patch, HADOOP-13061.14.patch, > HADOOP-13061.15.patch, HADOOP-13061.16.patch, HADOOP-13061.17.patch, > HADOOP-13061.18.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13721) Remove stale method ViewFileSystem#getTrashCanLocation
[ https://issues.apache.org/jira/browse/HADOOP-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13721: - Component/s: viewfs > Remove stale method ViewFileSystem#getTrashCanLocation > -- > > Key: HADOOP-13721 > URL: https://issues.apache.org/jira/browse/HADOOP-13721 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 3.0.0-alpha2 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13721.01.patch > > > {{ViewFileSystem}} which extends {{FileSystem}} has a public method > {{getTrashCanLocation}} which is neither overridden nor used by anybody. > Looks like it existed when the file was created, and also I see the > implementation returning homeDirectory which might not be the expected one in > cases of {{expunge}}. So, inclined to remove this stale and potentially > dangerous method unless anyone has any concerns. > {code} > public Path getTrashCanLocation(final Path f) throws FileNotFoundException { > final InodeTree.ResolveResult<FileSystem> res = > fsState.resolve(getUriPath(f), true); > return res.isInternalDir() ? null : > res.targetFileSystem.getHomeDirectory(); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13721) Remove stale method ViewFileSystem#getTrashCanLocation
[ https://issues.apache.org/jira/browse/HADOOP-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13721: - Resolution: Fixed Hadoop Flags: Incompatible change Fix Version/s: 3.0.0-alpha2 Release Note: The unused method getTrashCanLocation has been removed. This method has long been superseded by FileSystem#getTrashRoot. Status: Resolved (was: Patch Available) Committed to trunk. Manoj, do you want to do another JIRA to mark this method as @Deprecated in branch-2? > Remove stale method ViewFileSystem#getTrashCanLocation > -- > > Key: HADOOP-13721 > URL: https://issues.apache.org/jira/browse/HADOOP-13721 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 3.0.0-alpha2 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Minor > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-13721.01.patch > > > {{ViewFileSystem}} which extends {{FileSystem}} has a public method > {{getTrashCanLocation}} which is neither overridden nor used by anybody. > Looks like it existed when the file was created, and also I see the > implementation returning homeDirectory which might not be the expected one in > cases of {{expunge}}. So, inclined to remove this stale and potentially > dangerous method unless anyone has any concerns. > {code} > public Path getTrashCanLocation(final Path f) throws FileNotFoundException { > final InodeTree.ResolveResult<FileSystem> res = > fsState.resolve(getUriPath(f), true); > return res.isInternalDir() ? null : > res.targetFileSystem.getHomeDirectory(); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13721) Remove stale method ViewFileSystem#getTrashCanLocation
[ https://issues.apache.org/jira/browse/HADOOP-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576482#comment-15576482 ] Andrew Wang commented on HADOOP-13721: -- Gotcha. Looking at the Shell code, it resolves the child filesystem before creating the Trash, and calls getTrashRoot correctly. Emptying is handled server-side by each namenode. LGTM. +1, will commit shortly. > Remove stale method ViewFileSystem#getTrashCanLocation > -- > > Key: HADOOP-13721 > URL: https://issues.apache.org/jira/browse/HADOOP-13721 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Minor > Attachments: HADOOP-13721.01.patch > > > {{ViewFileSystem}} which extends {{FileSystem}} has a public method > {{getTrashCanLocation}} which is neither overridden nor used by anybody. > Looks like it existed when the file was created, and also I see the > implementation returning homeDirectory which might not be the expected one in > cases of {{expunge}}. So, inclined to remove this stale and potentially > dangerous method unless anyone has any concerns. > {code} > public Path getTrashCanLocation(final Path f) throws FileNotFoundException { > final InodeTree.ResolveResult<FileSystem> res = > fsState.resolve(getUriPath(f), true); > return res.isInternalDir() ? null : > res.targetFileSystem.getHomeDirectory(); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13717) Shell scripts call hadoop_verify_logdir even when command is not started as daemon
[ https://issues.apache.org/jira/browse/HADOOP-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576443#comment-15576443 ] Andrew Wang commented on HADOOP-13717: -- Man, the balancer is a can of worms. Not a daemon, but runs awkwardly longer than other normal commands. This is the root of potential differences in expectations. What do you propose we do? Minimally, we should make sure all the members of the balancer family have the same daemonization behavior, which is currently untrue. If part of the answer is that the balancer family are daemons and need a HADOOP_LOG_DIR and a log4j.properties, that's fine with me. Not a hard change on our side. > Shell scripts call hadoop_verify_logdir even when command is not started as > daemon > -- > > Key: HADOOP-13717 > URL: https://issues.apache.org/jira/browse/HADOOP-13717 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang > > Issue found when working with the HDFS balancer. > In {{hadoop_daemon_handler}}, it calls {{hadoop_verify_logdir}} even for the > "default" case which calls {{hadoop_start_daemon}}. {{daemon_outfile}} which > specifies the log location isn't even used here, since the command is being > started in the foreground. > I think we can push the {{hadoop_verify_logdir}} call down into > {{hadoop_start_daemon_wrapper}} instead, which does use the outfile. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13717) Shell scripts call hadoop_verify_logdir even when command is not started as daemon
[ https://issues.apache.org/jira/browse/HADOOP-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576337#comment-15576337 ] Allen Wittenauer commented on HADOOP-13717: --- I think it's important to point out that this ... bq. For a bit more context, we have some code that starts the balancer in the foreground, without a log4j.properties file or setting $HADOOP_LOG_DIR. verify_logdir then checks the default location ($HADOOP_HOME/logs), which is not writable, and fails. ... is an edge case. Most users of hadoop almost certainly have a log4j.properties file and HADOOP_LOG_DIR is set somewhere writable at installation time. > Shell scripts call hadoop_verify_logdir even when command is not started as > daemon > -- > > Key: HADOOP-13717 > URL: https://issues.apache.org/jira/browse/HADOOP-13717 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang > > Issue found when working with the HDFS balancer. > In {{hadoop_daemon_handler}}, it calls {{hadoop_verify_logdir}} even for the > "default" case which calls {{hadoop_start_daemon}}. {{daemon_outfile}} which > specifies the log location isn't even used here, since the command is being > started in the foreground. > I think we can push the {{hadoop_verify_logdir}} call down into > {{hadoop_start_daemon_wrapper}} instead, which does use the outfile. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-10075) Update jetty dependency to version 9
[ https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated HADOOP-10075: --- Attachment: HADOOP-10075.009.patch The 009 patch: - Adds a constant for the "charset=utf-8" string. It doesn't do this in {{TestTimelineReaderWebServicesHBaseStorage}} because that module uses Hadoop Common 2.5, which doesn't have this new constant. I also wasn't able to upload the patch to ReviewBoard. > Update jetty dependency to version 9 > > > Key: HADOOP-10075 > URL: https://issues.apache.org/jira/browse/HADOOP-10075 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.2.0, 2.6.0 >Reporter: Robert Rati >Assignee: Robert Kanter > Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, > HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, > HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.009.patch, > HADOOP-10075.patch > > > Jetty6 is no longer maintained. Update the dependency to jetty9. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13717) Shell scripts call hadoop_verify_logdir even when command is not started as daemon
[ https://issues.apache.org/jira/browse/HADOOP-13717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576304#comment-15576304 ] Allen Wittenauer commented on HADOOP-13717: --- bq. it seems like if someone is not specifying the "--daemon" flag, then they don't care about daemon things like pid files and log dirs for stdout/stderr. They do, actually. The inconsistent behavior of the Hadoop daemons was a big sticking point amongst quite a few admins I had talked to. bq. The audit log is an interesting case, but I think app-specific logging should be checked in the app, not the shell scripts (which are generic). I'd love to see pid and log dir handling out of the scripts. It greatly over-complicates them. One thing to keep in mind is that doing it prior to Java launch means that we get extremely fast fail: there's no Java classpath work and no Java initialization costs. bq. Bigtop and CDH don't have balancer init scripts for instance. Sorry, I think I may have miscommunicated this point. start-balancer is geared towards manual usage, but it runs the balancer in the background and catches its I/O, as the balancer can run for very long times on large and/or extremely misbalanced clusters. It's not a daemon in the traditional sense. It really is a convenience script so that those who aren't familiar with bash don't have to remember how to catch stdout/stderr, or use disown or whatever. I'd be very surprised if there actually was an init script. It's fun to note that the start-balancer script only appears to be documented in the Balancer javadoc and the only place that Javadoc is really exposed is on Cloudera's website. ;) bq. I think there should be some generic fix for when "--daemon" isn't specified, because of user expectations. As stated above, user expectation is consistency. No consistency will mean we'll also need to remove the --daemon status capability since it will be unreliable.
> Shell scripts call hadoop_verify_logdir even when command is not started as > daemon > -- > > Key: HADOOP-13717 > URL: https://issues.apache.org/jira/browse/HADOOP-13717 > Project: Hadoop Common > Issue Type: Bug > Components: scripts >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang > > Issue found when working with the HDFS balancer. > In {{hadoop_daemon_handler}}, it calls {{hadoop_verify_logdir}} even for the > "default" case which calls {{hadoop_start_daemon}}. {{daemon_outfile}} which > specifies the log location isn't even used here, since the command is being > started in the foreground. > I think we can push the {{hadoop_verify_logdir}} call down into > {{hadoop_start_daemon_wrapper}} instead, which does use the outfile. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13720) Add more info to "token ... is expired" message
[ https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576241#comment-15576241 ] Hadoop QA commented on HADOOP-13720: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 34s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 49 unchanged - 1 fixed = 49 total (was 50) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 53s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 51m 10s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13720 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833425/HADOOP-13720.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f7ef47619f21 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 701c27a | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10792/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10792/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10792/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add more info to "token ... is expired" message > --- > > Key: HADOOP-13720 >
[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error
[ https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576197#comment-15576197 ] John Zhuge commented on HADOOP-7352: Thanks [~ste...@apache.org]. That'd be great. How do you expect to handle multiple types of exceptions? > FileSystem#listStatus should throw IOE upon access error > > > Key: HADOOP-7352 > URL: https://issues.apache.org/jira/browse/HADOOP-7352 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.0 >Reporter: Matt Foley >Assignee: John Zhuge > Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, > HADOOP-7352.003.patch, HADOOP-7352.004.patch, HADOOP-7352.005.patch > > > In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should > throw FileNotFoundException instead of returning null, when the target > directory did not exist. > However, in LocalFileSystem implementation today, FileSystem::listStatus > still may return null, when the target directory exists but does not grant > read permission. This causes NPE in many callers, for all the reasons cited > in HADOOP-6201 and HDFS-538. See HADOOP-7327 and its linked issues for > examples. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
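The null-return problem in the description above can be reproduced with plain {{java.io}}, where {{File.list()}} likewise returns null for an unreadable directory. The sketch below is illustrative only (not the HADOOP-7352 patch; {{listOrThrow}} is a hypothetical helper) and shows the fix's shape: map each failure mode to a distinct exception instead of returning null.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.nio.file.AccessDeniedException;

// Illustrative sketch (not the Hadoop patch): distinguish "missing" from
// "unreadable" instead of returning null, which pushes NPEs onto callers.
public class ListStatusSketch {

    public static String[] listOrThrow(File dir) throws IOException {
        if (!dir.exists()) {
            throw new FileNotFoundException(dir + " does not exist");
        }
        // File.list() returns null when the directory is unreadable
        // (or the path is not a directory at all).
        String[] names = dir.list();
        if (names == null) {
            throw new AccessDeniedException(dir.toString());
        }
        return names;
    }

    public static void main(String[] args) throws IOException {
        File tmp = new File(System.getProperty("java.io.tmpdir"));
        System.out.println("entries: " + listOrThrow(tmp).length);
        try {
            listOrThrow(new File(tmp, "no-such-dir"));
        } catch (FileNotFoundException e) {
            System.out.println("missing dir raised FileNotFoundException");
        }
    }
}
```

Throwing two different {{IOException}} subclasses is one answer to the "multiple types of exceptions" question: callers catch the broad {{IOException}}, while those that care can catch the specific subclass.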
[jira] [Commented] (HADOOP-13721) Remove stale method ViewFileSystem#getTrashCanLocation
[ https://issues.apache.org/jira/browse/HADOOP-13721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576172#comment-15576172 ] Manoj Govindassamy commented on HADOOP-13721: - {{Expunge}} works with {{ViewFileSystem}}. The way Expunge works is {code} @Override protected void processArguments(LinkedList args) throws IOException { FileSystem[] childFileSystems = FileSystem.get(getConf()).getChildFileSystems(); if (null != childFileSystems) { for (FileSystem fs : childFileSystems) { Trash trash = new Trash(fs, getConf()); trash.expunge(); trash.checkpoint(); } } else { ... } {code} Since {{ViewFileSystem}} implements getChildFileSystems() and exports all the mounted file systems, expunge is invoked on the all these mounted filesystems. {noformat} manoj@~/work/test/hadev-mg(master)*: hdfs dfs -ls / Found 4 items -r-xr-xr-x - manoj staff 0 2016-10-14 11:30 /nn0 -r-xr-xr-x - manoj staff 0 2016-10-14 11:30 /nn1 -r-xr-xr-x - manoj staff 0 2016-10-14 11:30 /nn2 -r-xr-xr-x - manoj staff 0 2016-10-14 11:30 /nn3 manoj@~/work/test/hadev-mg(master)*: hdfs dfs -ls /nn0 Found 1 items drwxr-xr-x - manoj supergroup 0 2016-10-14 11:27 /nn0/user manoj@~/work/test/hadev-mg(master)*: hdfs dfs -ls -R /nn0 drwxr-xr-x - manoj supergroup 0 2016-10-14 11:27 /nn0/user drwxr-xr-x - manoj supergroup 0 2016-10-14 11:27 /nn0/user/manoj manoj@~/work/test/hadev-mg(master)*: hdfs dfs -mkdir -p /nn0/delete/test1 manoj@~/work/test/hadev-mg(master)*: hdfs dfs -mkdir -p /nn0/delete/test2 manoj@~/work/test/hadev-mg(master)*: hdfs dfs -ls -R /nn0 drwxr-xr-x - manoj supergroup 0 2016-10-14 11:31 /nn0/delete drwxr-xr-x - manoj supergroup 0 2016-10-14 11:31 /nn0/delete/test1 drwxr-xr-x - manoj supergroup 0 2016-10-14 11:31 /nn0/delete/test2 drwxr-xr-x - manoj supergroup 0 2016-10-14 11:27 /nn0/user drwxr-xr-x - manoj supergroup 0 2016-10-14 11:27 /nn0/user/manoj manoj@~/work/test/hadev-mg(master)*: hdfs dfs -rm -r /nn0/delete/test1 manoj@~/work/test/hadev-mg(master)*: hdfs dfs -ls -R 
/nn0 drwxr-xr-x - manoj supergroup 0 2016-10-14 11:32 /nn0/delete drwxr-xr-x - manoj supergroup 0 2016-10-14 11:31 /nn0/delete/test2 drwxr-xr-x - manoj supergroup 0 2016-10-14 11:27 /nn0/user drwxr-xr-x - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj drwx-- - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj/.Trash drwx-- - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj/.Trash/Current drwx-- - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj/.Trash/Current/delete drwxr-xr-x - manoj supergroup 0 2016-10-14 11:31 /nn0/user/manoj/.Trash/Current/delete/test1 manoj@~/work/test/hadev-mg(master)*: hdfs dfs -rm -r -skipTrash /nn0/delete/test2 Deleted /nn0/delete/test2 manoj@~/work/test/hadev-mg(master)*: hdfs dfs -ls -R /nn0 drwxr-xr-x - manoj supergroup 0 2016-10-14 11:33 /nn0/delete drwxr-xr-x - manoj supergroup 0 2016-10-14 11:27 /nn0/user drwxr-xr-x - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj drwx-- - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj/.Trash drwx-- - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj/.Trash/Current drwx-- - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj/.Trash/Current/delete drwxr-xr-x - manoj supergroup 0 2016-10-14 11:31 /nn0/user/manoj/.Trash/Current/delete/test1 manoj@~/work/test/hadev-mg(master)*: hdfs dfs -expunge manoj@~/work/test/hadev-mg(master)*: hdfs dfs -ls -R /nn0 drwxr-xr-x - manoj supergroup 0 2016-10-14 11:33 /nn0/delete drwxr-xr-x - manoj supergroup 0 2016-10-14 11:27 /nn0/user drwxr-xr-x - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj drwx-- - manoj supergroup 0 2016-10-14 11:34 /nn0/user/manoj/.Trash drwx-- - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj/.Trash/161014113451 drwx-- - manoj supergroup 0 2016-10-14 11:32 /nn0/user/manoj/.Trash/161014113451/delete drwxr-xr-x - manoj supergroup 0 2016-10-14 11:31 /nn0/user/manoj/.Trash/161014113451/delete/test1 {noformat} > Remove stale method ViewFileSystem#getTrashCanLocation > -- > > Key: HADOOP-13721 > URL: 
https://issues.apache.org/jira/browse/HADOOP-13721 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 3.0.0-alpha2 >Reporter: Manoj Govindassamy >Assignee: Manoj Govindassamy >Priority: Minor > Attachments: HADOOP-13721.01.patch > > > {{ViewFileSystem}} which extends
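The fan-out in the quoted {{Expunge}} code — apply the operation to every child filesystem when the top-level filesystem is a view over mounts, otherwise apply it directly — can be sketched without any Hadoop dependencies (all names here are hypothetical stand-ins, not Hadoop's API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for FileSystem/Trash, illustrating the dispatch in
// Expunge#processArguments: a view filesystem exposes its mounted children,
// and the operation fans out over them.
public class ExpungeSketch {
    interface Fs {
        List<Fs> children(); // null for a plain (non-view) filesystem
        String name();
    }

    static Fs leaf(final String name) {
        return new Fs() {
            public List<Fs> children() { return null; }
            public String name() { return name; }
        };
    }

    static Fs view(final String name, final List<Fs> kids) {
        return new Fs() {
            public List<Fs> children() { return kids; }
            public String name() { return name; }
        };
    }

    // Returns the names of the filesystems that would be expunged.
    static List<String> expunge(Fs fs) {
        List<String> expunged = new ArrayList<>();
        List<Fs> kids = fs.children();
        if (kids != null) {
            for (Fs child : kids) {
                expunged.add(child.name()); // expunge each mounted filesystem
            }
        } else {
            expunged.add(fs.name()); // plain filesystem: expunge it directly
        }
        return expunged;
    }

    public static void main(String[] args) {
        Fs viewFs = view("viewfs", List.of(leaf("nn0"), leaf("nn1")));
        System.out.println(expunge(viewFs)); // prints: [nn0, nn1]
    }
}
```

This matches the behavior shown in the shell transcript: {{hdfs dfs -expunge}} against a viewfs root checkpoints the trash of every mounted namenode, not just one.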
[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9
[ https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576130#comment-15576130 ] Robert Kanter commented on HADOOP-10075: {quote}Is there a reason you changed httpServer.addContext(uiWebAppContext, true); to httpServer.addHandlerAtFront(uiWebAppContext); in ApplicationHistoryServer?{quote} The order of the handlers is important when Jetty is trying to figure out which Servlet to use. To get the AHS to work correctly there, I had to make sure that {{uiWebAppContext}} was first in the list, even though we add it later. Thanks for taking a look Ravi, I know it's a discouraging patch to look at :) I'm currently working on making the charset changes a constant (I had a good idea to make this a little easier), and I'll post an updated patch. I'll also post it on RB to make it easier to comment on; I had trouble finding some of the things Daniel was talking about in his feedback. > Update jetty dependency to version 9 > > > Key: HADOOP-10075 > URL: https://issues.apache.org/jira/browse/HADOOP-10075 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.2.0, 2.6.0 >Reporter: Robert Rati >Assignee: Robert Kanter > Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, > HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, > HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.patch > > > Jetty6 is no longer maintained. Update the dependency to jetty9. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13720) Add more info to "token ... is expired" message
[ https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HADOOP-13720: --- Labels: supportability (was: ) > Add more info to "token ... is expired" message > --- > > Key: HADOOP-13720 > URL: https://issues.apache.org/jira/browse/HADOOP-13720 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang >Priority: Trivial > Labels: supportability > Attachments: HADOOP-13720.001.patch > > > Currently AbstractDelegationTokenSecretManager$checkToken does > {code} > protected DelegationTokenInformation checkToken(TokenIdent identifier) > throws InvalidToken { > assert Thread.holdsLock(this); > DelegationTokenInformation info = getTokenInfo(identifier); > if (info == null) { > throw new InvalidToken("token (" + identifier.toString() > + ") can't be found in cache"); > } > if (info.getRenewDate() < Time.now()) { > throw new InvalidToken("token (" + identifier.toString() + ") is > expired"); > } > return info; > } > {code} > When a token is expired, we throw the above exception without printing out > the {{info.getRenewDate()}} in the message. If we print it out, we could know > for how long the token has not been renewed. This will help us investigate > certain issues. > Create this jira as a request to add that part. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13720) Add more info to "token ... is expired" message
[ https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HADOOP-13720: --- Priority: Trivial (was: Major) Issue Type: Improvement (was: Bug) > Add more info to "token ... is expired" message > --- > > Key: HADOOP-13720 > URL: https://issues.apache.org/jira/browse/HADOOP-13720 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang >Priority: Trivial > Attachments: HADOOP-13720.001.patch > > > Currently AbstractDelegationTokenSecretManager$checkToken does > {code} > protected DelegationTokenInformation checkToken(TokenIdent identifier) > throws InvalidToken { > assert Thread.holdsLock(this); > DelegationTokenInformation info = getTokenInfo(identifier); > if (info == null) { > throw new InvalidToken("token (" + identifier.toString() > + ") can't be found in cache"); > } > if (info.getRenewDate() < Time.now()) { > throw new InvalidToken("token (" + identifier.toString() + ") is > expired"); > } > return info; > } > {code} > When a token is expired, we throw the above exception without printing out > the {{info.getRenewDate()}} in the message. If we print it out, we could know > for how long the token has not been renewed. This will help us investigate > certain issues. > Create this jira as a request to add that part. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13720) Add more info to "token ... is expired" message
[ https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576110#comment-15576110 ] Yongjun Zhang commented on HADOOP-13720: Hi [~ste...@apache.org], uploaded a quick patch, would you please take a look? Thanks. > Add more info to "token ... is expired" message > --- > > Key: HADOOP-13720 > URL: https://issues.apache.org/jira/browse/HADOOP-13720 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HADOOP-13720.001.patch > > > Currently AbstractDelegationTokenSecretManager$checkToken does > {code} > protected DelegationTokenInformation checkToken(TokenIdent identifier) > throws InvalidToken { > assert Thread.holdsLock(this); > DelegationTokenInformation info = getTokenInfo(identifier); > if (info == null) { > throw new InvalidToken("token (" + identifier.toString() > + ") can't be found in cache"); > } > if (info.getRenewDate() < Time.now()) { > throw new InvalidToken("token (" + identifier.toString() + ") is > expired"); > } > return info; > } > {code} > When a token is expired, we throw the above exception without printing out > the {{info.getRenewDate()}} in the message. If we print it out, we could know > for how long the token has not been renewed. This will help us investigate > certain issues. > Create this jira as a request to add that part. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13720) Add more info to "token ... is expired" message
[ https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HADOOP-13720: --- Status: Patch Available (was: Open) > Add more info to "token ... is expired" message > --- > > Key: HADOOP-13720 > URL: https://issues.apache.org/jira/browse/HADOOP-13720 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HADOOP-13720.001.patch > > > Currently AbstractDelegationTokenSecretManager$checkToken does > {code} > protected DelegationTokenInformation checkToken(TokenIdent identifier) > throws InvalidToken { > assert Thread.holdsLock(this); > DelegationTokenInformation info = getTokenInfo(identifier); > if (info == null) { > throw new InvalidToken("token (" + identifier.toString() > + ") can't be found in cache"); > } > if (info.getRenewDate() < Time.now()) { > throw new InvalidToken("token (" + identifier.toString() + ") is > expired"); > } > return info; > } > {code} > When a token is expired, we throw the above exception without printing out > the {{info.getRenewDate()}} in the message. If we print it out, we could know > for how long the token has not been renewed. This will help us investigate > certain issues. > Create this jira as a request to add that part. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13720) Add more info to "token ... is expired" message
[ https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang updated HADOOP-13720: --- Attachment: HADOOP-13720.001.patch > Add more info to "token ... is expired" message > --- > > Key: HADOOP-13720 > URL: https://issues.apache.org/jira/browse/HADOOP-13720 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > Attachments: HADOOP-13720.001.patch > > > Currently AbstractDelegationTokenSecretManager$checkToken does > {code} > protected DelegationTokenInformation checkToken(TokenIdent identifier) > throws InvalidToken { > assert Thread.holdsLock(this); > DelegationTokenInformation info = getTokenInfo(identifier); > if (info == null) { > throw new InvalidToken("token (" + identifier.toString() > + ") can't be found in cache"); > } > if (info.getRenewDate() < Time.now()) { > throw new InvalidToken("token (" + identifier.toString() + ") is expired"); > } > return info; > } > {code} > When a token is expired, we throw the above exception without printing out > the {{info.getRenewDate()}} in the message. If we print it out, we could know > for how long the token has not been renewed. This will help us investigate > certain issues. > Create this jira as a request to add that part. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9
[ https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576071#comment-15576071 ] Ravi Prakash commented on HADOOP-10075: --- Thanks a lot for the massive amount of effort, Robert and others. The patch is deceptively big. A lot of the bulk is added because of renaming classes or adding charset or just the logging method. Thanks also for writing the maven plugin, Robert. Is there a reason you changed {{httpServer.addContext(uiWebAppContext, true);}} to {{httpServer.addHandlerAtFront(uiWebAppContext);}} in ApplicationHistoryServer? The main changes are in HttpServer2 and I am going through them right now. Will hopefully get done soon. > Update jetty dependency to version 9 > > > Key: HADOOP-10075 > URL: https://issues.apache.org/jira/browse/HADOOP-10075 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.2.0, 2.6.0 >Reporter: Robert Rati >Assignee: Robert Kanter > Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, > HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, > HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.patch > > > Jetty6 is no longer maintained. Update the dependency to jetty9. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576057#comment-15576057 ] Manoj Govindassamy commented on HADOOP-13055: - * {{TestDataNodeRollingUpgrade.testWithLayoutChangeAndRollback}} Unit test failure is not related to the patch. * Will fix the javadoc issue along with other review comments. > Implement linkMergeSlash for ViewFs > --- > > Key: HADOOP-13055 > URL: https://issues.apache.org/jira/browse/HADOOP-13055 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, viewfs >Reporter: Zhe Zhang >Assignee: Manoj Govindassamy > Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, > HADOOP-13055.02.patch, HADOOP-13055.03.patch > > > In a multi-cluster environment it is sometimes useful to operate on the root > / slash directory of an HDFS cluster. E.g., list all top level directories. > Quoting the comment in {{ViewFs}}: > {code} > * A special case of the merge mount is where mount table's root is merged > * with the root (slash) of another file system: > * > * fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/ > * > * In this cases the root of the mount table is merged with the root of > *hdfs://nn99/ > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-10075) Update jetty dependency to version 9
[ https://issues.apache.org/jira/browse/HADOOP-10075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15576049#comment-15576049 ] Karthik Kambatla commented on HADOOP-10075: --- [~rkanter] - mind throwing this on RB or Github PR for easier review? > Update jetty dependency to version 9 > > > Key: HADOOP-10075 > URL: https://issues.apache.org/jira/browse/HADOOP-10075 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.2.0, 2.6.0 >Reporter: Robert Rati >Assignee: Robert Kanter > Attachments: HADOOP-10075-002-wip.patch, HADOOP-10075.003.patch, > HADOOP-10075.004.patch, HADOOP-10075.005.patch, HADOOP-10075.006.patch, > HADOOP-10075.007.patch, HADOOP-10075.008.patch, HADOOP-10075.patch > > > Jetty6 is no longer maintained. Update the dependency to jetty9. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13706) Update jackson from 1.9.13 to 2.x in hadoop-common-project
[ https://issues.apache.org/jira/browse/HADOOP-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575857#comment-15575857 ] Hadoop QA commented on HADOOP-13706: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 22s{color} | {color:red} hadoop-common-project/hadoop-kms in trunk has 2 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} hadoop-common-project: The patch generated 0 new + 548 unchanged - 9 fixed = 548 total (was 557) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 40s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 13s{color} | {color:green} hadoop-kms in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 47m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13706 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833401/HADOOP-13706.02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux 9cad0613cab6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / dbe663d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HADOOP-Build/10791/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-kms-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10791/testReport/ | | modules | C:
[jira] [Commented] (HADOOP-13720) Add more info to "token ... is expired" message
[ https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575848#comment-15575848 ] Yongjun Zhang commented on HADOOP-13720: Nice suggestion [~ste...@apache.org], thanks. > Add more info to "token ... is expired" message > --- > > Key: HADOOP-13720 > URL: https://issues.apache.org/jira/browse/HADOOP-13720 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > > Currently AbstractDelegationTokenSecretManager$checkToken does > {code} > protected DelegationTokenInformation checkToken(TokenIdent identifier) > throws InvalidToken { > assert Thread.holdsLock(this); > DelegationTokenInformation info = getTokenInfo(identifier); > if (info == null) { > throw new InvalidToken("token (" + identifier.toString() > + ") can't be found in cache"); > } > if (info.getRenewDate() < Time.now()) { > throw new InvalidToken("token (" + identifier.toString() + ") is expired"); > } > return info; > } > {code} > When a token is expired, we throw the above exception without printing out > the {{info.getRenewDate()}} in the message. If we print it out, we could know > for how long the token has not been renewed. This will help us investigate > certain issues. > Create this jira as a request to add that part. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-13720) Add more info to "token ... is expired" message
[ https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yongjun Zhang reassigned HADOOP-13720: -- Assignee: Yongjun Zhang > Add more info to "token ... is expired" message > --- > > Key: HADOOP-13720 > URL: https://issues.apache.org/jira/browse/HADOOP-13720 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Reporter: Yongjun Zhang >Assignee: Yongjun Zhang > > Currently AbstractDelegationTokenSecretManager$checkToken does > {code} > protected DelegationTokenInformation checkToken(TokenIdent identifier) > throws InvalidToken { > assert Thread.holdsLock(this); > DelegationTokenInformation info = getTokenInfo(identifier); > if (info == null) { > throw new InvalidToken("token (" + identifier.toString() > + ") can't be found in cache"); > } > if (info.getRenewDate() < Time.now()) { > throw new InvalidToken("token (" + identifier.toString() + ") is expired"); > } > return info; > } > {code} > When a token is expired, we throw the above exception without printing out > the {{info.getRenewDate()}} in the message. If we print it out, we could know > for how long the token has not been renewed. This will help us investigate > certain issues. > Create this jira as a request to add that part. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-8928) Add ability to reset topologies on master nodes
[ https://issues.apache.org/jira/browse/HADOOP-8928?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575847#comment-15575847 ] Steve Loughran commented on HADOOP-8928: I've not done anything with topologies for a long time; I had some in patch ready status but nobody could be bothered to review them, so I gave up > Add ability to reset topologies on master nodes > --- > > Key: HADOOP-8928 > URL: https://issues.apache.org/jira/browse/HADOOP-8928 > Project: Hadoop Common > Issue Type: Improvement > Components: net >Affects Versions: 2.0.2-alpha, 3.0.0-alpha1 >Reporter: Shinichi Yamashita > Labels: BB2015-05-TBR > Attachments: HADOOP-8928.patch, HADOOP-8928.txt > > > For a topology decision of DataNode and TaskTracker, ScriptBasedMapping > (probably TableMapping) confirms HashMap first. > To decide topology of DataNode and TaskTracker again, it is necessary to > restart NameNode and JobTracker. > Therefore, it is necessary to change (or clear) HashMap function without > restarting NameNode and JobTracker. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-7352) FileSystem#listStatus should throw IOE upon access error
[ https://issues.apache.org/jira/browse/HADOOP-7352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575842#comment-15575842 ] Steve Loughran commented on HADOOP-7352: LGTM. FWIW the code coming in HADOOP-13716 is designed to make those tests for raised exceptions in specific parts of the code way easier to write in Java 8 {code} IOException ioe = intercept(IOException.class, () -> { return PathData.expandAsGlob("foo/*", conf); }); {code} > FileSystem#listStatus should throw IOE upon access error > > > Key: HADOOP-7352 > URL: https://issues.apache.org/jira/browse/HADOOP-7352 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.6.0 >Reporter: Matt Foley >Assignee: John Zhuge > Attachments: HADOOP-7352.001.patch, HADOOP-7352.002.patch, > HADOOP-7352.003.patch, HADOOP-7352.004.patch, HADOOP-7352.005.patch > > > In HADOOP-6201 and HDFS-538 it was agreed that FileSystem::listStatus should > throw FileNotFoundException instead of returning null, when the target > directory did not exist. > However, in LocalFileSystem implementation today, FileSystem::listStatus > still may return null, when the target directory exists but does not grant > read permission. This causes NPE in many callers, for all the reasons cited > in HADOOP-6201 and HDFS-538. See HADOOP-7327 and its linked issues for > examples. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
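For readers unfamiliar with the helper quoted above, a minimal sketch of what an {{intercept}}-style utility might look like follows. The shape is assumed for illustration only; the real implementation is the LambdaTestUtils work tracked in HADOOP-13716 and may differ in signature and naming.

```java
import java.util.concurrent.Callable;

// Illustrative sketch of an intercept-style assertion helper in the spirit
// of the snippet quoted above; not the actual LambdaTestUtils code.
public class InterceptSketch {
    public static <E extends Exception> E intercept(Class<E> clazz,
                                                    Callable<?> body) throws Exception {
        try {
            body.call();
        } catch (Exception e) {
            if (clazz.isInstance(e)) {
                return clazz.cast(e);   // expected failure: hand it back for inspection
            }
            throw e;                    // wrong exception type: propagate
        }
        throw new AssertionError("expected " + clazz.getName()
            + " but no exception was thrown");
    }

    public static void main(String[] args) throws Exception {
        IllegalStateException caught = intercept(IllegalStateException.class,
            () -> { throw new IllegalStateException("boom"); });
        System.out.println(caught.getMessage()); // prints "boom"
    }
}
```

The returned exception lets a test go on to assert on its message or cause, which is what makes this style terser than a try/catch/fail block.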
[jira] [Commented] (HADOOP-13720) Add more info to "token ... is expired" message
[ https://issues.apache.org/jira/browse/HADOOP-13720?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575756#comment-15575756 ] Steve Loughran commented on HADOOP-13720: - I'd add the current time too. Why? Helps identify one of those situations where the VM's clock is totally broken. > Add more info to "token ... is expired" message > --- > > Key: HADOOP-13720 > URL: https://issues.apache.org/jira/browse/HADOOP-13720 > Project: Hadoop Common > Issue Type: Bug > Components: common, security >Reporter: Yongjun Zhang > > Currently AbstractDelegationTokenSecretManager$checkToken does > {code} > protected DelegationTokenInformation checkToken(TokenIdent identifier) > throws InvalidToken { > assert Thread.holdsLock(this); > DelegationTokenInformation info = getTokenInfo(identifier); > if (info == null) { > throw new InvalidToken("token (" + identifier.toString() > + ") can't be found in cache"); > } > if (info.getRenewDate() < Time.now()) { > throw new InvalidToken("token (" + identifier.toString() + ") is expired"); > } > return info; > } > {code} > When a token is expired, we throw the above exception without printing out > the {{info.getRenewDate()}} in the message. If we print it out, we could know > for how long the token has not been renewed. This will help us investigate > certain issues. > Create this jira as a request to add that part. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
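Taking the comment above together with the original request, the enriched message might look roughly like this. This is a hypothetical sketch of the wording; the exact message in the committed HADOOP-13720 patch may differ.

```java
// Hypothetical sketch of the enriched "token is expired" message discussed
// above (renew date plus current time); not the actual committed wording.
public class ExpiredTokenMessage {
    static String expiredMessage(String identifier, long renewDate, long now) {
        return "token (" + identifier + ") is expired, current time: " + now
            + ", expected renewal time: " + renewDate;
    }

    public static void main(String[] args) {
        // A token whose renewal was due 60s before "now".
        System.out.println(expiredMessage("owner=alice", 1000L, 61000L));
    }
}
```

With both timestamps in the message, the reader can see at a glance how long ago renewal was due, and a wildly implausible "current time" immediately points at a broken VM clock.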
[jira] [Commented] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575734#comment-15575734 ] Hadoop QA commented on HADOOP-13703: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 24 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 53s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 31s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 3s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 7s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 30s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 31s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 25s{color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} branch-2 passed with JDK v1.8.0_101 {color} | | 
{color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} branch-2 passed with JDK v1.7.0_111 {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 17s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 58s{color} | {color:green} the patch passed with JDK v1.8.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s{color} | {color:green} the patch passed with JDK v1.7.0_111 {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 36s{color} | {color:red} root-jdk1.7.0_111 with JDK v1.7.0_111 generated 1 new + 949 unchanged - 1 fixed = 950 total (was 950) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 30s{color} | {color:orange} root: The patch generated 23 new + 46 unchanged - 4 fixed = 69 total (was 50) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 63 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 55s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 18s{color} | {color:red} hadoop-aws in the patch failed with JDK v1.8.0_101. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} hadoop-tools_hadoop-aws-jdk1.7.0_111 with JDK v1.7.0_111 generated 3 new + 0 unchanged - 0 fixed = 3 total (was 0) {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 24s{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_111. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 29s{color} |
[jira] [Updated] (HADOOP-13706) Update jackson from 1.9.13 to 2.x in hadoop-common-project
[ https://issues.apache.org/jira/browse/HADOOP-13706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-13706: --- Attachment: HADOOP-13706.02.patch Fixed javac warning. The findbugs warning and test failure are not related to the patch. > Update jackson from 1.9.13 to 2.x in hadoop-common-project > -- > > Key: HADOOP-13706 > URL: https://issues.apache.org/jira/browse/HADOOP-13706 > Project: Hadoop Common > Issue Type: Sub-task > Components: build >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka > Attachments: HADOOP-13706.01.patch, HADOOP-13706.02.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13686) Adding additional unit test for Trash (I)
[ https://issues.apache.org/jira/browse/HADOOP-13686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575689#comment-15575689 ] Xiaoyu Yao commented on HADOOP-13686: - Thanks [~cheersyang] for the branch-2.8 patch. I've tested it and committed it to branch-2.8. > Adding additional unit test for Trash (I) > - > > Key: HADOOP-13686 > URL: https://issues.apache.org/jira/browse/HADOOP-13686 > Project: Hadoop Common > Issue Type: Test >Reporter: Xiaoyu Yao >Assignee: Weiwei Yang > Fix For: 2.8.0 > > Attachments: HADOOP-13686-branch-2.8.01.patch, HADOOP-13686.01.patch, > HADOOP-13686.02.patch, HADOOP-13686.03.patch, HADOOP-13686.04.patch, > HADOOP-13686.05.patch, HADOOP-13686.06.patch, HADOOP-13686.07.patch > > > This ticket is opened to track adding the following unit tests in > hadoop-common. > #test users can delete their own trash directory > #test users can delete an empty directory and the directory is moved to trash > #test fs.trash.interval with invalid values such as 0 or negative -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes
[ https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575663#comment-15575663 ] Wei-Chiu Chuang commented on HADOOP-11798: -- I think the latest patch generally looks good to me. One nit is the lack of documentation on switching between the native and Java-based codec, but that can go into a new jira. > Native raw erasure coder in XOR codes > - > > Key: HADOOP-11798 > URL: https://issues.apache.org/jira/browse/HADOOP-11798 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Kai Zheng >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Fix For: HDFS-7285 > > Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, > HADOOP-11798-v3.patch, HADOOP-11798-v4.patch > > > Raw XOR coder is utilized in the Reed-Solomon erasure coder as an optimization to > recover only one erased block, which is the most common case. It can also be > used in the HitchHiker coder. Therefore a native implementation of it would be > worthwhile for performance gain. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
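The single-erasure recovery property described above can be illustrated in plain Java, independent of the Hadoop coder API (class and method names here are purely illustrative): the parity block is the XOR of all data blocks, so any one missing block equals the XOR of the parity with the surviving blocks.

```java
import java.util.Arrays;

// Toy illustration of XOR erasure recovery; not the Hadoop RawErasureCoder API.
public class XorRecoveryDemo {
    // XOR the given equal-length blocks together byte by byte.
    static byte[] xor(byte[]... blocks) {
        byte[] out = new byte[blocks[0].length];
        for (byte[] block : blocks) {
            for (int i = 0; i < out.length; i++) {
                out[i] ^= block[i];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] d0 = {1, 2, 3};
        byte[] d1 = {4, 5, 6};
        byte[] parity = xor(d0, d1);         // encode: parity = d0 ^ d1
        byte[] recovered = xor(parity, d1);  // decode: lost d0 = parity ^ d1
        System.out.println(Arrays.equals(recovered, d0)); // prints "true"
    }
}
```

The decode path is a single pass of XORs over the surviving blocks, which is why a native (SIMD-friendly) implementation of exactly this loop pays off for the common one-block-lost case.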
[jira] [Commented] (HADOOP-13660) Upgrade commons-configuration version
[ https://issues.apache.org/jira/browse/HADOOP-13660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575563#comment-15575563 ] Sean Mackrory commented on HADOOP-13660: Thanks [~jojochuang]. I'm taking a look at what it would take to move to 2.x. They've done the release in a way that allows 1.x and 2.x to coexist nicely, but I agree we might as well update now if we reasonably can. > Upgrade commons-configuration version > - > > Key: HADOOP-13660 > URL: https://issues.apache.org/jira/browse/HADOOP-13660 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-alpha2 >Reporter: Sean Mackrory >Assignee: Sean Mackrory > Attachments: HADOOP-13660.001.patch > > > We're currently pulling in version 1.6 - I think we should upgrade to the > latest 1.10. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup
[ https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13716: Attachment: HADOOP-13716-006.patch Patch 006 * switch to ProportionalIncreaseRetryInterval * suppress javac warning * move some more java7 test cases above the "java8 below" section in the test suite > Add LambdaTestUtils class for tests; fix eventual consistency problem in > contract test setup > > > Key: HADOOP-13716 > URL: https://issues.apache.org/jira/browse/HADOOP-13716 > Project: Hadoop Common > Issue Type: New Feature > Components: test >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13716-001.patch, HADOOP-13716-002.patch, > HADOOP-13716-003.patch, HADOOP-13716-005.patch, HADOOP-13716-006.patch, > HADOOP-13716-branch-2-004.patch > > > To make our tests robust against timing problems and eventual consistent > stores, we need to do more spin & wait for state. > We have some code in {{GenericTestUtils.waitFor}} to await a condition being > met, but the predicate it calls doesn't throw exceptions, there's no way for > a probe to throw an exception, and all you get is the eventual "timed out" > message. > We can do better, and in closure-ready languages (scala & scalatest, groovy > and some slider code) we've examples to follow. Some of that work has been > reimplemented slightly in {{S3ATestUtils.eventually}} > I propose adding a class in the test tree, {{Eventually}} to be a > successor/replacement for these. > # has an eventually/waitfor operation taking a predicate that throws an > exception > # has an "evaluate" exception which tries to evaluate an answer until the > operation stops raising an exception. (again, from scalatest) > # plugin backoff strategies (from Scalatest; lets you do exponential as well > as linear) > # option of adding a special handler to generate the failure exception (e.g. 
> run more detailed diagnostics for the exception text, etc). > # be Java 8 lambda expression friendly > # be testable and tested itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
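The retry-until-success behaviour proposed in the list above can be sketched in a few lines. This is an illustrative toy under assumed names, not the actual LambdaTestUtils implementation, which also adds pluggable backoff strategies and failure handlers:

```java
import java.util.concurrent.Callable;

// Toy sketch of the "eventually" idea: re-run a probe that may throw until
// it succeeds, rethrowing the last failure once the deadline passes.
public class EventuallySketch {
    public static <T> T eventually(long timeoutMillis, long intervalMillis,
                                   Callable<T> probe) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        Exception last = null;
        do {
            try {
                return probe.call();
            } catch (Exception e) {
                last = e;              // remember the most recent failure
                Thread.sleep(intervalMillis);
            }
        } while (System.currentTimeMillis() < deadline);
        throw last;                    // surface the real cause, not just "timed out"
    }

    public static void main(String[] args) throws Exception {
        int[] attempts = {0};
        int value = eventually(2000, 10, () -> {
            if (++attempts[0] < 3) {
                throw new IllegalStateException("not ready yet");
            }
            return 42;
        });
        System.out.println(value); // prints 42 after two failed probes
    }
}
```

Rethrowing the last probe exception is the key improvement over a plain "timed out" message: the test failure carries the actual assertion or I/O error that kept the condition from being met.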
[jira] [Updated] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup
[ https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13716: Status: Open (was: Patch Available) > Add LambdaTestUtils class for tests; fix eventual consistency problem in > contract test setup > > > Key: HADOOP-13716 > URL: https://issues.apache.org/jira/browse/HADOOP-13716 > Project: Hadoop Common > Issue Type: New Feature > Components: test >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13716-001.patch, HADOOP-13716-002.patch, > HADOOP-13716-003.patch, HADOOP-13716-005.patch, > HADOOP-13716-branch-2-004.patch > > > To make our tests robust against timing problems and eventual consistent > stores, we need to do more spin & wait for state. > We have some code in {{GenericTestUtils.waitFor}} to await a condition being > met, but the predicate it calls doesn't throw exceptions, there's no way for > a probe to throw an exception, and all you get is the eventual "timed out" > message. > We can do better, and in closure-ready languages (scala & scalatest, groovy > and some slider code) we've examples to follow. Some of that work has been > reimplemented slightly in {{S3ATestUtils.eventually}} > I propose adding a class in the test tree, {{Eventually}} to be a > successor/replacement for these. > # has an eventually/waitfor operation taking a predicate that throws an > exception > # has an "evaluate" exception which tries to evaluate an answer until the > operation stops raising an exception. (again, from scalatest) > # plugin backoff strategies (from Scalatest; lets you do exponential as well > as linear) > # option of adding a special handler to generate the failure exception (e.g. > run more detailed diagnostics for the exception text, etc). > # be Java 8 lambda expression friendly > # be testable and tested itself. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
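The retry-until-success semantics proposed above can be sketched in plain Java 8. This is an illustrative sketch only, not the actual LambdaTestUtils API: the names `EventuallySketch` and `eventually` are invented here, and the real class proposes pluggable backoff strategies rather than the fixed sleep used below. The key point it demonstrates is that the probe may throw, and the most recent failure, not a generic "timed out", is what surfaces on timeout.

```java
import java.util.concurrent.Callable;

/** Minimal sketch of the proposed "eventually" retry semantics. */
public class EventuallySketch {

    /**
     * Repeatedly evaluate the probe until it returns a value or the
     * timeout expires; rethrow the last probe failure on timeout.
     */
    public static <T> T eventually(long timeoutMillis, long intervalMillis,
                                   Callable<T> probe) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        Exception last = null;
        while (System.currentTimeMillis() <= deadline) {
            try {
                return probe.call();          // success: return the answer
            } catch (Exception e) {
                last = e;                     // remember the most recent failure
            }
            Thread.sleep(intervalMillis);     // fixed (linear) backoff
        }
        throw last;  // surface the real failure, not just "timed out"
    }
}
```

A test against an eventually consistent store would then probe for the expected state inside the closure and let intermediate inconsistencies raise exceptions.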
[jira] [Commented] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup
[ https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575486#comment-15575486 ] Steve Loughran commented on HADOOP-13716: - regarding the checkstyle indentations, it's saying things I don't agree with. Specifically: {code} intercept(FailFastException.class, () -> await(TIMEOUT, () -> { throw new FailFastException("ffe"); // HERE: should be 12 not 14 }, // HERE: should be 10 not 12 retry, (timeout, ex) -> ex)); {code} so: WONTFIX on those > Add LambdaTestUtils class for tests; fix eventual consistency problem in > contract test setup > > > Key: HADOOP-13716 > URL: https://issues.apache.org/jira/browse/HADOOP-13716 > Project: Hadoop Common > Issue Type: New Feature > Components: test >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13716-001.patch, HADOOP-13716-002.patch, > HADOOP-13716-003.patch, HADOOP-13716-005.patch, > HADOOP-13716-branch-2-004.patch > > > To make our tests robust against timing problems and eventual consistent > stores, we need to do more spin & wait for state. > We have some code in {{GenericTestUtils.waitFor}} to await a condition being > met, but the predicate it calls doesn't throw exceptions, there's no way for > a probe to throw an exception, and all you get is the eventual "timed out" > message. > We can do better, and in closure-ready languages (scala & scalatest, groovy > and some slider code) we've examples to follow. Some of that work has been > reimplemented slightly in {{S3ATestUtils.eventually}} > I propose adding a class in the test tree, {{Eventually}} to be a > successor/replacement for these. > # has an eventually/waitfor operation taking a predicate that throws an > exception > # has an "evaluate" exception which tries to evaluate an answer until the > operation stops raising an exception. 
(again, from scalatest) > # plugin backoff strategies (from Scalatest; lets you do exponential as well > as linear) > # option of adding a special handler to generate the failure exception (e.g. > run more detailed diagnostics for the exception text, etc). > # be Java 8 lambda expression friendly > # be testable and tested itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
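The `intercept` call shown in the checkstyle excerpt pairs an expected exception class with a closure. A minimal standalone version of that idiom, with names invented here (this is not the actual LambdaTestUtils signature, which has further variants), might look like:

```java
import java.util.concurrent.Callable;

/** Minimal sketch of the intercept(class, closure) test idiom. */
public class InterceptSketch {

    /**
     * Evaluate the closure; return the raised exception if it is of the
     * expected class, fail if nothing was thrown, and surface anything
     * else as a test failure with the unexpected exception as its cause.
     */
    public static <E extends Throwable> E intercept(
            Class<E> clazz, Callable<?> closure) throws Exception {
        Object result;
        try {
            result = closure.call();
        } catch (Throwable t) {
            if (clazz.isInstance(t)) {
                return clazz.cast(t);  // the expected failure, returned for inspection
            }
            throw new AssertionError("Unexpected exception: " + t, t);
        }
        throw new AssertionError(
            "Expected " + clazz.getSimpleName() + " but got result: " + result);
    }
}
```

Returning the caught exception lets the caller make further assertions on its message or type, as the nested `intercept(FailFastException.class, ...)` in the snippet above does.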
[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13703: Attachment: HADOOP-13560-branch-2-015.patch patch 015; addresses Thomas's and Pieter's comments ... primarily documentation. * log an error-level warning if > 10K partitions are uploaded * remove deprecation warnings about an option only ever relevant in the fast output stream; as that was always tagged experimental I think we can justify this > S3ABlockOutputStream to pass Yetus & Jenkins > > > Key: HADOOP-13703 > URL: https://issues.apache.org/jira/browse/HADOOP-13703 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13560-012.patch, HADOOP-13560-branch-2-010.patch, > HADOOP-13560-branch-2-011.patch, HADOOP-13560-branch-2-013.patch, > HADOOP-13560-branch-2-014.patch, HADOOP-13560-branch-2-015.patch > > > The HADOOP-13560 patches and PR has got yetus confused. This patch is purely > to do test runs. > h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull > Request. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
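The "warning if > 10K partitions" item in patch 015 maps to S3's hard limit of 10,000 parts per multipart upload. A toy predicate for that check, with invented names (the actual patch logs through SLF4J inside the output stream rather than returning a string), could be:

```java
/** Toy check for the S3 multipart part-count cap. */
public class PartLimitCheck {

    /** S3 rejects multipart uploads with more than 10,000 parts. */
    static final int S3_PART_LIMIT = 10000;

    /**
     * Returns a warning message once the part count reaches the cap,
     * or null while the upload is still within limits.
     */
    static String checkPartCount(long partsUploaded) {
        if (partsUploaded >= S3_PART_LIMIT) {
            return "Upload has used " + partsUploaded + " parts of the "
                + S3_PART_LIMIT + "-part S3 limit; consider raising "
                + "fs.s3a.multipart.size to write fewer, larger parts.";
        }
        return null;   // no warning needed yet
    }
}
```

Because the part size is fixed by `fs.s3a.multipart.size`, hitting the cap silently truncates the maximum file size, which is why an error-level log entry matters here.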
[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13703: Status: Patch Available (was: Open) > S3ABlockOutputStream to pass Yetus & Jenkins > > > Key: HADOOP-13703 > URL: https://issues.apache.org/jira/browse/HADOOP-13703 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13560-012.patch, HADOOP-13560-branch-2-010.patch, > HADOOP-13560-branch-2-011.patch, HADOOP-13560-branch-2-013.patch, > HADOOP-13560-branch-2-014.patch, HADOOP-13560-branch-2-015.patch > > > The HADOOP-13560 patches and PR has got yetus confused. This patch is purely > to do test runs. > h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull > Request. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13703) S3ABlockOutputStream to pass Yetus & Jenkins
[ https://issues.apache.org/jira/browse/HADOOP-13703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13703: Status: Open (was: Patch Available) > S3ABlockOutputStream to pass Yetus & Jenkins > > > Key: HADOOP-13703 > URL: https://issues.apache.org/jira/browse/HADOOP-13703 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.3 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13560-012.patch, HADOOP-13560-branch-2-010.patch, > HADOOP-13560-branch-2-011.patch, HADOOP-13560-branch-2-013.patch, > HADOOP-13560-branch-2-014.patch > > > The HADOOP-13560 patches and PR has got yetus confused. This patch is purely > to do test runs. > h1. All discourse must continue to take place in HADOOP-13560 and/or the Pull > Request. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575361#comment-15575361 ] ASF GitHub Bot commented on HADOOP-13560: - Github user steveloughran commented on a diff in the pull request: https://github.com/apache/hadoop/pull/130#discussion_r83423063 --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java --- @@ -118,21 +126,37 @@ private long partSize; private boolean enableMultiObjectsDelete; private TransferManager transfers; - private ExecutorService threadPoolExecutor; + private ListeningExecutorService threadPoolExecutor; private long multiPartThreshold; public static final Logger LOG = LoggerFactory.getLogger(S3AFileSystem.class); + private static final Logger PROGRESS = + LoggerFactory.getLogger("org.apache.hadoop.fs.s3a.S3AFileSystem.Progress"); + private LocalDirAllocator directoryAllocator; private CannedAccessControlList cannedACL; private String serverSideEncryptionAlgorithm; private S3AInstrumentation instrumentation; private S3AStorageStatistics storageStatistics; private long readAhead; private S3AInputPolicy inputPolicy; - private static final AtomicBoolean warnedOfCoreThreadDeprecation = - new AtomicBoolean(false); private final AtomicBoolean closed = new AtomicBoolean(false); // The maximum number of entries that can be deleted in any call to s3 private static final int MAX_ENTRIES_TO_DELETE = 1000; + private boolean blockUploadEnabled; + private String blockOutputBuffer; + private S3ADataBlocks.BlockFactory blockFactory; + private int blockOutputActiveBlocks; + + /* + * Register Deprecated options. + */ + static { +Configuration.addDeprecations(new Configuration.DeprecationDelta[]{ +new Configuration.DeprecationDelta("fs.s3a.threads.core", +null, --- End diff -- I've just cut that section entirely. That's harsh, but, well, it the fast output stream was always marked as experimental ... 
we've learned from the experiment and are now changing behaviour here, which is something we can look at covering in the release notes. I'll add that to the JIRA. > S3ABlockOutputStream to support huge (many GB) file writes > -- > > Key: HADOOP-13560 > URL: https://issues.apache.org/jira/browse/HADOOP-13560 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13560-branch-2-001.patch, > HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, > HADOOP-13560-branch-2-004.patch > > > An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights > that metadata isn't copied on large copies. > 1. Add a test to do that large copy/rename and verify that the copy really > works > 2. Verify that metadata makes it over. > Verifying large file rename is important on its own, as it is needed for very > large commit operations for committers using rename -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13560: Hadoop Flags: Incompatible change Release Note: This mechanism replaces the (experimental) fast output stream of Hadoop 2.7.x, combining better scalability options with instrumentation. Consult the S3A documentation to see the extra configuration operations. > S3ABlockOutputStream to support huge (many GB) file writes > -- > > Key: HADOOP-13560 > URL: https://issues.apache.org/jira/browse/HADOOP-13560 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13560-branch-2-001.patch, > HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, > HADOOP-13560-branch-2-004.patch > > > An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights > that metadata isn't copied on large copies. > 1. Add a test to do that large copy/rname and verify that the copy really > works > 2. Verify that metadata makes it over. > Verifying large file rename is important on its own, as it is needed for very > large commit operations for committers using rename -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575344#comment-15575344 ] ASF GitHub Bot commented on HADOOP-13560: - Github user thodemoor commented on a diff in the pull request: https://github.com/apache/hadoop/pull/130#discussion_r83421605 --- Diff: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md --- @@ -881,40 +881,362 @@ Seoul If the wrong endpoint is used, the request may fail. This may be reported as a 301/redirect error, or as a 400 Bad Request. -### S3AFastOutputStream - **Warning: NEW in hadoop 2.7. UNSTABLE, EXPERIMENTAL: use at own risk** - - fs.s3a.fast.upload - false - Upload directly from memory instead of buffering to - disk first. Memory usage and parallelism can be controlled as up to - fs.s3a.multipart.size memory is consumed for each (part)upload actively - uploading (fs.s3a.threads.max) or queueing (fs.s3a.max.total.tasks) - - - fs.s3a.fast.buffer.size - 1048576 - Size (in bytes) of initial memory buffer allocated for an - upload. No effect if fs.s3a.fast.upload is false. - +### Stabilizing: S3A Fast Upload + + +**New in Hadoop 2.7; significantly enhanced in Hadoop 2.9** + + +Because of the nature of the S3 object store, data written to an S3A `OutputStream` +is not written incrementally —instead, by default, it is buffered to disk +until the stream is closed in its `close()` method. + +This can make output slow: + +* The execution time for `OutputStream.close()` is proportional to the amount of data +buffered and inversely proportional to the bandwidth. That is `O(data/bandwidth)`. +* The bandwidth is that available from the host to S3: other work in the same +process, server or network at the time of upload may increase the upload time, +hence the duration of the `close()` call. +* If a process uploading data fails before `OutputStream.close()` is called, +all data is lost. 
+* The disks hosting temporary directories defined in `fs.s3a.buffer.dir` must +have the capacity to store the entire buffered file. + +Put succinctly: the further the process is from the S3 endpoint, or the smaller +the EC2-hosted VM is, the longer it will take for work to complete. + +This can create problems in application code: + +* Code often assumes that the `close()` call is fast; + the delays can create bottlenecks in operations. +* Very slow uploads sometimes cause applications to time out. (generally, +threads blocking during the upload stop reporting progress, so trigger timeouts) +* Streaming very large amounts of data may consume all disk space before the upload begins. + + +Work to address this began in Hadoop 2.7 with the `S3AFastOutputStream` +[HADOOP-11183](https://issues.apache.org/jira/browse/HADOOP-11183), and +has continued with `S3ABlockOutputStream` +[HADOOP-13560](https://issues.apache.org/jira/browse/HADOOP-13560). + + +This adds an alternative output stream, "S3A Fast Upload", which: + +1. Always uploads large files as blocks with the size set by +`fs.s3a.multipart.size`. That is: the threshold at which multipart uploads +begin and the size of each upload are identical. +1. Buffers blocks to disk (default) or in on-heap or off-heap memory. +1. Uploads blocks in parallel in background threads. +1. Begins uploading blocks as soon as the buffered data exceeds this partition +size. +1. When buffering data to disk, uses the directory/directories listed in +`fs.s3a.buffer.dir`. The size of data which can be buffered is limited +to the available disk space. +1. Generates output statistics as metrics on the filesystem, including +statistics of active and pending block uploads. +1. Has the time to `close()` set by the amount of remaining data to upload, rather +than the total size of the file. 
+ +With incremental writes of blocks, "S3A fast upload" offers an upload +time at least as fast as the "classic" mechanism, with significant benefits +on long-lived output streams, and when very large amounts of data are generated. +The in-memory buffering mechanism may also offer speedup when running adjacent to +S3 endpoints, as disks are not used for intermediate data storage. + + +```xml + + fs.s3a.fast.upload + true + +Use the incremental block upload mechanism with +the buffering mechanism set in fs.s3a.fast.upload.buffer. +The number of threads performing uploads in the filesystem is defined +by fs.s3a.threads.max; the queue of
[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575322#comment-15575322 ] ASF GitHub Bot commented on HADOOP-13560: - Github user steveloughran commented on a diff in the pull request: https://github.com/apache/hadoop/pull/130#discussion_r83420236 --- Diff: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md --- (this comment quotes the same index.md hunk reproduced in full in the comment above)
[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575317#comment-15575317 ] ASF GitHub Bot commented on HADOOP-13560: - Github user steveloughran commented on a diff in the pull request: https://github.com/apache/hadoop/pull/130#discussion_r83419925 --- Diff: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md --- (this comment quotes the same index.md hunk reproduced in full in an earlier comment)
[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575312#comment-15575312 ] ASF GitHub Bot commented on HADOOP-13560: - Github user thodemoor commented on a diff in the pull request: https://github.com/apache/hadoop/pull/130#discussion_r83419562 --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3ABlockOutputStream.java --- @@ -0,0 +1,699 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package org.apache.hadoop.fs.s3a; + +import java.io.IOException; +import java.io.InputStream; +import java.io.OutputStream; +import java.util.ArrayList; +import java.util.List; +import java.util.concurrent.Callable; +import java.util.concurrent.ExecutionException; +import java.util.concurrent.ExecutorService; +import java.util.concurrent.TimeUnit; +import java.util.concurrent.atomic.AtomicBoolean; + +import com.amazonaws.AmazonClientException; +import com.amazonaws.event.ProgressEvent; +import com.amazonaws.event.ProgressEventType; +import com.amazonaws.event.ProgressListener; +import com.amazonaws.services.s3.model.CompleteMultipartUploadResult; +import com.amazonaws.services.s3.model.PartETag; +import com.amazonaws.services.s3.model.PutObjectRequest; +import com.amazonaws.services.s3.model.PutObjectResult; +import com.amazonaws.services.s3.model.UploadPartRequest; +import com.google.common.base.Preconditions; +import com.google.common.util.concurrent.Futures; +import com.google.common.util.concurrent.ListenableFuture; +import com.google.common.util.concurrent.ListeningExecutorService; +import com.google.common.util.concurrent.MoreExecutors; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import org.apache.hadoop.classification.InterfaceAudience; +import org.apache.hadoop.classification.InterfaceStability; +import org.apache.hadoop.io.IOUtils; +import org.apache.hadoop.io.retry.RetryPolicies; +import org.apache.hadoop.io.retry.RetryPolicy; +import org.apache.hadoop.util.Progressable; + +import static org.apache.hadoop.fs.s3a.S3AUtils.*; +import static org.apache.hadoop.fs.s3a.Statistic.*; + +/** + * Upload files/parts directly via different buffering mechanisms: + * including memory and disk. + * + * If the stream is closed and no update has started, then the upload + * is instead done as a single PUT operation. + * + * Unstable: statistics and error handling might evolve. 
+ */ +@InterfaceAudience.Private +@InterfaceStability.Unstable +class S3ABlockOutputStream extends OutputStream { + + private static final Logger LOG = + LoggerFactory.getLogger(S3ABlockOutputStream.class); + + /** Owner FileSystem. */ + private final S3AFileSystem fs; + + /** Object being uploaded. */ + private final String key; + + /** Size of all blocks. */ + private final int blockSize; + + /** Callback for progress. */ + private final ProgressListener progressListener; + private final ListeningExecutorService executorService; + + /** + * Retry policy for multipart commits; not all AWS SDK versions retry that. + */ + private final RetryPolicy retryPolicy = + RetryPolicies.retryUpToMaximumCountWithProportionalSleep( + 5, + 2000, + TimeUnit.MILLISECONDS); + /** + * Factory for blocks. + */ + private final S3ADataBlocks.BlockFactory blockFactory; + + /** Preallocated byte buffer for writing single characters. */ + private final byte[] singleCharWrite = new byte[1]; + + /** Multipart upload details; null means none started. */ + private MultiPartUpload multiPartUpload; + + /** Closed flag. */ + private final AtomicBoolean closed = new AtomicBoolean(false); + + /** Current data block. Null means none currently
[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575309#comment-15575309 ] ASF GitHub Bot commented on HADOOP-13560: - Github user steveloughran commented on a diff in the pull request: https://github.com/apache/hadoop/pull/130#discussion_r83419491 --- Diff: hadoop-tools/hadoop-aws/src/site/markdown/tools/hadoop-aws/index.md --- @@ -881,40 +881,362 @@ Seoul If the wrong endpoint is used, the request may fail. This may be reported as a 301/redirect error, or as a 400 Bad Request. -### S3AFastOutputStream - **Warning: NEW in hadoop 2.7. UNSTABLE, EXPERIMENTAL: use at own risk** - - fs.s3a.fast.upload - false - Upload directly from memory instead of buffering to - disk first. Memory usage and parallelism can be controlled as up to - fs.s3a.multipart.size memory is consumed for each (part)upload actively - uploading (fs.s3a.threads.max) or queueing (fs.s3a.max.total.tasks) - - - fs.s3a.fast.buffer.size - 1048576 - Size (in bytes) of initial memory buffer allocated for an - upload. No effect if fs.s3a.fast.upload is false. - +### Stabilizing: S3A Fast Upload + + +**New in Hadoop 2.7; significantly enhanced in Hadoop 2.9** + + +Because of the nature of the S3 object store, data written to an S3A `OutputStream` +is not written incrementally —instead, by default, it is buffered to disk +until the stream is closed in its `close()` method. + +This can make output slow: + +* The execution time for `OutputStream.close()` is proportional to the amount of data +buffered and inversely proportional to the bandwidth. That is `O(data/bandwidth)`. +* The bandwidth is that available from the host to S3: other work in the same +process, server or network at the time of upload may increase the upload time, +hence the duration of the `close()` call. +* If a process uploading data fails before `OutputStream.close()` is called, +all data is lost. 
+* The disks hosting temporary directories defined in `fs.s3a.buffer.dir` must +have the capacity to store the entire buffered file. + +Put succinctly: the further the process is from the S3 endpoint, or the smaller +the EC-hosted VM is, the longer it will take work to complete. + +This can create problems in application code: + +* Code often assumes that the `close()` call is fast; + the delays can create bottlenecks in operations. +* Very slow uploads sometimes cause applications to time out. (generally, +threads blocking during the upload stop reporting progress, so trigger timeouts) +* Streaming very large amounts of data may consume all disk space before the upload begins. + + +Work to addess this began in Hadoop 2.7 with the `S3AFastOutputStream` +[HADOOP-11183](https://issues.apache.org/jira/browse/HADOOP-11183), and +has continued with ` S3ABlockOutputStream` +[HADOOP-13560](https://issues.apache.org/jira/browse/HADOOP-13560). + + +This adds an alternative output stream, "S3a Fast Upload" which: + +1. Always uploads large files as blocks with the size set by +`fs.s3a.multipart.size`. That is: the threshold at which multipart uploads +begin and the size of each upload are identical. +1. Buffers blocks to disk (default) or in on-heap or off-heap memory. +1. Uploads blocks in parallel in background threads. +1. Begins uploading blocks as soon as the buffered data exceeds this partition +size. +1. When buffering data to disk, uses the directory/directories listed in +`fs.s3a.buffer.dir`. The size of data which can be buffered is limited +to the available disk space. +1. Generates output statistics as metrics on the filesystem, including +statistics of active and pending block uploads. +1. Has the time to `close()` set by the amount of remaning data to upload, rather +than the total size of the file. 
+ +With incremental writes of blocks, "S3A fast upload" offers an upload +time at least as fast as the "classic" mechanism, with significant benefits +on long-lived output streams, and when very large amounts of data are generated. +The in-memory buffering mechanisms may also offer a speedup when running adjacent to +S3 endpoints, as disks are not used for intermediate data storage. + + +```xml + + fs.s3a.fast.upload + true + +Use the incremental block upload mechanism with +the buffering mechanism set in fs.s3a.fast.upload.buffer. +The number of threads performing uploads in the filesystem is defined +by fs.s3a.threads.max; the queue
[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575264#comment-15575264 ] ASF GitHub Bot commented on HADOOP-13560: - Github user pieterreuse commented on a diff in the pull request: https://github.com/apache/hadoop/pull/130#discussion_r83416183 --- Diff: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java --- @@ -118,21 +126,37 @@ private long partSize; private boolean enableMultiObjectsDelete; private TransferManager transfers; - private ExecutorService threadPoolExecutor; + private ListeningExecutorService threadPoolExecutor; private long multiPartThreshold; public static final Logger LOG = LoggerFactory.getLogger(S3AFileSystem.class); + private static final Logger PROGRESS = + LoggerFactory.getLogger("org.apache.hadoop.fs.s3a.S3AFileSystem.Progress"); + private LocalDirAllocator directoryAllocator; private CannedAccessControlList cannedACL; private String serverSideEncryptionAlgorithm; private S3AInstrumentation instrumentation; private S3AStorageStatistics storageStatistics; private long readAhead; private S3AInputPolicy inputPolicy; - private static final AtomicBoolean warnedOfCoreThreadDeprecation = - new AtomicBoolean(false); private final AtomicBoolean closed = new AtomicBoolean(false); // The maximum number of entries that can be deleted in any call to s3 private static final int MAX_ENTRIES_TO_DELETE = 1000; + private boolean blockUploadEnabled; + private String blockOutputBuffer; + private S3ADataBlocks.BlockFactory blockFactory; + private int blockOutputActiveBlocks; + + /* + * Register Deprecated options. + */ + static { +Configuration.addDeprecations(new Configuration.DeprecationDelta[]{ +new Configuration.DeprecationDelta("fs.s3a.threads.core", +null, --- End diff -- I'm not familiar with DeprecationDelta's, but this _null_ value gave rise to a nullpointerexception on **all** unit tests when fs.s3a.threads.core was in my config. 
Replacing this _null_ with _""_ (empty string) resolved my issue, but I'm not 100% sure that is the right thing to do here. > S3ABlockOutputStream to support huge (many GB) file writes > -- > > Key: HADOOP-13560 > URL: https://issues.apache.org/jira/browse/HADOOP-13560 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13560-branch-2-001.patch, > HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, > HADOOP-13560-branch-2-004.patch > > > An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights > that metadata isn't copied on large copies. > 1. Add a test to do that large copy/rname and verify that the copy really > works > 2. Verify that metadata makes it over. > Verifying large file rename is important on its own, as it is needed for very > large commit operations for committers using rename -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
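For what it's worth, the null-value hazard reported above is common to map-backed registry APIs; this stdlib-only sketch does not reproduce the actual `Configuration.addDeprecations` code path, it only illustrates why an empty string can be a safer "no replacement" sentinel than `null`:

```java
import java.util.Hashtable;

/**
 * Many map-backed registries (Hashtable, ConcurrentHashMap, Properties
 * backed by Hashtable) forbid null values outright, so registering a
 * deprecated key with a null replacement can surface as an NPE far from
 * the registration site. Illustrative only.
 */
public class NullSentinel {
    public static boolean rejectsNull() {
        Hashtable<String, String> registry = new Hashtable<>();
        try {
            registry.put("fs.s3a.threads.core", null);  // NPE: Hashtable forbids null values
            return false;
        } catch (NullPointerException expected) {
            registry.put("fs.s3a.threads.core", "");    // an empty string is accepted
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println("null rejected: " + rejectsNull());
    }
}
```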
[jira] [Commented] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup
[ https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575261#comment-15575261 ] Steve Loughran commented on HADOOP-13716: - # I'll see about suppressing that javac warning # checkstyle is being fussy about indentation in the lambda expressions. Not sure what to do there...maybe it's something the checker isn't ready for yet, or we need to look at its defaults. # Anu: in the production-side retry code we have "ProportionalSleep"; I'll use that as the term here too > Add LambdaTestUtils class for tests; fix eventual consistency problem in > contract test setup > > > Key: HADOOP-13716 > URL: https://issues.apache.org/jira/browse/HADOOP-13716 > Project: Hadoop Common > Issue Type: New Feature > Components: test >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13716-001.patch, HADOOP-13716-002.patch, > HADOOP-13716-003.patch, HADOOP-13716-005.patch, > HADOOP-13716-branch-2-004.patch > > > To make our tests robust against timing problems and eventual consistent > stores, we need to do more spin & wait for state. > We have some code in {{GenericTestUtils.waitFor}} to await a condition being > met, but the predicate it calls doesn't throw exceptions, there's no way for > a probe to throw an exception, and all you get is the eventual "timed out" > message. > We can do better, and in closure-ready languages (scala & scalatest, groovy > and some slider code) we've examples to follow. Some of that work has been > reimplemented slightly in {{S3ATestUtils.eventually}} > I propose adding a class in the test tree, {{Eventually}} to be a > successor/replacement for these. > # has an eventually/waitfor operation taking a predicate that throws an > exception > # has an "evaluate" exception which tries to evaluate an answer until the > operation stops raising an exception. 
(again, from scalatest) > # plugin backoff strategies (from Scalatest; lets you do exponential as well > as linear) > # option of adding a special handler to generate the failure exception (e.g. > run more detailed diagnostics for the exception text, etc). > # be Java 8 lambda expression friendly > # be testable and tested itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
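The behavior described in the issue (a probe that may throw, retried with a proportional sleep until a deadline, with the last failure rethrown on timeout) can be outlined as below. `Eventually.eventually` is a hypothetical helper written for this sketch, not the committed LambdaTestUtils API:

```java
import java.util.concurrent.Callable;

/**
 * Minimal sketch of the "eventually" idea: retry a probe that is allowed
 * to throw, back off proportionally between attempts, and rethrow the
 * last failure once the deadline passes.
 */
public class Eventually {
    public static <T> T eventually(long timeoutMillis, long intervalMillis,
                                   Callable<T> probe) throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        int attempt = 0;
        Exception last = null;
        while (System.currentTimeMillis() <= deadline) {
            try {
                return probe.call();                     // probes may throw, unlike a plain predicate
            } catch (Exception e) {
                last = e;
                attempt++;
                Thread.sleep(intervalMillis * attempt);  // proportional ("ProportionalSleep") backoff
            }
        }
        // Surface the real failure, not just a generic "timed out" message.
        throw last != null ? last : new IllegalStateException("timed out with no probe failure recorded");
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        String result = eventually(5000, 10, () -> {
            if (++calls[0] < 3) {
                throw new IllegalStateException("not ready yet");
            }
            return "ready";
        });
        System.out.println(result + " after " + calls[0] + " probes");
    }
}
```

Rethrowing the last probe exception is what addresses the complaint quoted above that `GenericTestUtils.waitFor` only ever reports "timed out".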
[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575244#comment-15575244 ] ASF GitHub Bot commented on HADOOP-13560: - Github user thodemoor commented on a diff in the pull request: https://github.com/apache/hadoop/pull/130#discussion_r83415072 --- Diff: hadoop-common-project/hadoop-common/src/main/resources/core-default.xml --- @@ -1095,10 +1102,50 @@ fs.s3a.fast.upload false - Upload directly from memory instead of buffering to -disk first. Memory usage and parallelism can be controlled as up to -fs.s3a.multipart.size memory is consumed for each (part)upload actively -uploading (fs.s3a.threads.max) or queueing (fs.s3a.max.total.tasks) + +Use the incremental block-based fast upload mechanism with +the buffering mechanism set in fs.s3a.fast.upload.buffer. + + + + + fs.s3a.fast.upload.buffer + disk + +The buffering mechanism to use when using S3A fast upload +(fs.s3a.fast.upload=true). Values: disk, array, bytebuffer. +This configuration option has no effect if fs.s3a.fast.upload is false. + +"disk" will use the directories listed in fs.s3a.buffer.dir as +the location(s) to save data prior to being uploaded. + +"array" uses arrays in the JVM heap + +"bytebuffer" uses off-heap memory within the JVM. + +Both "array" and "bytebuffer" will consume memory in a single stream up to the number +of blocks set by: + +fs.s3a.multipart.size * fs.s3a.fast.upload.active.blocks. + +If using either of these mechanisms, keep this value low + +The total number of threads performing work across all threads is set by +fs.s3a.threads.max, with fs.s3a.max.total.tasks values setting the number of queued +work items. --- End diff -- Completely agree. 
A bit further down I propose to add a single explanation in the javadoc and link to there in the various other locations > S3ABlockOutputStream to support huge (many GB) file writes > -- > > Key: HADOOP-13560 > URL: https://issues.apache.org/jira/browse/HADOOP-13560 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13560-branch-2-001.patch, > HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, > HADOOP-13560-branch-2-004.patch > > > An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights > that metadata isn't copied on large copies. > 1. Add a test to do that large copy/rname and verify that the copy really > works > 2. Verify that metadata makes it over. > Verifying large file rename is important on its own, as it is needed for very > large commit operations for committers using rename -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
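The memory bound quoted in the property description, `fs.s3a.multipart.size * fs.s3a.fast.upload.active.blocks` per stream, is easy to sanity-check with a little arithmetic. The property values below are illustrative, not Hadoop defaults:

```java
/**
 * Worst-case heap/off-heap consumption for the "array" and "bytebuffer"
 * buffering modes: each stream may hold up to
 * fs.s3a.multipart.size * fs.s3a.fast.upload.active.blocks bytes,
 * multiplied across all concurrently open output streams.
 */
public class BufferBudget {
    /** Bytes of buffer a single output stream may consume. */
    public static long perStreamBytes(long multipartSize, int activeBlocks) {
        return multipartSize * activeBlocks;
    }

    public static void main(String[] args) {
        long multipartSize = 100L * 1024 * 1024;  // e.g. fs.s3a.multipart.size = 100M (illustrative)
        int activeBlocks = 4;                     // e.g. fs.s3a.fast.upload.active.blocks = 4 (illustrative)
        int streams = 8;                          // concurrent output streams in the process
        long worstCase = streams * perStreamBytes(multipartSize, activeBlocks);
        System.out.println("worst-case buffer memory: " + (worstCase >> 20) + " MB");
    }
}
```

This is why the description warns to keep the value low when using the memory-backed mechanisms: the bound scales with every open stream, not just one.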
[jira] [Commented] (HADOOP-13560) S3ABlockOutputStream to support huge (many GB) file writes
[ https://issues.apache.org/jira/browse/HADOOP-13560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575234#comment-15575234 ] ASF GitHub Bot commented on HADOOP-13560: - Github user steveloughran commented on a diff in the pull request: https://github.com/apache/hadoop/pull/130#discussion_r83414208 --- Diff: hadoop-common-project/hadoop-common/src/main/resources/core-default.xml --- @@ -1095,10 +1102,50 @@ fs.s3a.fast.upload false - Upload directly from memory instead of buffering to -disk first. Memory usage and parallelism can be controlled as up to -fs.s3a.multipart.size memory is consumed for each (part)upload actively -uploading (fs.s3a.threads.max) or queueing (fs.s3a.max.total.tasks) + +Use the incremental block-based fast upload mechanism with +the buffering mechanism set in fs.s3a.fast.upload.buffer. + + + + + fs.s3a.fast.upload.buffer + disk + +The buffering mechanism to use when using S3A fast upload +(fs.s3a.fast.upload=true). Values: disk, array, bytebuffer. +This configuration option has no effect if fs.s3a.fast.upload is false. + +"disk" will use the directories listed in fs.s3a.buffer.dir as +the location(s) to save data prior to being uploaded. + +"array" uses arrays in the JVM heap + +"bytebuffer" uses off-heap memory within the JVM. + +Both "array" and "bytebuffer" will consume memory in a single stream up to the number +of blocks set by: + +fs.s3a.multipart.size * fs.s3a.fast.upload.active.blocks. + +If using either of these mechanisms, keep this value low + +The total number of threads performing work across all threads is set by +fs.s3a.threads.max, with fs.s3a.max.total.tasks values setting the number of queued +work items. --- End diff -- you know, now that you can have a queue per stream, it could be set to something bigger. This is something we could look at in the docs, leaving out of the XML so as to have a single topic. 
This phrase here describes the number of active threads, which is different, and will be more so once there's other work (COPY, DELETE) going on there. So: won't change here. > S3ABlockOutputStream to support huge (many GB) file writes > -- > > Key: HADOOP-13560 > URL: https://issues.apache.org/jira/browse/HADOOP-13560 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13560-branch-2-001.patch, > HADOOP-13560-branch-2-002.patch, HADOOP-13560-branch-2-003.patch, > HADOOP-13560-branch-2-004.patch > > > An AWS SDK [issue|https://github.com/aws/aws-sdk-java/issues/367] highlights > that metadata isn't copied on large copies. > 1. Add a test to do that large copy/rename and verify that the copy really > works > 2. Verify that metadata makes it over. > Verifying large file rename is important on its own, as it is needed for very > large commit operations for committers using rename -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup
[ https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575205#comment-15575205 ] Hadoop QA commented on HADOOP-13716: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 5s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 48s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 48s{color} | {color:red} root generated 1 new + 701 unchanged - 1 fixed = 702 total (was 702) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 41s{color} | {color:orange} root: The patch generated 9 new + 25 unchanged - 0 fixed = 34 total (was 25) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 47s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 75m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-13716 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833351/HADOOP-13716-005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 962c60cb8391 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / dbe663d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/10788/artifact/patchprocess/diff-compile-javac-root.txt | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10788/artifact/patchprocess/diff-checkstyle-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10788/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output |
[jira] [Commented] (HADOOP-13692) hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent classpath conflicts.
[ https://issues.apache.org/jira/browse/HADOOP-13692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575140#comment-15575140 ] Steve Loughran commented on HADOOP-13692: - just an FYI, this broke *my own* unit tests on the SPARK-1481 branch {code} 2016-10-13 18:51:13,888 [ScalaTest-main] INFO cloud.CloudSuite (Logging.scala:logInfo(54)) - Loading configuration from ../../cloud.xml 2016-10-13 18:51:14,214 [ScalaTest-main] WARN util.NativeCodeLoader (NativeCodeLoader.java:(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable *** RUN ABORTED *** java.lang.NoSuchMethodError: com.fasterxml.jackson.core.JsonFactory.requiresPropertyOrdering()Z at com.fasterxml.jackson.databind.ObjectMapper.(ObjectMapper.java:537) at com.fasterxml.jackson.databind.ObjectMapper.(ObjectMapper.java:448) at com.amazonaws.util.json.Jackson.(Jackson.java:32) at com.amazonaws.internal.config.InternalConfig.loadfrom(InternalConfig.java:230) at com.amazonaws.internal.config.InternalConfig.load(InternalConfig.java:247) at com.amazonaws.internal.config.InternalConfig$Factory.(InternalConfig.java:282) at com.amazonaws.util.VersionInfoUtils.userAgent(VersionInfoUtils.java:139) at com.amazonaws.util.VersionInfoUtils.initializeUserAgent(VersionInfoUtils.java:134) at com.amazonaws.util.VersionInfoUtils.getUserAgent(VersionInfoUtils.java:95) at com.amazonaws.ClientConfiguration.(ClientConfiguration.java:42) ... [INFO] [INFO] BUILD FAILURE {code} But this problem goes away in spark-assembly, the release of spark, etc. Purely this module. Which is why I didn't catch this earlier as the system integration tests were all happy. cause: # there's a newer version of jackson in use in spark (2.6.5) # which overrides the declarations of {{jackson-annotations}} and {{jackson-databind}} under hadoop-aws # and which have transitive dependencies on jackson-common. 
# the explicit declaration of jackson-common has pulled that reference one step up the dependency graph (i.e. from under spark-cloud/hadoop-aws/amazon-aws/jackson-common.jar) to spark-cloud/hadoop-aws/jackson-common.jar. # which gives the hadoop-aws version precedence over the one transitively referenced by the (overridden) jackson-annotations, pulled in directly from the spark-core JAR. # so creating a version inconsistency which surfaces during test runs. The problem isn't in spark-assembly.jar as it refers to the spark-core jar directly, picking that version up instead. Essentially: the fact that Maven uses closest-version-first in its version resolution policy means that the depth of transitive dependencies controls whether things run or not; the explicit declaration of the dependency was enough to cause this to surface. Fix: explicitly exclude the hadoop-aws jackson dependencies, as was already done for hadoop-azure. This is not me faulting my own work (how would I!), only showing that you do need to be careful across projects as to what transitive stuff you pull in, as it turns out to be incredibly brittle. We didn't change the jackson version here, only made that choice explicit, and a downstream test suite fails. > hadoop-aws should declare explicit dependency on Jackson 2 jars to prevent > classpath conflicts. > --- > > Key: HADOOP-13692 > URL: https://issues.apache.org/jira/browse/HADOOP-13692 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13692-branch-2.001.patch > > > If an end user's application has a dependency on hadoop-aws and no other > Hadoop artifacts, then it picks up a transitive dependency on Jackson 2.5.3 > jars through the AWS SDK. This can cause conflicts at deployment time, > because Hadoop has a dependency on version 2.2.3, and the 2 versions are not > compatible with one another. 
We can prevent this problem by changing > hadoop-aws to declare explicit dependencies on the Jackson artifacts, at the > version Hadoop wants. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
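The fix Steve describes ("explicitly exclude the hadoop-aws jackson dependencies") would look roughly like this in a downstream project's pom. The Jackson coordinates are the usual `com.fasterxml.jackson.core` ones, but treat this as a sketch of the pattern rather than the exact patch applied to Spark:

```xml
<!-- Sketch: a downstream project that depends on hadoop-aws excludes the
     Jackson 2 artifacts hadoop-aws now declares, so the project's own
     Jackson version (and its transitive jackson-core) wins dependency
     mediation. ${hadoop.version} is a placeholder. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-aws</artifactId>
  <version>${hadoop.version}</version>
  <exclusions>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-annotations</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Because Maven's mediation picks the declaration nearest the root of the dependency tree, excluding the transitive copies is more robust than hoping the "right" version happens to sit at a shallower depth.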
[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes
[ https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15575039#comment-15575039 ] Hadoop QA commented on HADOOP-11798: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 7m 9s{color} | {color:green} the 
patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 59s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HADOOP-11798 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833344/HADOOP-11798-v4.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml cc findbugs checkstyle | | uname | Linux b9da617a3995 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / dbe663d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10787/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10787/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Native raw erasure coder in XOR codes > - > > Key: HADOOP-11798 > URL: https://issues.apache.org/jira/browse/HADOOP-11798 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Kai Zheng >
[jira] [Updated] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup
[ https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13716: Attachment: HADOOP-13716-005.patch HADOOP-13716 patch 005 * rename methods to be more consistent with GTU.waitFor and scalatest * reorder params to put lambda expressions at end of methods * add detailed javadocs with examples, as this whole lambda-expression test stuff is new to the codebase * try to address checkstyle issues * move all java 8 test cases to same section of test suite * and rename everything to list entire set of closures * plus fix where java7 didn't compile AbstractContractRootDirectoryTest > Add LambdaTestUtils class for tests; fix eventual consistency problem in > contract test setup > > > Key: HADOOP-13716 > URL: https://issues.apache.org/jira/browse/HADOOP-13716 > Project: Hadoop Common > Issue Type: New Feature > Components: test >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13716-001.patch, HADOOP-13716-002.patch, > HADOOP-13716-003.patch, HADOOP-13716-005.patch, > HADOOP-13716-branch-2-004.patch > > > To make our tests robust against timing problems and eventual consistent > stores, we need to do more spin & wait for state. > We have some code in {{GenericTestUtils.waitFor}} to await a condition being > met, but the predicate it calls doesn't throw exceptions, there's no way for > a probe to throw an exception, and all you get is the eventual "timed out" > message. > We can do better, and in closure-ready languages (scala & scalatest, groovy > and some slider code) we've examples to follow. Some of that work has been > reimplemented slightly in {{S3ATestUtils.eventually}} > I propose adding a class in the test tree, {{Eventually}} to be a > successor/replacement for these. 
> # has an eventually/waitfor operation taking a predicate that throws an > exception > # has an "evaluate" exception which tries to evaluate an answer until the > operation stops raising an exception. (again, from scalatest) > # plugin backoff strategies (from Scalatest; lets you do exponential as well > as linear) > # option of adding a special handler to generate the failure exception (e.g. > run more detailed diagnostics for the exception text, etc). > # be Java 8 lambda expression friendly > # be testable and tested itself. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13716) Add LambdaTestUtils class for tests; fix eventual consistency problem in contract test setup
[ https://issues.apache.org/jira/browse/HADOOP-13716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13716: Status: Patch Available (was: Open) > Add LambdaTestUtils class for tests; fix eventual consistency problem in > contract test setup > > > Key: HADOOP-13716 > URL: https://issues.apache.org/jira/browse/HADOOP-13716 > Project: Hadoop Common > Issue Type: New Feature > Components: test >Affects Versions: 2.8.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13716-001.patch, HADOOP-13716-002.patch, > HADOOP-13716-003.patch, HADOOP-13716-005.patch, > HADOOP-13716-branch-2-004.patch > > > To make our tests robust against timing problems and eventual consistent > stores, we need to do more spin & wait for state. > We have some code in {{GenericTestUtils.waitFor}} to await a condition being > met, but the predicate it calls doesn't throw exceptions, there's no way for > a probe to throw an exception, and all you get is the eventual "timed out" > message. > We can do better, and in closure-ready languages (scala & scalatest, groovy > and some slider code) we've examples to follow. Some of that work has been > reimplemented slightly in {{S3ATestUtils.eventually}} > I propose adding a class in the test tree, {{Eventually}} to be a > successor/replacement for these. > # has an eventually/waitfor operation taking a predicate that throws an > exception > # has an "evaluate" exception which tries to evaluate an answer until the > operation stops raising an exception. (again, from scalatest) > # plugin backoff strategies (from Scalatest; lets you do exponential as well > as linear) > # option of adding a special handler to generate the failure exception (e.g. > run more detailed diagnostics for the exception text, etc). > # be Java 8 lambda expression friendly > # be testable and tested itself. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11798) Native raw erasure coder in XOR codes
[ https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HADOOP-11798: --- Attachment: HADOOP-11798-v4.patch Fix ASF and style check issue > Native raw erasure coder in XOR codes > - > > Key: HADOOP-11798 > URL: https://issues.apache.org/jira/browse/HADOOP-11798 > Project: Hadoop Common > Issue Type: Sub-task > Components: io >Reporter: Kai Zheng >Assignee: SammiChen > Labels: hdfs-ec-3.0-must-do > Fix For: HDFS-7285 > > Attachments: HADOOP-11798-v1.patch, HADOOP-11798-v2.patch, > HADOOP-11798-v3.patch, HADOOP-11798-v4.patch > > > Raw XOR coder is utilized in Reed-Solomon erasure coder in an optimization to > recover only one erased block which is in most often case. It can also be > used in HitchHiker coder. Therefore a native implementation of it would be > deserved for performance gain. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574872#comment-15574872 ]

Hadoop QA commented on HADOOP-13037:

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 24s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 51 new or modified test files. |
| +1 | mvninstall | 8m 54s | trunk passed |
| +1 | compile | 0m 17s | trunk passed |
| +1 | checkstyle | 0m 11s | trunk passed |
| +1 | mvnsite | 0m 20s | trunk passed |
| +1 | mvneclipse | 0m 20s | trunk passed |
| +1 | findbugs | 0m 28s | trunk passed |
| +1 | javadoc | 0m 13s | trunk passed |
| -1 | mvninstall | 0m 7s | hadoop-azure-datalake in the patch failed. |
| -1 | compile | 0m 7s | hadoop-azure-datalake in the patch failed. |
| -1 | javac | 0m 7s | hadoop-azure-datalake in the patch failed. |
| +1 | checkstyle | 0m 6s | the patch passed |
| -1 | mvnsite | 0m 6s | hadoop-azure-datalake in the patch failed. |
| -1 | mvneclipse | 0m 6s | hadoop-azure-datalake in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 4s | The patch has no ill-formed XML file. |
| -1 | findbugs | 0m 7s | hadoop-azure-datalake in the patch failed. |
| -1 | javadoc | 0m 6s | hadoop-azure-datalake in the patch failed. |
| -1 | unit | 0m 7s | hadoop-azure-datalake in the patch failed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 13m 48s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-13037 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/1282/HADOOP-13037-001.patch |
| Optional Tests | asflicense findbugs xml compile javac javadoc mvninstall mvnsite unit checkstyle |
| uname | Linux bb9edc29ca88 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dbe663d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| mvninstall | https://builds.apache.org/job/PreCommit-HADOOP-Build/10786/artifact/patchprocess/patch-mvninstall-hadoop-tools_hadoop-azure-datalake.txt |
| compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/10786/artifact/patchprocess/patch-compile-hadoop-tools_hadoop-azure-datalake.txt |
| javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/10786/artifact/patchprocess/patch-compile-hadoop-tools_hadoop-azure-datalake.txt |
| mvnsite | https://builds.apache.org/job/PreCommit-HADOOP-Build/10786/artifact/patchprocess/patch-mvnsite-hadoop-tools_hadoop-azure-datalake.txt |
| mvneclipse |
[jira] [Commented] (HADOOP-11798) Native raw erasure coder in XOR codes
[ https://issues.apache.org/jira/browse/HADOOP-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574860#comment-15574860 ]

Hadoop QA commented on HADOOP-11798:

| (x) *-1 overall* |

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 15s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 5 new or modified test files. |
| +1 | mvninstall | 6m 37s | trunk passed |
| +1 | compile | 6m 47s | trunk passed |
| +1 | checkstyle | 0m 23s | trunk passed |
| +1 | mvnsite | 0m 55s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 18s | trunk passed |
| +1 | javadoc | 0m 43s | trunk passed |
| +1 | mvninstall | 0m 36s | the patch passed |
| +1 | compile | 6m 48s | the patch passed |
| +1 | cc | 6m 48s | the patch passed |
| +1 | javac | 6m 48s | the patch passed |
| -0 | checkstyle | 0m 23s | hadoop-common-project/hadoop-common: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) |
| +1 | mvnsite | 0m 52s | the patch passed |
| +1 | mvneclipse | 0m 13s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 1m 25s | the patch passed |
| +1 | javadoc | 0m 41s | the patch passed |
| +1 | unit | 8m 5s | hadoop-common in the patch passed. |
| -1 | asflicense | 0m 21s | The patch generated 1 ASF License warnings. |
| | | 38m 6s | |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HADOOP-11798 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12833328/HADOOP-11798-v3.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml cc findbugs checkstyle |
| uname | Linux d905536c6e03 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / dbe663d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10785/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10785/testReport/ |
| asflicense | https://builds.apache.org/job/PreCommit-HADOOP-Build/10785/artifact/patchprocess/patch-asflicense-problems.txt |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10785/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |
[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vishwajeet Dusane updated HADOOP-13037:
---
Attachment: HADOOP-13037-001.patch

- The Adl SDK dependency is in the process of being submitted to the Maven repository. Once the submission goes through, I will update this JIRA.
- We have added additional test cases on top of the existing tests submitted as part of HADOOP-12875.
- I have also incorporated review comments from HADOOP-13257. As suggested by [~chris.douglas], the patch is updated against trunk instead of the feature branch.

> Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
> -
> Key: HADOOP-13037
> URL: https://issues.apache.org/jira/browse/HADOOP-13037
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs, fs/azure, tools
> Reporter: Shrikant Naidu
> Assignee: Vishwajeet Dusane
> Fix For: 2.9.0
> Attachments: HADOOP-13037 Proposal.pdf, HADOOP-13037-001.patch
>
> The jira proposes an improvement over HADOOP-12666 to remove webhdfs dependencies from the ADL file system client and build out a standalone client. At a high level, this approach would extend the Hadoop file system class to provide an implementation for accessing Azure Data Lake. The scheme used for accessing the file system will continue to be adl://.azuredatalake.net/path/to/file.
> The Azure Data Lake Cloud Store will continue to provide a webHDFS rest interface. The client will access the ADLS store using WebHDFS Rest APIs provided by the ADLS store.
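For context on "extend the Hadoop file system class": Hadoop maps a URI scheme to a FileSystem implementation through configuration. A hedged core-site.xml sketch follows; the property and class names match what the ADL connector eventually shipped with in later Hadoop releases, and are illustrative here rather than taken from this patch.

```xml
<!-- core-site.xml: map the adl:// scheme to the ADL client.
     Class/property names follow later Hadoop releases; treat as illustrative. -->
<configuration>
  <property>
    <name>fs.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.Adl</value>
  </property>
</configuration>
```

With such a mapping in place, `FileSystem.get(uri, conf)` for an `adl://` URI resolves to the ADL implementation, so existing Hadoop tools can use the store without webhdfs-specific code paths.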
[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Patch Available (was: Open)
[jira] [Updated] (HADOOP-13037) Azure Data Lake Client: Support Azure data lake as a file system in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-13037?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vishwajeet Dusane updated HADOOP-13037:
---
Status: Open (was: Patch Available)