[jira] [Assigned] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng reassigned HADOOP-14964: -- Assignee: SammiChen (was: Genmao Yu) > AliyunOSS: backport Aliyun OSS module to branch-2 > - > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: SammiChen > Attachments: HADOOP-14964-branch-2.000.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247127#comment-16247127 ] Kai Zheng commented on HADOOP-14964: Thanks Sammi! LGTM, +1. If there are no further comments by tomorrow, I will commit the consolidated patch to branch-2, writing the commit message with the commit history info above. > AliyunOSS: backport Aliyun OSS module to branch-2 > - > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14964-branch-2.000.patch > >
[jira] [Updated] (HADOOP-10768) Optimize Hadoop RPC encryption performance
[ https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dapeng Sun updated HADOOP-10768: Attachment: HADOOP-10768.004.patch > Optimize Hadoop RPC encryption performance > -- > > Key: HADOOP-10768 > URL: https://issues.apache.org/jira/browse/HADOOP-10768 > Project: Hadoop Common > Issue Type: Improvement > Components: performance, security >Affects Versions: 3.0.0-alpha1 >Reporter: Yi Liu >Assignee: Dapeng Sun > Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, > HADOOP-10768.003.patch, HADOOP-10768.004.patch, Optimize Hadoop RPC > encryption performance.pdf > > > Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to > "privacy". It utilizes the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for > secure authentication and data protection. Although {{GSSAPI}} supports AES, > it lacks AES-NI support by default, so encryption is slow and becomes a > bottleneck. > After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the > same optimization as in HDFS-6606: use AES-NI for a more than *20x* speedup. > On the other hand, RPC messages are small but frequent, and there may be many > RPC calls in one connection, so we need to set up a benchmark to measure the > real improvement and then make a trade-off.
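For reference, the setting mentioned above lives in core-site.xml; a minimal fragment (the other valid values for this property are "authentication" and "integrity"):

```xml
<!-- core-site.xml: enable RPC privacy (encryption on the wire) -->
<property>
  <name>hadoop.rpc.protection</name>
  <value>privacy</value>
</property>
```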
[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247087#comment-16247087 ] SammiChen commented on HADOOP-14964: I have double-checked the failed UTs. {{TestDistCacheEmulation}}, {{TestIntegration}} and {{TestDistCpViewFs}} always fail, with or without this patch. All other tests passed locally. None of the failures are relevant to this patch. > AliyunOSS: backport Aliyun OSS module to branch-2 > - > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14964-branch-2.000.patch > >
[jira] [Comment Edited] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247085#comment-16247085 ] SammiChen edited comment on HADOOP-14964 at 11/10/17 6:43 AM: -- Hi Kai, here is the Hadoop 3.0 OSS commit history involved in the patch:
{noformat}
HADOOP-14787. AliyunOSS: Implement the `createNonRecursive` operator.
HADOOP-14649. Update aliyun-sdk-oss version to 2.8.1. (Genmao Yu via rchiang)
HADOOP-14194. Aliyun OSS should not use empty endpoint as default. Contributed by Genmao Yu
HADOOP-14466. Remove useless document from TestAliyunOSSFileSystemContract.java. Contributed by Chen Liang.
HADOOP-14458. Add missing imports to TestAliyunOSSFileSystemContract.java. Contributed by Mingliang Liu.
HADOOP-14192. AliyunOSS FileSystem contract test should implement getTestBaseDir(). Contributed by Mingliang Liu
HADOOP-14072. AliyunOSS: Failed to read from stream when seek beyond the download size. Contributed by Genmao Yu
HADOOP-13769. AliyunOSS: update oss sdk version. Contributed by Genmao Yu
HADOOP-14069. AliyunOSS: listStatus returns wrong file info. Contributed by Fei Hui
HADOOP-13768. AliyunOSS: handle the failure in the batch delete operation `deleteDirs`. Contributed by Genmao Yu
HADOOP-14065. AliyunOSS: oss directory filestatus should use meta time. Contributed by Fei Hui
HADOOP-14045. Aliyun OSS documentation missing from website. Contributed by Yiqun Lin.
HADOOP-13723. AliyunOSSInputStream#read() should update read bytes stat correctly. Contributed by Mingliang Liu
HADOOP-13624. Rename TestAliyunOSSContractDispCp. Contributed by Genmao Yu
HADOOP-13591. Unit test failure in TestOSSContractGetFileStatus and TestOSSContractRootDir. Contributed by Genmao Yu
HADOOP-13481. User documents for Aliyun OSS FileSystem. Contributed by Genmao Yu.
HADOOP-12756. Incorporate Aliyun OSS file system implementation. Contributed by Mingfei Shi and Lin Zhou
{noformat}
> AliyunOSS: backport Aliyun OSS module to branch-2 > - > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14964-branch-2.000.patch > >
[jira] [Commented] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247085#comment-16247085 ] SammiChen commented on HADOOP-14964: Hi Kai, here is the Hadoop 3.0 OSS commit history involved in the patch:
HADOOP-14787. AliyunOSS: Implement the `createNonRecursive` operator.
HADOOP-14649. Update aliyun-sdk-oss version to 2.8.1. (Genmao Yu via rchiang)
HADOOP-14194. Aliyun OSS should not use empty endpoint as default. Contributed by Genmao Yu
HADOOP-14466. Remove useless document from TestAliyunOSSFileSystemContract.java. Contributed by Chen Liang.
HADOOP-14458. Add missing imports to TestAliyunOSSFileSystemContract.java. Contributed by Mingliang Liu.
HADOOP-14192. AliyunOSS FileSystem contract test should implement getTestBaseDir(). Contributed by Mingliang Liu
HADOOP-14072. AliyunOSS: Failed to read from stream when seek beyond the download size. Contributed by Genmao Yu
HADOOP-13769. AliyunOSS: update oss sdk version. Contributed by Genmao Yu
HADOOP-14069. AliyunOSS: listStatus returns wrong file info. Contributed by Fei Hui
HADOOP-13768. AliyunOSS: handle the failure in the batch delete operation `deleteDirs`. Contributed by Genmao Yu
HADOOP-14065. AliyunOSS: oss directory filestatus should use meta time. Contributed by Fei Hui
HADOOP-14045. Aliyun OSS documentation missing from website. Contributed by Yiqun Lin.
HADOOP-13723. AliyunOSSInputStream#read() should update read bytes stat correctly. Contributed by Mingliang Liu
HADOOP-13624. Rename TestAliyunOSSContractDispCp. Contributed by Genmao Yu
HADOOP-13591. Unit test failure in TestOSSContractGetFileStatus and TestOSSContractRootDir. Contributed by Genmao Yu
HADOOP-13481. User documents for Aliyun OSS FileSystem. Contributed by Genmao Yu.
HADOOP-12756. Incorporate Aliyun OSS file system implementation. Contributed by Mingfei Shi and Lin Zhou
> AliyunOSS: backport Aliyun OSS module to branch-2 > - > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14964-branch-2.000.patch > >
[jira] [Created] (HADOOP-15027) Improvements for Hadoop read from AliyunOSS
wujinhu created HADOOP-15027: Summary: Improvements for Hadoop read from AliyunOSS Key: HADOOP-15027 URL: https://issues.apache.org/jira/browse/HADOOP-15027 Project: Hadoop Common Issue Type: Improvement Components: fs/oss Reporter: wujinhu Currently, read performance is poor when Hadoop reads from AliyunOSS: it takes about 1 minute to read 1 GB from OSS. Class AliyunOSSInputStream uses a single thread to read data from AliyunOSS, so we can refactor it to use multi-threaded pre-reads to improve performance.
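The pre-read idea above can be sketched as: issue ranged requests concurrently from a thread pool, then consume the ranges in order. This is an illustrative sketch only, not the actual AliyunOSSInputStream code; `PART_SIZE`, `STORE` and `fetchRange` are invented for the example.

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of multi-threaded pre-read; not AliyunOSSInputStream APIs.
public class PrefetchSketch {
  static final int PART_SIZE = 4; // bytes per ranged request (tiny for the demo)
  static final byte[] STORE = "0123456789abcdef".getBytes(); // stands in for an OSS object

  // Simulated ranged GET against the object store.
  static byte[] fetchRange(long offset, int len) {
    byte[] out = new byte[len];
    System.arraycopy(STORE, (int) offset, out, 0, len);
    return out;
  }

  public static byte[] readAll(int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    List<Future<byte[]>> parts = new ArrayList<>();
    for (long off = 0; off < STORE.length; off += PART_SIZE) {
      final long o = off;
      final int len = (int) Math.min(PART_SIZE, STORE.length - off);
      parts.add(pool.submit(() -> fetchRange(o, len))); // ranges download concurrently
    }
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    for (Future<byte[]> f : parts) {
      buf.write(f.get()); // consume in submission order; later ranges are already in flight
    }
    pool.shutdown();
    return buf.toByteArray();
  }

  public static void main(String[] args) throws Exception {
    System.out.println(new String(readAll(4))); // prints 0123456789abcdef
  }
}
```

The sequential read path stays simple because ordering is restored by consuming the futures in submission order; only the network fetches overlap.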
[jira] [Updated] (HADOOP-14964) AliyunOSS: backport Aliyun OSS module to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-14964: --- Summary: AliyunOSS: backport Aliyun OSS module to branch-2 (was: AliyunOSS: backport HADOOP-12756 to branch-2) > AliyunOSS: backport Aliyun OSS module to branch-2 > - > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14964-branch-2.000.patch > >
[jira] [Commented] (HADOOP-14964) AliyunOSS: backport HADOOP-12756 to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247055#comment-16247055 ] Kai Zheng commented on HADOOP-14964: Hi Sammi, Thanks for your post! Another question: could you list the JIRAs that this consolidated patch contains? I believe it covers more than HADOOP-12756, so I will update the title. > AliyunOSS: backport HADOOP-12756 to branch-2 > > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14964-branch-2.000.patch > >
[jira] [Commented] (HADOOP-14964) AliyunOSS: backport HADOOP-12756 to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-14964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247042#comment-16247042 ] SammiChen commented on HADOOP-14964: Hi [~drankye], all local tests passed. Following is the result,
{quote}
--- T E S T S ---
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Tests run: 11, Failures: 0, Errors: 0, Skipped: 2, Time elapsed: 6.262 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractCreate
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.343 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDelete
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 32.109 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractDistCp
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.648 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractGetFileStatus
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.186 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractMkdir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.678 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractOpen
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.074 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRename
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.794 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractRootDir
Running org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.865 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestAliyunOSSContractSeek
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.591 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunCredentials
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Tests run: 35, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 15.08 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemContract
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 23.455 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSFileSystemStore
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.628 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSInputStream
Running org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.359 sec - in org.apache.hadoop.fs.aliyun.oss.TestAliyunOSSOutputStream
Results : Tests run: 135, Failures: 0, Errors: 0, Skipped: 2
{quote}
> AliyunOSS: backport HADOOP-12756 to branch-2 > > > Key: HADOOP-14964 > URL: https://issues.apache.org/jira/browse/HADOOP-14964 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/oss >Reporter: Genmao Yu >Assignee: Genmao Yu > Attachments: HADOOP-14964-branch-2.000.patch > >
[jira] [Commented] (HADOOP-14960) Add GC time percentage monitor/alerter
[ https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16247032#comment-16247032 ] Hudson commented on HADOOP-14960: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13218 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13218/]) HADOOP-14960. Add GC time percentage monitor/alerter. Contributed by (xiao: rev 3c6adda291745c592741b87cd613214ae11887e4) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetrics.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/source/JvmMetricsInfo.java * (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GcTimeMonitor.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/source/TestJvmMetrics.java > Add GC time percentage monitor/alerter > -- > > Key: HADOOP-14960 > URL: https://issues.apache.org/jira/browse/HADOOP-14960 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Fix For: 3.0.0, 2.10.0 > > Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, > HADOOP-14960.03.patch, HADOOP-14960.04.patch > > > Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides > several metrics related to GC. Unfortunately, all these metrics are not as > useful as they could be, because they don't answer the first and most > important question related to GC and JVM health: what percentage of time my > JVM is paused in GC? 
This percentage, calculated as the sum of the GC pauses > over some period, like 1 minute, divided by that period - is the most > convenient measure of the GC health because: > - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% > is really bad > - it allows for easy apples-to-apples comparison between runs, even between > different apps > - when this metric reaches some critical value like 70%, it almost always > indicates a "GC death spiral", from which the app can recover only if it > drops some task(s) etc. > The existing "total GC time", "total number of GCs" etc. metrics only give > numbers that can be used to roughly estimate this percentage. Thus it is > suggested to add a new metric to this class, and possibly allow users to > register handlers that will be automatically invoked if this metric reaches > the specified threshold.
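The metric described above (GC pause time in a sliding window, divided by the window length) can be sketched as follows. This is an illustrative sketch only, not the GcTimeMonitor class committed in the patch; the class and method names are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a GC-time-percentage tracker over a sliding window.
public class GcPercentageSketch {
  private final long windowMs;
  private final Deque<long[]> pauses = new ArrayDeque<>(); // {pauseEndMs, pauseMs}

  public GcPercentageSketch(long windowMs) {
    this.windowMs = windowMs;
  }

  // Record a GC pause that ended at endMs and lasted pauseMs.
  public void recordPause(long endMs, long pauseMs) {
    pauses.addLast(new long[] {endMs, pauseMs});
  }

  // Percentage of the window [nowMs - windowMs, nowMs] spent paused in GC.
  public double percentage(long nowMs) {
    while (!pauses.isEmpty() && pauses.peekFirst()[0] < nowMs - windowMs) {
      pauses.removeFirst(); // drop pauses that ended before the window
    }
    long pausedMs = 0;
    for (long[] p : pauses) {
      pausedMs += p[1];
    }
    return 100.0 * pausedMs / windowMs;
  }

  public static void main(String[] args) {
    GcPercentageSketch m = new GcPercentageSketch(60000); // 1-minute window
    m.recordPause(10000, 300);
    m.recordPause(40000, 300);
    System.out.println(m.percentage(60000)); // 600 ms paused out of 60 s -> 1.0
  }
}
```

An alerter on top of this is then a threshold check on `percentage()`, invoking a registered handler when it crosses, say, 70%.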
[jira] [Updated] (HADOOP-14960) Add GC time percentage monitor/alerter
[ https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14960: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.10.0 3.0.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-3.0 and branch-2. The last checkstyle was fixed at commit time. Thanks Misha for the contribution! > Add GC time percentage monitor/alerter > -- > > Key: HADOOP-14960 > URL: https://issues.apache.org/jira/browse/HADOOP-14960 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Fix For: 3.0.0, 2.10.0 > > Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, > HADOOP-14960.03.patch, HADOOP-14960.04.patch > > > Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides > several metrics related to GC. Unfortunately, all these metrics are not as > useful as they could be, because they don't answer the first and most > important question related to GC and JVM health: what percentage of time my > JVM is paused in GC? This percentage, calculated as the sum of the GC pauses > over some period, like 1 minute, divided by that period - is the most > convenient measure of the GC health because: > - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% > is really bad > - it allows for easy apples-to-apples comparison between runs, even between > different apps > - when this metric reaches some critical value like 70%, it almost always > indicates a "GC death spiral", from which the app can recover only if it > drops some task(s) etc. > The existing "total GC time", "total number of GCs" etc. metrics only give > numbers that can be used to roughly estimate this percentage. Thus it is > suggested to add a new metric to this class, and possibly allow users to > register handlers that will be automatically invoked if this metric reaches > the specified threshold. 
[jira] [Commented] (HADOOP-14960) Add GC time percentage monitor/alerter
[ https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246756#comment-16246756 ] Hadoop QA commented on HADOOP-14960:
| (/) *{color:green}+1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 29 unchanged - 4 fixed = 30 total (was 33) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 21s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 8s{color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14960 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896943/HADOOP-14960.04.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 4dd747c8dda4 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 1883a00 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13658/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13658/testReport/ |
| Max. process+thread count | 1439 (vs. ulimit of 5000) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13658/console |
| Powered by | Apache Yetus |
[jira] [Commented] (HADOOP-8522) ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used
[ https://issues.apache.org/jira/browse/HADOOP-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246720#comment-16246720 ] Chris Douglas commented on HADOOP-8522: --- bq. One thing that caught my eye was the decision to make the methods of the GZipOutputStream synchronized The supertype ({{GZIPOutputStream}}) synchronizes these methods, so the patch synchronizes consistently with it. > ResetableGzipOutputStream creates invalid gzip files when finish() and > resetState() are used > > > Key: HADOOP-8522 > URL: https://issues.apache.org/jira/browse/HADOOP-8522 > Project: Hadoop Common > Issue Type: Bug > Components: io >Affects Versions: 1.0.3, 2.0.0-alpha >Reporter: Mike Percy >Assignee: Mike Percy > Labels: BB2015-05-TBR > Attachments: HADOOP-8522-4.patch, HADOOP-8522.05.patch, > HADOOP-8522.06.patch, HADOOP-8522.07.patch > > > ResetableGzipOutputStream creates invalid gzip files when finish() and > resetState() are used. The issue is that finish() flushes the compressor > buffer and writes the gzip CRC32 + data length trailer. After that, > resetState() does not repeat the gzip header, but simply starts writing more > deflate-compressed data. The resultant files are not readable by the Linux > "gunzip" tool. ResetableGzipOutputStream should write valid multi-member gzip > files. > The gzip format is specified in [RFC > 1952|https://tools.ietf.org/html/rfc1952].
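A valid multi-member gzip file, as RFC 1952 allows, is simply complete members (header, deflate data, CRC32 + length trailer) concatenated back to back. The sketch below uses the JDK's own GZIP streams to show that shape; it is illustrative only, not the ResetableGzipOutputStream patch itself, though it is the output a fixed finish() + resetState() sequence should produce.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Demonstrates RFC 1952 multi-member gzip using plain JDK streams.
public class MultiMemberGzip {
  // One complete gzip member: header, deflate data, CRC32 + length trailer.
  public static byte[] gzipMember(byte[] data) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
      gz.write(data);
    } // close() finishes the member, writing the trailer
    return bos.toByteArray();
  }

  // GZIPInputStream reads concatenated members transparently.
  public static byte[] gunzipAll(byte[] stream) throws IOException {
    GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(stream));
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    int n;
    while ((n = in.read(buf)) != -1) {
      out.write(buf, 0, n);
    }
    return out.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    ByteArrayOutputStream file = new ByteArrayOutputStream();
    file.write(gzipMember("hello ".getBytes()));
    file.write(gzipMember("world".getBytes())); // second member, new header + trailer
    System.out.println(new String(gunzipAll(file.toByteArray()))); // prints hello world
  }
}
```

The bug described in the issue is equivalent to omitting the second member's header, which is exactly what makes the resulting stream unreadable by `gunzip`.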
[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization
[ https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246681#comment-16246681 ] Bharat Viswanadham commented on HADOOP-9747: Hi [~daryn] Thank you for providing the patch. One comment from me: this patch removes the flag HADOOP_TREAT_SUBJECT_EXTERNAL_KEY, so this configuration needs to be removed from CommonConfigurations.java and also from core-default.xml. Also, could you rebase your patch, as it does not apply cleanly to trunk. > Reduce unnecessary UGI synchronization > -- > > Key: HADOOP-9747 > URL: https://issues.apache.org/jira/browse/HADOOP-9747 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Attachments: HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, > HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch > > > Jstacks of heavily loaded NNs show up to dozens of threads blocking in the > UGI.
[jira] [Resolved] (HADOOP-7410) Mavenize common RPM/DEB
[ https://issues.apache.org/jira/browse/HADOOP-7410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer resolved HADOOP-7410. -- Resolution: Won't Fix > Mavenize common RPM/DEB > --- > > Key: HADOOP-7410 > URL: https://issues.apache.org/jira/browse/HADOOP-7410 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Alejandro Abdelnur >Assignee: Eric Yang > > Mavenize RPM/DEB generation
[jira] [Commented] (HADOOP-14128) ChecksumFs should override rename with overwrite flag
[ https://issues.apache.org/jira/browse/HADOOP-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246631#comment-16246631 ] Hadoop QA commented on HADOOP-14128:
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 9m 32s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 4s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 1s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 42s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 102 unchanged - 0 fixed = 104 total (was 102) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 48s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 54s{color} | {color:black} {color} |
|| Reason || Tests ||
| Failed junit tests | hadoop.ha.TestZKFailoverController |
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14128 |
| GITHUB PR | https://github.com/apache/hadoop/pull/290 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux b632ecd1abab 3.13.0-133-generic #182-Ubuntu SMP Tue Sep 19 15:49:21 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a2c150a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13657/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/13657/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13657/testReport/ |
| Max. process+thread count | 1432 (vs. ulimit of 5000) |
| modules | C:
[jira] [Commented] (HADOOP-14976) Allow overriding HADOOP_SHELL_EXECNAME
[ https://issues.apache.org/jira/browse/HADOOP-14976?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246594#comment-16246594 ] Hadoop QA commented on HADOOP-14976: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 41s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 32s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 1m 20s{color} | {color:green} There were no new shellcheck issues. 
{color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 8s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 57s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 9s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 14s{color} | {color:green} hadoop-yarn in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 34s{color} | {color:green} hadoop-mapreduce-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 85m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-14976 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12896926/HADOOP-14976.04.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs | | uname | Linux 1936a54d67d6 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / a2c150a | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.4.6 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13656/testReport/ | | Max. process+thread count | 297 (vs. ulimit of 5000) | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs hadoop-yarn-project/hadoop-yarn hadoop-mapreduce-project U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13656/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Allow overriding HADOOP_SHELL_EXECNAME > -- > > Key: HADOOP-14976 > URL: https://issues.apache.org/jira/browse/HADOOP-14976 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Attachments: HADOOP-14976.01.patch, HADOOP-14976.02.patch, > HADOOP-14976.03.patch,
[jira] [Commented] (HADOOP-14960) Add GC time percentage monitor/alerter
[ https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246557#comment-16246557 ] Xiao Chen commented on HADOOP-14960: Thanks Misha for the quick turnaround. +1 on patch 04 pending jenkins. > Add GC time percentage monitor/alerter > -- > > Key: HADOOP-14960 > URL: https://issues.apache.org/jira/browse/HADOOP-14960 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, > HADOOP-14960.03.patch, HADOOP-14960.04.patch > > > Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides > several metrics related to GC. Unfortunately, all these metrics are not as > useful as they could be, because they don't answer the first and most > important question related to GC and JVM health: what percentage of time my > JVM is paused in GC? This percentage, calculated as the sum of the GC pauses > over some period, like 1 minute, divided by that period - is the most > convenient measure of the GC health because: > - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% > is really bad > - it allows for easy apples-to-apples comparison between runs, even between > different apps > - when this metric reaches some critical value like 70%, it almost always > indicates a "GC death spiral", from which the app can recover only if it > drops some task(s) etc. > The existing "total GC time", "total number of GCs" etc. metrics only give > numbers that can be used to roughly estimate this percentage. Thus it is > suggested to add a new metric to this class, and possibly allow users to > register handlers that will be automatically invoked if this metric reaches > the specified threshold. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
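The sliding-window percentage described in the issue can be sketched in a few lines. This is an illustrative, standalone sketch of the idea, not the HADOOP-14960 patch itself: the class and method names are hypothetical, and a real caller would feed it timestamps plus the cumulative pause time from GarbageCollectorMXBean.getCollectionTime().

```java
// Illustrative sketch of a sliding-window "GC time percentage" metric.
// Names are hypothetical; this is not the actual HADOOP-14960 patch.
import java.util.ArrayDeque;
import java.util.Deque;

public class GcTimePercentage {
  private static final class Sample {
    final long timestampMs;      // when the sample was taken
    final long cumulativeGcMs;   // total GC pause time observed so far
    Sample(long t, long g) { timestampMs = t; cumulativeGcMs = g; }
  }

  private final long windowMs;
  private final Deque<Sample> samples = new ArrayDeque<>();

  public GcTimePercentage(long windowMs) { this.windowMs = windowMs; }

  /** Record the cumulative GC pause time reported at the given timestamp. */
  public void update(long timestampMs, long cumulativeGcMs) {
    samples.addLast(new Sample(timestampMs, cumulativeGcMs));
    // Evict samples that have fallen out of the observation window.
    while (samples.size() > 1
        && samples.peekFirst().timestampMs < timestampMs - windowMs) {
      samples.removeFirst();
    }
  }

  /** Percentage (0..100) of the recent window spent paused in GC. */
  public int getGcTimePercentage() {
    if (samples.size() < 2) {
      return 0;  // not enough data yet
    }
    Sample oldest = samples.peekFirst();
    Sample newest = samples.peekLast();
    long elapsed = newest.timestampMs - oldest.timestampMs;
    long gc = newest.cumulativeGcMs - oldest.cumulativeGcMs;
    return elapsed <= 0 ? 0 : (int) (100 * gc / elapsed);
  }

  public static void main(String[] args) {
    GcTimePercentage p = new GcTimePercentage(60_000);
    p.update(0, 0);
    p.update(60_000, 6_000);   // 6s of GC pauses over a 60s window
    System.out.println(p.getGcTimePercentage());  // prints 10
  }
}
```

The alerter/handler registration suggested in the issue would then reduce to checking getGcTimePercentage() against a configured threshold after each update.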
[jira] [Commented] (HADOOP-14982) Clients using FailoverOnNetworkExceptionRetry can go into a loop if they're used without authenticating with kerberos in HA env
[ https://issues.apache.org/jira/browse/HADOOP-14982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246503#comment-16246503 ] Robert Kanter commented on HADOOP-14982: LGTM +1 Any other comments [~daryn]? > Clients using FailoverOnNetworkExceptionRetry can go into a loop if they're > used without authenticating with kerberos in HA env > --- > > Key: HADOOP-14982 > URL: https://issues.apache.org/jira/browse/HADOOP-14982 > Project: Hadoop Common > Issue Type: Bug > Components: common >Reporter: Peter Bacsko >Assignee: Peter Bacsko > Attachments: HADOOP-14892-001.patch, HADOOP-14892-002.patch, > HADOOP-14982-003.patch > > > If HA is configured for the Resource Manager in a secure environment, using > the mapred client goes into a loop if the user is not authenticated with > Kerberos. > {noformat} > [root@pb6sec-1 ~]# mapred job -list > 17/10/25 06:37:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over > to rm36 > 17/10/25 06:37:43 WARN ipc.Client: Exception encountered while connecting to > the server : javax.security.sasl.SaslException: GSS initiate failed [Caused > by GSSException: No valid credentials provided (Mechanism level: Failed to > find any Kerberos tgt)] > 17/10/25 06:37:43 INFO retry.RetryInvocationHandler: java.io.IOException: > Failed on local exception: java.io.IOException: > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)]; Host Details : local host is: > "host_redacted/IP_redacted"; destination host is: "com.host2.redacted:8032; , > while invoking ApplicationClientProtocolPBClientImpl.getApplications over > rm36 after 1 failover attempts. Trying to failover after sleeping for 160ms. 
> 17/10/25 06:37:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over > to rm25 > 17/10/25 06:37:43 INFO retry.RetryInvocationHandler: > java.net.ConnectException: Call From host_redacted/IP_redacted to > com.host.redacted:8032 failed on connection exception: > java.net.ConnectException: Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused, while invoking > ApplicationClientProtocolPBClientImpl.getApplications over rm25 after 2 > failover attempts. Trying to failover after sleeping for 582ms. > 17/10/25 06:37:44 INFO client.ConfiguredRMFailoverProxyProvider: Failing over > to rm36 > 17/10/25 06:37:44 WARN ipc.Client: Exception encountered while connecting to > the server : javax.security.sasl.SaslException: GSS initiate failed [Caused > by GSSException: No valid credentials provided (Mechanism level: Failed to > find any Kerberos tgt)] > 17/10/25 06:37:44 INFO retry.RetryInvocationHandler: java.io.IOException: > Failed on local exception: java.io.IOException: > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)]; Host Details : local host is: > "host_redacted/IP_redacted"; destination host is: "com.host2.redacted:8032; , > while invoking ApplicationClientProtocolPBClientImpl.getApplications over > rm36 after 3 failover attempts. Trying to failover after sleeping for 977ms. > 17/10/25 06:37:45 INFO client.ConfiguredRMFailoverProxyProvider: Failing over > to rm25 > 17/10/25 06:37:45 INFO retry.RetryInvocationHandler: > java.net.ConnectException: Call From host_redacted/IP_redacted to > com.host.redacted:8032 failed on connection exception: > java.net.ConnectException: Connection refused; For more details see: > http://wiki.apache.org/hadoop/ConnectionRefused, while invoking > ApplicationClientProtocolPBClientImpl.getApplications over rm25 after 4 > failover attempts. 
Trying to failover after sleeping for 1667ms. > 17/10/25 06:37:46 INFO client.ConfiguredRMFailoverProxyProvider: Failing over > to rm36 > 17/10/25 06:37:46 WARN ipc.Client: Exception encountered while connecting to > the server : javax.security.sasl.SaslException: GSS initiate failed [Caused > by GSSException: No valid credentials provided (Mechanism level: Failed to > find any Kerberos tgt)] > 17/10/25 06:37:46 INFO retry.RetryInvocationHandler: java.io.IOException: > Failed on local exception: java.io.IOException: > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: Failed to find > any Kerberos tgt)]; Host Details : local host is: > "host_redacted/IP_redacted"; destination host is: "com.host2.redacted:8032; , > while invoking ApplicationClientProtocolPBClientImpl.getApplications over > rm36 after 5 failover attempts. Trying to failover after
[jira] [Updated] (HADOOP-14960) Add GC time percentage monitor/alerter
[ https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HADOOP-14960: Attachment: HADOOP-14960.04.patch > Add GC time percentage monitor/alerter > -- > > Key: HADOOP-14960 > URL: https://issues.apache.org/jira/browse/HADOOP-14960 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, > HADOOP-14960.03.patch, HADOOP-14960.04.patch > > > Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides > several metrics related to GC. Unfortunately, all these metrics are not as > useful as they could be, because they don't answer the first and most > important question related to GC and JVM health: what percentage of time my > JVM is paused in GC? This percentage, calculated as the sum of the GC pauses > over some period, like 1 minute, divided by that period - is the most > convenient measure of the GC health because: > - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% > is really bad > - it allows for easy apples-to-apples comparison between runs, even between > different apps > - when this metric reaches some critical value like 70%, it almost always > indicates a "GC death spiral", from which the app can recover only if it > drops some task(s) etc. > The existing "total GC time", "total number of GCs" etc. metrics only give > numbers that can be used to roughly estimate this percentage. Thus it is > suggested to add a new metric to this class, and possibly allow users to > register handlers that will be automatically invoked if this metric reaches > the specified threshold. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14960) Add GC time percentage monitor/alerter
[ https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HADOOP-14960: Status: In Progress (was: Patch Available) > Add GC time percentage monitor/alerter > -- > > Key: HADOOP-14960 > URL: https://issues.apache.org/jira/browse/HADOOP-14960 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, > HADOOP-14960.03.patch, HADOOP-14960.04.patch > > > Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides > several metrics related to GC. Unfortunately, all these metrics are not as > useful as they could be, because they don't answer the first and most > important question related to GC and JVM health: what percentage of time my > JVM is paused in GC? This percentage, calculated as the sum of the GC pauses > over some period, like 1 minute, divided by that period - is the most > convenient measure of the GC health because: > - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% > is really bad > - it allows for easy apples-to-apples comparison between runs, even between > different apps > - when this metric reaches some critical value like 70%, it almost always > indicates a "GC death spiral", from which the app can recover only if it > drops some task(s) etc. > The existing "total GC time", "total number of GCs" etc. metrics only give > numbers that can be used to roughly estimate this percentage. Thus it is > suggested to add a new metric to this class, and possibly allow users to > register handlers that will be automatically invoked if this metric reaches > the specified threshold. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14960) Add GC time percentage monitor/alerter
[ https://issues.apache.org/jira/browse/HADOOP-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Misha Dmitriev updated HADOOP-14960: Status: Patch Available (was: In Progress) Addressed Xiao's latest comments. > Add GC time percentage monitor/alerter > -- > > Key: HADOOP-14960 > URL: https://issues.apache.org/jira/browse/HADOOP-14960 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Misha Dmitriev >Assignee: Misha Dmitriev > Attachments: HADOOP-14960.01.patch, HADOOP-14960.02.patch, > HADOOP-14960.03.patch, HADOOP-14960.04.patch > > > Currently class {{org.apache.hadoop.metrics2.source.JvmMetrics}} provides > several metrics related to GC. Unfortunately, all these metrics are not as > useful as they could be, because they don't answer the first and most > important question related to GC and JVM health: what percentage of time my > JVM is paused in GC? This percentage, calculated as the sum of the GC pauses > over some period, like 1 minute, divided by that period - is the most > convenient measure of the GC health because: > - it is just one number, and it's clear that, say, 1..5% is good, but 80..90% > is really bad > - it allows for easy apples-to-apples comparison between runs, even between > different apps > - when this metric reaches some critical value like 70%, it almost always > indicates a "GC death spiral", from which the app can recover only if it > drops some task(s) etc. > The existing "total GC time", "total number of GCs" etc. metrics only give > numbers that can be used to roughly estimate this percentage. Thus it is > suggested to add a new metric to this class, and possibly allow users to > register handlers that will be automatically invoked if this metric reaches > the specified threshold. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14128) ChecksumFs should override rename with overwrite flag
[ https://issues.apache.org/jira/browse/HADOOP-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246422#comment-16246422 ] Hadoop QA commented on HADOOP-14128: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 6s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 37s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 102 unchanged - 0 fixed = 104 total (was 102) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 51s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 97m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 | | JIRA Issue | HADOOP-14128 | | GITHUB PR | https://github.com/apache/hadoop/pull/290 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5bae131bdb5d 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6c32dda | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/13655/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/13655/testReport/ | | Max. process+thread count | 1333 (vs. ulimit of 5000) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/13655/console | | Powered by | Apache Yetus 0.7.0-SNAPSHOT http://yetus.apache.org |
[jira] [Updated] (HADOOP-14128) ChecksumFs should override rename with overwrite flag
[ https://issues.apache.org/jira/browse/HADOOP-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mathieu Chataigner updated HADOOP-14128: Affects Version/s: 2.8.1 Status: Patch Available (was: Open) > ChecksumFs should override rename with overwrite flag > - > > Key: HADOOP-14128 > URL: https://issues.apache.org/jira/browse/HADOOP-14128 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs >Affects Versions: 2.8.1 >Reporter: Mathieu Chataigner > Attachments: HADOOP-14128.001.patch, HADOOP-14128.002.patch > > > When I call FileContext.rename(src, dst, Options.Rename.OVERWRITE) on a > LocalFs (which extends ChecksumFs), it does not update crc files. > Every subsequent read on moved files will result in failures due to crc > mismatch. > One solution is to override rename(src, dst, overwrite) the same way it's > done with rename(src, dst) and to move the crc files accordingly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
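The fix sketched in the issue description, moving the hidden ".<name>.crc" sidecar together with the data file on an overwrite-aware rename, can be illustrated with plain java.nio outside of Hadoop. This is the pattern, not the actual ChecksumFs patch; the class and helper names are hypothetical:

```java
// Standalone illustration of the ChecksumFs fix: an overwrite-aware rename
// must move the ".<name>.crc" sidecar along with the data file, otherwise
// later checksummed reads fail with a mismatch. Not the actual patch.
import java.io.IOException;
import java.nio.file.CopyOption;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ChecksumAwareRename {
  /** Checksum sidecar path for a data file: dir/.name.crc */
  static Path checksumFile(Path data) {
    return data.resolveSibling("." + data.getFileName() + ".crc");
  }

  /** Rename the data file and, if present, its checksum sidecar. */
  static void rename(Path src, Path dst, boolean overwrite) throws IOException {
    CopyOption[] opts = overwrite
        ? new CopyOption[] { StandardCopyOption.REPLACE_EXISTING }
        : new CopyOption[0];
    Files.move(src, dst, opts);
    Path srcCrc = checksumFile(src);
    if (Files.exists(srcCrc)) {
      // Keep the sidecar in step with the data file.
      Files.move(srcCrc, checksumFile(dst), opts);
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("crc-demo");
    Path src = Files.writeString(dir.resolve("a.txt"), "data");
    Files.writeString(checksumFile(src), "crc-of-data");
    Path dst = dir.resolve("b.txt");
    rename(src, dst, true);
    System.out.println(Files.exists(checksumFile(dst)));  // prints true
  }
}
```

In Hadoop itself the same pattern lands in ChecksumFs.renameInternal(src, dst, overwrite), which the attached patch adds so the three-argument rename path behaves like the two-argument one.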
[jira] [Updated] (HADOOP-14128) ChecksumFs should override rename with overwrite flag
[ https://issues.apache.org/jira/browse/HADOOP-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mathieu Chataigner updated HADOOP-14128: Status: Open (was: Patch Available) > ChecksumFs should override rename with overwrite flag > - > > Key: HADOOP-14128 > URL: https://issues.apache.org/jira/browse/HADOOP-14128 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs >Reporter: Mathieu Chataigner > Attachments: HADOOP-14128.001.patch, HADOOP-14128.002.patch > > > When I call FileContext.rename(src, dst, Options.Rename.OVERWRITE) on a > LocalFs (which extends ChecksumFs), it does not update crc files. > Every subsequent read on moved files will result in failures due to crc > mismatch. > One solution is to override rename(src, dst, overwrite) the same way it's > done with rename(src, dst) and to move the crc files accordingly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14128) ChecksumFs should override rename with overwrite flag
[ https://issues.apache.org/jira/browse/HADOOP-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mathieu Chataigner updated HADOOP-14128: Attachment: HADOOP-14128.002.patch > ChecksumFs should override rename with overwrite flag > - > > Key: HADOOP-14128 > URL: https://issues.apache.org/jira/browse/HADOOP-14128 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs >Reporter: Mathieu Chataigner > Attachments: HADOOP-14128.001.patch, HADOOP-14128.002.patch > > > When I call FileContext.rename(src, dst, Options.Rename.OVERWRITE) on a > LocalFs (which extends ChecksumFs), it does not update crc files. > Every subsequent read on moved files will result in failures due to crc > mismatch. > One solution is to override rename(src, dst, overwrite) the same way it's > done with rename(src, dst) and to move the crc files accordingly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14976) Allow overriding HADOOP_SHELL_EXECNAME
[ https://issues.apache.org/jira/browse/HADOOP-14976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-14976: --- Attachment: HADOOP-14976.04.patch v04 patch - fixes the whitespace issue. > Allow overriding HADOOP_SHELL_EXECNAME > -- > > Key: HADOOP-14976 > URL: https://issues.apache.org/jira/browse/HADOOP-14976 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal > Attachments: HADOOP-14976.01.patch, HADOOP-14976.02.patch, > HADOOP-14976.03.patch, HADOOP-14976.04.patch > > > Some Hadoop shell scripts infer their own name using this bit of shell magic: > {code} > 18 MYNAME="${BASH_SOURCE-$0}" > 19 HADOOP_SHELL_EXECNAME="${MYNAME##*/}" > {code} > e.g. see the > [hdfs|https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs#L18] > script. > The inferred shell script name is later passed to _hadoop-functions.sh_ which > uses it to construct the names of some environment variables. E.g. when > invoking _hdfs datanode_, the options variable name is inferred as follows: > {code} > # HDFS + DATANODE + OPTS -> HDFS_DATANODE_OPTS > {code} > This works well if the calling script name is standard {{hdfs}} or {{yarn}}. > If a distribution renames the script to something like foo.bar, , then the > variable names will be inferred as {{FOO.BAR_DATANODE_OPTS}}. This is not a > valid bash variable name. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
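The variable-name construction that breaks for renamed scripts, and the override point the JIRA proposes, can be sketched in a few lines of shell. This is illustrative only: hadoop-functions.sh derives the name differently in detail, and build_opts_varname is a hypothetical helper, not a real Hadoop function:

```shell
#!/usr/bin/env bash
# Illustrative sketch of HADOOP-14976; not the actual patch.

# build_opts_varname: derive e.g. HDFS_DATANODE_OPTS from "hdfs" + "datanode",
# mimicking the EXECNAME + SUBCOMMAND + OPTS construction described above.
build_opts_varname() {
  local execname="$1" subcmd="$2"
  printf '%s_%s_OPTS\n' "${execname}" "${subcmd}" | tr '[:lower:]' '[:upper:]'
}

# Stock behavior: infer the executable name from the script itself...
MYNAME="${BASH_SOURCE-$0}"
# ...but let a pre-set HADOOP_SHELL_EXECNAME win, which is the proposed
# override. A distribution that ships the script as "foo.bar" can then still
# get a valid bash variable name by exporting HADOOP_SHELL_EXECNAME=hdfs.
HADOOP_SHELL_EXECNAME="${HADOOP_SHELL_EXECNAME:-${MYNAME##*/}}"

build_opts_varname "${HADOOP_SHELL_EXECNAME}" datanode
```

Without the override, a script renamed to foo.bar yields FOO.BAR_DATANODE_OPTS, which contains a dot and is therefore not a valid bash variable name.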
[jira] [Commented] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246250#comment-16246250 ] Hudson commented on HADOOP-15012: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13211 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13211/]) HADOOP-15012. Add readahead, dropbehind, and unbuffer to (jzhuge: rev bf6a660232b01642b07697a289c773ea5b97217c) * (add) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilitiesPolicy.java * (edit) hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StreamCapabilities.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/BlockBlobAppendStream.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSInputStream.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FSDataInputStream.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOutputStream.java > Add readahead, dropbehind, and unbuffer to StreamCapabilities > - > > Key: HADOOP-15012 > URL: https://issues.apache.org/jira/browse/HADOOP-15012 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.9.0 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 3.1.0 > > > A split from HADOOP-14872 to track changes that enhance StreamCapabilities > class with READAHEAD, DROPBEHIND, and UNBUFFER capability. > Discussions and code reviews are done in HADOOP-14872. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14128) ChecksumFs should override rename with overwrite flag
[ https://issues.apache.org/jira/browse/HADOOP-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246215#comment-16246215 ] ASF GitHub Bot commented on HADOOP-14128: - GitHub user mchataigner opened a pull request: https://github.com/apache/hadoop/pull/290 HADOOP-14128. fix renameInternal in ChecksumFs AbstractFs.rename(source, destination, options) calls renameInternal(source, destination, overwrite) This patch adds this method to ChecksumFs to rename the crc file in addition to the file itself to avoid crc mismatch when used, for example, in LocalFs. You can merge this pull request into a Git repository by running: $ git pull https://github.com/mchataigner/hadoop fix_checksumfs Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hadoop/pull/290.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #290 commit b31cd6882ca89b18bb229ef108022026d284c70a Author: Mathieu Chataigner Date: 2017-11-09T18:23:48Z HADOOP-14128. fix renameInternal in ChecksumFs AbstractFs.rename(source, destination, options) calls renameInternal(source, destination, overwrite) This patch adds this method to ChecksumFs to rename the crc file in addition to the file itself to avoid crc mismatch when used, for example, in LocalFs. > ChecksumFs should override rename with overwrite flag > - > > Key: HADOOP-14128 > URL: https://issues.apache.org/jira/browse/HADOOP-14128 > Project: Hadoop Common > Issue Type: Bug > Components: common, fs >Reporter: Mathieu Chataigner > Attachments: HADOOP-14128.001.patch > > > When I call FileContext.rename(src, dst, Options.Rename.OVERWRITE) on a > LocalFs (which extends ChecksumFs), it does not update crc files. > Every subsequent read on moved files will result in failures due to crc > mismatch. 
> One solution is to override rename(src, dst, overwrite) the same way it's > done with rename(src, dst) and move the crc files accordingly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
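The idea of the fix can be sketched with a toy in-memory filesystem; this is not Hadoop's actual ChecksumFs code, and the helper names and string-based paths are simplifications to keep the example self-contained:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: when renaming a checksummed file (with or without overwrite),
// rename its hidden ".<name>.crc" sibling too, so later reads do not hit
// a checksum mismatch. Assumes absolute paths containing '/'.
public class ChecksumRenameSketch {
    // toy "filesystem": path -> contents
    final Map<String, String> files = new HashMap<>();

    // ChecksumFs convention: /dir/name -> /dir/.name.crc
    static String crcPath(String path) {
        int slash = path.lastIndexOf('/');
        return path.substring(0, slash + 1) + "." + path.substring(slash + 1) + ".crc";
    }

    // rename with overwrite flag, keeping the crc file in sync
    void rename(String src, String dst, boolean overwrite) {
        if (files.containsKey(dst) && !overwrite) {
            throw new IllegalStateException("destination exists: " + dst);
        }
        files.put(dst, files.remove(src));
        // the actual fix: move the checksum file alongside the data file
        String srcCrc = crcPath(src);
        if (files.containsKey(srcCrc)) {
            files.put(crcPath(dst), files.remove(srcCrc));
        }
    }

    public static void main(String[] args) {
        ChecksumRenameSketch fs = new ChecksumRenameSketch();
        fs.files.put("/tmp/a", "data");
        fs.files.put("/tmp/.a.crc", "crc");
        fs.rename("/tmp/a", "/tmp/b", true);
        System.out.println(fs.files.containsKey("/tmp/.b.crc")); // true
    }
}
```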
[jira] [Assigned] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reassigned HADOOP-15012: --- Assignee: John Zhuge > Add readahead, dropbehind, and unbuffer to StreamCapabilities > - > > Key: HADOOP-15012 > URL: https://issues.apache.org/jira/browse/HADOOP-15012 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.9.0 >Reporter: John Zhuge >Assignee: John Zhuge > > A split from HADOOP-14872 to track changes that enhance StreamCapabilities > class with READAHEAD, DROPBEHIND, and UNBUFFER capability. > Discussions and code reviews are done in HADOOP-14872. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer
[ https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246225#comment-16246225 ] Andrew Wang commented on HADOOP-13363: -- Nothing has really changed in terms of the ecosystem perspective. There are other apps besides HBase that transitively acquire our PB dependency. No apps have moved over to the shaded client in Hadoop 3.0 yet. The hadoop-hdfs dependency remains problematic. Given our traumatic experience between 2.4 and 2.5, I have no reason to believe that an upgrade to 2.6 is any better than going to 3.x. > Upgrade protobuf from 2.5.0 to something newer > -- > > Key: HADOOP-13363 > URL: https://issues.apache.org/jira/browse/HADOOP-13363 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Allen Wittenauer >Assignee: Tsuyoshi Ozawa > Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, > HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch > > > Standard protobuf 2.5.0 does not work properly on many platforms. (See, for > example, https://gist.github.com/BennettSmith/7111094 ). In order for us to > avoid crazy work arounds in the build environment and the fact that 2.5.0 is > starting to slowly disappear as a standard install-able package for even > Linux/x86, we need to either upgrade or self bundle or something else. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-15012: Target Version/s: 3.1.0 (was: 2.10.0) > Add readahead, dropbehind, and unbuffer to StreamCapabilities > - > > Key: HADOOP-15012 > URL: https://issues.apache.org/jira/browse/HADOOP-15012 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.9.0 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 3.1.0 > > > A split from HADOOP-14872 to track changes that enhance StreamCapabilities > class with READAHEAD, DROPBEHIND, and UNBUFFER capability. > Discussions and code reviews are done in HADOOP-14872. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-15012) Add readahead, dropbehind, and unbuffer to StreamCapabilities
[ https://issues.apache.org/jira/browse/HADOOP-15012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge resolved HADOOP-15012. - Resolution: Fixed Fix Version/s: 3.1.0 Committed to trunk together with HADOOP-14872. Code review was done there. {noformat} 6c32ddad302 HADOOP-14872. CryptoInputStream should implement unbuffer. Contributed by John Zhuge. bf6a660232b HADOOP-15012. Add readahead, dropbehind, and unbuffer to StreamCapabilities. Contributed by John Zhuge. {noformat} > Add readahead, dropbehind, and unbuffer to StreamCapabilities > - > > Key: HADOOP-15012 > URL: https://issues.apache.org/jira/browse/HADOOP-15012 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.9.0 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 3.1.0 > > > A split from HADOOP-14872 to track changes that enhance StreamCapabilities > class with READAHEAD, DROPBEHIND, and UNBUFFER capability. > Discussions and code reviews are done in HADOOP-14872. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14872: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Committed to trunk. Thanks [~xiaochen] and [~steve_l] for the great reviews! > CryptoInputStream should implement unbuffer > --- > > Key: HADOOP-14872 > URL: https://issues.apache.org/jira/browse/HADOOP-14872 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, security >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 3.1.0 > > Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, > HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch, > HADOOP-14872.006.patch, HADOOP-14872.007.patch, HADOOP-14872.008.patch, > HADOOP-14872.009.patch, HADOOP-14872.010.patch, HADOOP-14872.011.patch, > HADOOP-14872.012.patch, HADOOP-14872.013.patch > > > Discovered in IMPALA-5909. > Opening an encrypted HDFS file returns a chain of wrapped input streams: > {noformat} > HdfsDataInputStream > CryptoInputStream > DFSInputStream > {noformat} > If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, > FSDataInputStream#unbuffer will be called: > {code:java} > try { > ((CanUnbuffer)in).unbuffer(); > } catch (ClassCastException e) { > throw new UnsupportedOperationException("this stream does not " + > "support unbuffering."); > } > {code} > If the {{in}} class does not implement CanUnbuffer, UOE will be thrown. If > the application is not careful, tons of UOEs will show up in logs. > In comparison, opening an non-encrypted HDFS file returns this chain: > {noformat} > HdfsDataInputStream > DFSInputStream > {noformat} > DFSInputStream implements CanUnbuffer. 
> It is good for CryptoInputStream to implement CanUnbuffer for 2 reasons: > * Release buffer, cache, or any other resource when instructed > * Able to call its wrapped DFSInputStream unbuffer > * Avoid the UOE described above. Applications may not handle the UOE very > well. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
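The delegation the description argues for can be sketched with simplified stand-ins for Hadoop's types; the interface and class names below mirror, but are not, the real Hadoop classes:

```java
// Illustrates why the wrapper stream should implement CanUnbuffer: release
// its own resources, then propagate unbuffer() to the wrapped stream,
// instead of letting the caller hit an UnsupportedOperationException.
public class UnbufferSketch {
    interface CanUnbuffer { void unbuffer(); }

    // plays the role of DFSInputStream, which already implements CanUnbuffer
    static class InnerStream implements CanUnbuffer {
        boolean buffered = true;
        public void unbuffer() { buffered = false; } // drop cached data
    }

    // plays the role of CryptoInputStream after the fix
    static class CryptoWrapper implements CanUnbuffer {
        final InnerStream in;
        boolean decryptBufferHeld = true;
        CryptoWrapper(InnerStream in) { this.in = in; }
        public void unbuffer() {
            decryptBufferHeld = false; // release the wrapper's own buffers
            in.unbuffer();             // then delegate to the wrapped stream
        }
    }

    // mimics the FSDataInputStream#unbuffer type check quoted above
    static void callUnbuffer(Object in) {
        if (in instanceof CanUnbuffer) {
            ((CanUnbuffer) in).unbuffer();
        } else {
            throw new UnsupportedOperationException(
                "this stream does not support unbuffering.");
        }
    }
}
```

Because the wrapper now satisfies the type check, the UOE path is never taken and both layers drop their buffers.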
[jira] [Commented] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246199#comment-16246199 ] John Zhuge commented on HADOOP-14872: - Ran "test-patch" locally and got all +1s except these expected javac deprecation warnings. Committing. {noformat} [WARNING] /home/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java:[42,46] [deprecation] StreamCapability in StreamCapabilities has been deprecated [WARNING] /home/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java:[251,30] [deprecation] StreamCapability in StreamCapabilities has been deprecated [WARNING] /home/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/balancer/NameNodeConnector.java:[252,33] [deprecation] StreamCapability in StreamCapabilities has been deprecated [WARNING] /home/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java:[32,46] [deprecation] StreamCapability in StreamCapabilities has been deprecated [WARNING] /home/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java:[40,46] [deprecation] StreamCapability in StreamCapabilities has been deprecated [WARNING] /home/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java:[201,16] [deprecation] StreamCapability in StreamCapabilities has been deprecated [WARNING] /home/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStream.java:[204,16] [deprecation] StreamCapability in StreamCapabilities has been deprecated [WARNING] /home/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java:[362,25] [deprecation] StreamCapability in StreamCapabilities has 
been deprecated [WARNING] /home/jzhuge/hadoop/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSOutputStream.java:[364,25] [deprecation] StreamCapability in StreamCapabilities has been deprecated {noformat} > CryptoInputStream should implement unbuffer > --- > > Key: HADOOP-14872 > URL: https://issues.apache.org/jira/browse/HADOOP-14872 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, security >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, > HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch, > HADOOP-14872.006.patch, HADOOP-14872.007.patch, HADOOP-14872.008.patch, > HADOOP-14872.009.patch, HADOOP-14872.010.patch, HADOOP-14872.011.patch, > HADOOP-14872.012.patch, HADOOP-14872.013.patch > > > Discovered in IMPALA-5909. > Opening an encrypted HDFS file returns a chain of wrapped input streams: > {noformat} > HdfsDataInputStream > CryptoInputStream > DFSInputStream > {noformat} > If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, > FSDataInputStream#unbuffer will be called: > {code:java} > try { > ((CanUnbuffer)in).unbuffer(); > } catch (ClassCastException e) { > throw new UnsupportedOperationException("this stream does not " + > "support unbuffering."); > } > {code} > If the {{in}} class does not implement CanUnbuffer, UOE will be thrown. If > the application is not careful, tons of UOEs will show up in logs. > In comparison, opening an non-encrypted HDFS file returns this chain: > {noformat} > HdfsDataInputStream > DFSInputStream > {noformat} > DFSInputStream implements CanUnbuffer. > It is good for CryptoInputStream to implement CanUnbuffer for 2 reasons: > * Release buffer, cache, or any other resource when instructed > * Able to call its wrapped DFSInputStream unbuffer > * Avoid the UOE described above. Applications may not handle the UOE very > well. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer
[ https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16246154#comment-16246154 ] Dmitry Chuyko commented on HADOOP-13363: The current usage scheme leads to a small tricky problem. Consider available versions: apt-cache showpkg protobuf-compiler '2.6.1-1.3' only on Ubuntu 16.04 (LTS) aarch64 '3.0.0-9ubuntu5' only on Ubuntu 17.10 amd64 Probably both versions will work when manually installed. Also, if I understand it right, the build-system Docker image will need an update. Not sure about other distros and release schedules, but it looks right to stick to the current version from some LTS distro or the next such version. For next LTS Ubuntu it is currently also 3.0.0: '3.0.0-9ubuntu5' https://packages.ubuntu.com/search?keywords=protobuf-compiler > Upgrade protobuf from 2.5.0 to something newer > -- > > Key: HADOOP-13363 > URL: https://issues.apache.org/jira/browse/HADOOP-13363 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Allen Wittenauer >Assignee: Tsuyoshi Ozawa > Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, > HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch > > > Standard protobuf 2.5.0 does not work properly on many platforms. (See, for > example, https://gist.github.com/BennettSmith/7111094 ). In order for us to > avoid crazy work arounds in the build environment and the fact that 2.5.0 is > starting to slowly disappear as a standard install-able package for even > Linux/x86, we need to either upgrade or self bundle or something else. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-8555) Incorrect Kerberos configuration
[ https://issues.apache.org/jira/browse/HADOOP-8555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andras Bokor resolved HADOOP-8555. -- Resolution: Invalid This part of the code has been completely changed; this is no longer a valid issue. > Incorrect Kerberos configuration > > > Key: HADOOP-8555 > URL: https://issues.apache.org/jira/browse/HADOOP-8555 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.0.0-alpha, 3.0.0-alpha1 >Reporter: Laxman > Labels: kerberos, security > > When a keytab is given, the ticket cache should not be considered. > The following configuration tries to use the ticket cache even when a keytab is > configured; we need not configure the ticket cache here. > org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.KerberosConfiguration.getAppConfigurationEntry(String) > {code} > options.put("keyTab", keytab); > options.put("principal", principal); > options.put("useKeyTab", "true"); > options.put("storeKey", "true"); > options.put("doNotPrompt", "true"); > options.put("useTicketCache", "true"); > options.put("renewTGT", "true"); > options.put("refreshKrb5Config", "true"); > options.put("isInitiator", "false"); > String ticketCache = System.getenv("KRB5CCNAME"); > if (ticketCache != null) { > options.put("ticketCache", ticketCache); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
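A minimal sketch of the suggested fix, assuming the goal is simply to skip the ticket-cache options whenever a keytab is configured. This builds a plain option map rather than the real JAAS `AppConfigurationEntry` plumbing, and the method signature is mine:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch (not the actual Hadoop code): only fall back to the ticket cache
// when no keytab is configured, so keytab logins never consult KRB5CCNAME.
public class KrbOptionsSketch {
    static Map<String, String> buildOptions(String principal, String keytab,
                                            String ticketCache) {
        Map<String, String> options = new HashMap<>();
        options.put("principal", principal);
        if (keytab != null) {
            // keytab login: the ticket cache should not be considered
            options.put("keyTab", keytab);
            options.put("useKeyTab", "true");
            options.put("storeKey", "true");
            options.put("useTicketCache", "false");
        } else {
            // no keytab: fall back to the ticket cache (e.g. from KRB5CCNAME)
            options.put("useTicketCache", "true");
            if (ticketCache != null) {
                options.put("ticketCache", ticketCache);
            }
        }
        return options;
    }
}
```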
[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer
[ https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245733#comment-16245733 ] Tsuyoshi Ozawa commented on HADOOP-13363: - I think now we can upgrade protobuf more safely, especially in Hadoop 3.0, because: 1. HBase now uses its own protobuf, 2. Hadoop 3.0 obligates the ecosystem to use the shaded client. What do you think? I am personally okay with upgrading the version step by step, meaning upgrading this to 2.6.x instead of 3.x, if we do this carefully. > Upgrade protobuf from 2.5.0 to something newer > -- > > Key: HADOOP-13363 > URL: https://issues.apache.org/jira/browse/HADOOP-13363 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Allen Wittenauer >Assignee: Tsuyoshi Ozawa > Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, > HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch > > > Standard protobuf 2.5.0 does not work properly on many platforms. (See, for > example, https://gist.github.com/BennettSmith/7111094 ). In order for us to > avoid crazy work arounds in the build environment and the fact that 2.5.0 is > starting to slowly disappear as a standard install-able package for even > Linux/x86, we need to either upgrade or self bundle or something else. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14160) Create dev-support scripts to do the bulk jira update required by the release process
[ https://issues.apache.org/jira/browse/HADOOP-14160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton resolved HADOOP-14160. --- Resolution: Won't Fix > Create dev-support scripts to do the bulk jira update required by the release > process > - > > Key: HADOOP-14160 > URL: https://issues.apache.org/jira/browse/HADOOP-14160 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Elek, Marton >Assignee: Elek, Marton > > According to the conversation on the dev mailing list, one pain point of > release making is the JIRA administration. > This issue is about creating new scripts to > > * query Apache JIRA about a possible release (remaining blocking issues, > etc.) > * and do bulk changes (e.g. bump fixVersions) -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-14162) Improve release scripts to automate missing steps
[ https://issues.apache.org/jira/browse/HADOOP-14162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Elek, Marton resolved HADOOP-14162. --- Resolution: Won't Fix > Improve release scripts to automate missing steps > - > > Key: HADOOP-14162 > URL: https://issues.apache.org/jira/browse/HADOOP-14162 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Elek, Marton >Assignee: Elek, Marton > > According to the conversation on the dev mailing list, one pain point of > release making is that even with the latest create-release script a lot of > steps are not automated. > This Jira is about creating a script which guides the release manager through > the process: > Goals: > * It would work even without the Apache infrastructure: with custom > configuration (forked repositories/alternative nexus), it would be possible > to test the scripts even by a non-committer. > * Every step which could be automated should be scripted (create git > branches, build,...). If a step cannot be automated, an explanation should > be printed out and the script should wait for confirmation. > * Before dangerous steps (e.g. bulk jira update) we can ask for confirmation > and explain the > * The run should be idempotent (and there should be an option to continue > the release from any step). -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13363) Upgrade protobuf from 2.5.0 to something newer
[ https://issues.apache.org/jira/browse/HADOOP-13363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245595#comment-16245595 ] Dmitry Chuyko commented on HADOOP-13363: [~andrew.wang] On some platforms version 2.5.0 is not even buildable, e.g. on AArch64, while version 2.6.1, for example, is available in Linux distribution repositories and seems to be OK. > Upgrade protobuf from 2.5.0 to something newer > -- > > Key: HADOOP-13363 > URL: https://issues.apache.org/jira/browse/HADOOP-13363 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-alpha1, 3.0.0-alpha2 >Reporter: Allen Wittenauer >Assignee: Tsuyoshi Ozawa > Attachments: HADOOP-13363.001.patch, HADOOP-13363.002.patch, > HADOOP-13363.003.patch, HADOOP-13363.004.patch, HADOOP-13363.005.patch > > > Standard protobuf 2.5.0 does not work properly on many platforms. (See, for > example, https://gist.github.com/BennettSmith/7111094 ). In order for us to > avoid crazy work arounds in the build environment and the fact that 2.5.0 is > starting to slowly disappear as a standard install-able package for even > Linux/x86, we need to either upgrade or self bundle or something else. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15003) Merge S3A committers into trunk: Yetus patch checker
[ https://issues.apache.org/jira/browse/HADOOP-15003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16245568#comment-16245568 ] Steve Loughran commented on HADOOP-15003: - Ryan: there's always been an option to opt out of success markers; {{mapreduce.fileoutputcommitter.marksuccessfuljobs}}, default = true. The intermediate map jobs never create them, after all. I do like having the marker there though, as the JSON file is great for testing. Lists the committer used, storage stats off the dest FS, and the list of files created. Makes it trivial to assert that the right committer was used (length == 0 => FileOutputCommitter), and that the right # of files were created. I'll not worry about deletion for now though, *and will document the property name* > Merge S3A committers into trunk: Yetus patch checker > > > Key: HADOOP-15003 > URL: https://issues.apache.org/jira/browse/HADOOP-15003 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.0.0 >Reporter: Steve Loughran >Assignee: Steve Loughran > Attachments: HADOOP-13786-041.patch, HADOOP-13786-042.patch, > HADOOP-13786-043.patch, HADOOP-13786-044.patch, HADOOP-13786-045.patch, > HADOOP-13786-046.patch > > > This is a Yetus only JIRA created to have Yetus review the > HADOOP-13786/HADOOP-14971 patch as a .patch file, as the review PR > [https://github.com/apache/hadoop/pull/282] is stopping this happening in > HADOOP-14971. > Reviews should go into the PR/other task -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
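The length-based check described in the comment above can be sketched as follows. This is a hypothetical helper, not project code: in a real test the length would come from something like FileSystem.getFileStatus(path).getLen(), and the returned strings are illustrative labels only:

```java
// An empty _SUCCESS marker implies the classic FileOutputCommitter wrote it,
// while the S3A committers write a JSON manifest (committer name, storage
// stats, file list) into it, making the marker non-empty.
public class SuccessMarkerCheck {
    static String committerKind(long successMarkerLength) {
        return successMarkerLength == 0
            ? "FileOutputCommitter"
            : "S3A committer (JSON manifest)";
    }
}
```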
[jira] [Updated] (HADOOP-15026) Rebase ResourceEstimator start/stop scripts for branch-2
[ https://issues.apache.org/jira/browse/HADOOP-15026?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Subru Krishnan updated HADOOP-15026: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.9.0 Status: Resolved (was: Patch Available) Thanks [~Rui Li] for the contribution, I have committed this to branch-2/2.9/2.9.0. > Rebase ResourceEstimator start/stop scripts for branch-2 > > > Key: HADOOP-15026 > URL: https://issues.apache.org/jira/browse/HADOOP-15026 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0 >Reporter: Subru Krishnan >Assignee: Rui Li > Fix For: 2.9.0 > > Attachments: HADOOP-15026-branch-2-v1.patch, > HADOOP-15026-branch-2-v2.patch, HADOOP-15026-branch-2-v3.patch > > > HADOOP-14840 introduced the {{ResourceEstimatorService}} which was > cherry-picked from trunk to branch-2. The start/stop scripts need minor > alignment with branch-2. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14872: Status: Open (was: Patch Available) > CryptoInputStream should implement unbuffer > --- > > Key: HADOOP-14872 > URL: https://issues.apache.org/jira/browse/HADOOP-14872 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, security >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, > HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch, > HADOOP-14872.006.patch, HADOOP-14872.007.patch, HADOOP-14872.008.patch, > HADOOP-14872.009.patch, HADOOP-14872.010.patch, HADOOP-14872.011.patch, > HADOOP-14872.012.patch, HADOOP-14872.013.patch > > > Discovered in IMPALA-5909. > Opening an encrypted HDFS file returns a chain of wrapped input streams: > {noformat} > HdfsDataInputStream > CryptoInputStream > DFSInputStream > {noformat} > If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, > FSDataInputStream#unbuffer will be called: > {code:java} > try { > ((CanUnbuffer)in).unbuffer(); > } catch (ClassCastException e) { > throw new UnsupportedOperationException("this stream does not " + > "support unbuffering."); > } > {code} > If the {{in}} class does not implement CanUnbuffer, UOE will be thrown. If > the application is not careful, tons of UOEs will show up in logs. > In comparison, opening an non-encrypted HDFS file returns this chain: > {noformat} > HdfsDataInputStream > DFSInputStream > {noformat} > DFSInputStream implements CanUnbuffer. > It is good for CryptoInputStream to implement CanUnbuffer for 2 reasons: > * Release buffer, cache, or any other resource when instructed > * Able to call its wrapped DFSInputStream unbuffer > * Avoid the UOE described above. Applications may not handle the UOE very > well. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14872) CryptoInputStream should implement unbuffer
[ https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-14872: Status: Patch Available (was: Open) > CryptoInputStream should implement unbuffer > --- > > Key: HADOOP-14872 > URL: https://issues.apache.org/jira/browse/HADOOP-14872 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, security >Affects Versions: 2.6.4 >Reporter: John Zhuge >Assignee: John Zhuge > Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, > HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch, > HADOOP-14872.006.patch, HADOOP-14872.007.patch, HADOOP-14872.008.patch, > HADOOP-14872.009.patch, HADOOP-14872.010.patch, HADOOP-14872.011.patch, > HADOOP-14872.012.patch, HADOOP-14872.013.patch > > > Discovered in IMPALA-5909. > Opening an encrypted HDFS file returns a chain of wrapped input streams: > {noformat} > HdfsDataInputStream > CryptoInputStream > DFSInputStream > {noformat} > If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, > FSDataInputStream#unbuffer will be called: > {code:java} > try { > ((CanUnbuffer)in).unbuffer(); > } catch (ClassCastException e) { > throw new UnsupportedOperationException("this stream does not " + > "support unbuffering."); > } > {code} > If the {{in}} class does not implement CanUnbuffer, UOE will be thrown. If > the application is not careful, tons of UOEs will show up in logs. > In comparison, opening an non-encrypted HDFS file returns this chain: > {noformat} > HdfsDataInputStream > DFSInputStream > {noformat} > DFSInputStream implements CanUnbuffer. > It is good for CryptoInputStream to implement CanUnbuffer for 2 reasons: > * Release buffer, cache, or any other resource when instructed > * Able to call its wrapped DFSInputStream unbuffer > * Avoid the UOE described above. Applications may not handle the UOE very > well. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org