[jira] [Resolved] (HADOOP-8640) DU thread transient failures propagate to callers

2018-05-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang resolved HADOOP-8640.
-
Resolution: Won't Fix

Given that the refactor in HADOOP-12973 unintentionally eliminated this problem 
in 2.8.0 and above, I'll mark this as Won't Fix.

> DU thread transient failures propagate to callers
> -
>
> Key: HADOOP-8640
> URL: https://issues.apache.org/jira/browse/HADOOP-8640
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, io
>Affects Versions: 2.0.0-alpha, 1.2.1
>Reporter: Todd Lipcon
>Priority: Major
>
> When running some stress tests, I saw a failure where the DURefreshThread 
> failed due to the filesystem changing underneath it:
> {code}
> org.apache.hadoop.util.Shell$ExitCodeException: du: cannot access 
> `/data/4/dfs/dn/current/BP-1928785663-172.20.90.20-1343880685858/current/rbw/blk_4637779214690837894':
>  No such file or directory
> {code}
> (the block was probably finalized while the du process was running, which 
> caused it to fail)
> The next block write then called {{getUsed()}}, and the propagated exception 
> caused the write to fail. Since it was a pseudo-distributed cluster, the 
> client was unable to pick a different node to write to, and the write failed.
> The current behavior of propagating the exception to the next (and only the 
> next) caller doesn't seem well thought out.
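The behavior the reporter is asking for can be sketched as follows. This is a hypothetical illustration, not the actual Hadoop classes: a background refresh swallows transient probe failures (the probe is assumed to shell out to `du` in the real code) so that callers keep seeing the last good value instead of inheriting the exception.

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicLong;

/** Hypothetical sketch: disk-usage probe whose transient failures never reach callers. */
public class CachedDiskUsage {
    /** Assumed to shell out to `du` in the real code. */
    interface Probe { long probe() throws IOException; }

    private final Probe probe;
    private final AtomicLong lastUsed = new AtomicLong(0);

    CachedDiskUsage(Probe probe) { this.probe = probe; }

    /** Called periodically by a refresh thread; a transient failure keeps the old value. */
    void refresh() {
        try {
            lastUsed.set(probe.probe());
        } catch (IOException e) {
            // e.g. "du: cannot access ...: No such file or directory" when a
            // block is finalized mid-scan: keep serving the stale value rather
            // than propagating the error to the next getUsed() caller.
        }
    }

    /** Callers never observe a transient probe failure. */
    long getUsed() { return lastUsed.get(); }

    public static void main(String[] args) {
        final long[] next = {100, -1}; // second probe fails
        final int[] i = {0};
        CachedDiskUsage du = new CachedDiskUsage(() -> {
            long v = next[i[0]++];
            if (v < 0) throw new IOException("du: cannot access: No such file or directory");
            return v;
        });
        du.refresh(); // succeeds: cached value is 100
        du.refresh(); // fails transiently: cached value retained
        System.out.println(du.getUsed()); // prints 100
    }
}
```

A caller of {{getUsed()}} here sees a possibly slightly stale number, which is an acceptable trade-off for a usage estimate.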



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472906#comment-16472906
 ] 

Wei-Chiu Chuang edited comment on HADOOP-10768 at 5/12/18 5:10 AM:
---

Thanks [~yuzhih...@gmail.com] for the pointer. I spent the whole night looking 
at the HBase Master log but couldn't figure out why. Then I looked at the RS 
log and immediately realized that the class name change had caused a reflection 
error. Will rebuild & retest.


was (Author: jojochuang):
Thanks [~yuzhih...@gmail.com], that was really useful. I spent the whole night 
looking at the HBase Master log but couldn't figure out why. Then I looked at 
the RS log and immediately realized that the class name change had caused a 
reflection error.

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It uses the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Although {{GSSAPI}} supports AES, 
> it does not use AES-NI by default, so the encryption is slow and becomes a 
> bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can apply 
> the same optimization as in HDFS-6606: use AES-NI for a more than *20x* 
> speedup.
> On the other hand, RPC messages are small but frequent, and there may be many 
> RPC calls in one connection, so we need to set up a benchmark to measure the 
> real improvement and then make a trade-off.
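For illustration, the symmetric step that AES-NI would accelerate can be sketched with the standard JCE APIs. This is illustrative only, not the HADOOP-10768 patch: it just wraps a payload with AES/CTR, the kind of cipher work that hardware AES instructions speed up relative to DIGEST-MD5 privacy protection.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;

/** Illustrative AES/CTR round trip over an RPC-like payload using plain JCE. */
public class AesCtrRoundTrip {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        byte[] payload = "rpc-call-bytes".getBytes(StandardCharsets.UTF_8);

        // Encrypt on the sender side; this is where AES-NI (when the JVM and
        // CPU support it) makes the per-byte cost small.
        Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
        enc.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] wire = enc.doFinal(payload);

        // Decrypt on the receiver side with the same key and IV.
        Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
        dec.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] back = dec.doFinal(wire);

        System.out.println(Arrays.equals(payload, back)); // prints true
    }
}
```

The benchmarking concern in the description still applies: with small, frequent RPC messages, per-call overhead (key negotiation, buffer handling) can dominate the raw cipher speed, which is why a measurement is needed before settling on a trade-off.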






[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472906#comment-16472906
 ] 

Wei-Chiu Chuang commented on HADOOP-10768:
--

Thanks [~yuzhih...@gmail.com], that was really useful. I spent the whole night 
looking at the HBase Master log but couldn't figure out why. Then I looked at 
the RS log and immediately realized that the class name change had caused a 
reflection error.

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It uses the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Although {{GSSAPI}} supports AES, 
> it does not use AES-NI by default, so the encryption is slow and becomes a 
> bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can apply 
> the same optimization as in HDFS-6606: use AES-NI for a more than *20x* 
> speedup.
> On the other hand, RPC messages are small but frequent, and there may be many 
> RPC calls in one connection, so we need to set up a benchmark to measure the 
> real improvement and then make a trade-off.






[jira] [Comment Edited] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472878#comment-16472878
 ] 

Ted Yu edited comment on HADOOP-10768 at 5/12/18 2:29 AM:
--

[~jojochuang]:
If you look in the HBase master log, there should be a clue as to why the 
master couldn't finish initialization.

Cheers


was (Author: yuzhih...@gmail.com):
[~jojochuang]:
If you look in the master log, there should be a clue as to why the master 
couldn't finish initialization.

Cheers

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It uses the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Although {{GSSAPI}} supports AES, 
> it does not use AES-NI by default, so the encryption is slow and becomes a 
> bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can apply 
> the same optimization as in HDFS-6606: use AES-NI for a more than *20x* 
> speedup.
> On the other hand, RPC messages are small but frequent, and there may be many 
> RPC calls in one connection, so we need to set up a benchmark to measure the 
> real improvement and then make a trade-off.






[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-11 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472878#comment-16472878
 ] 

Ted Yu commented on HADOOP-10768:
-

[~jojochuang]:
If you look in the master log, there should be a clue as to why the master 
couldn't finish initialization.

Cheers

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It uses the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Although {{GSSAPI}} supports AES, 
> it does not use AES-NI by default, so the encryption is slow and becomes a 
> bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can apply 
> the same optimization as in HDFS-6606: use AES-NI for a more than *20x* 
> speedup.
> On the other hand, RPC messages are small but frequent, and there may be many 
> RPC calls in one connection, so we need to set up a benchmark to measure the 
> real improvement and then make a trade-off.






[jira] [Comment Edited] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472743#comment-16472743
 ] 

Wei-Chiu Chuang edited comment on HADOOP-10768 at 5/12/18 1:57 AM:
---

I am reviewing this patch now and trying to push this feature as far as 
possible, as the RPC encryption performance problem is blocking some clusters 
that need to meet more stringent security compliance requirements.

There are already excellent reviews and comments by [~daryn], [~atm] and 
[~dapengsun], so I am just trying to clear roadblocks.

rev008 still applies against trunk but does not compile due to changes in 
HDFS-13087, HDFS-12594, etc.
To expedite the review process, here's rev009, which compiles against trunk.

We are testing rev008 on a live cluster now (Hadoop 3.0.0 + HBase 2.0.0-beta1 + 
other components). So far I found that HBase 2 does not compile with it, so I 
filed HBASE-20572 to address that.

Protocol-wise, it looks backward compatible, which is good since we won't need 
to wait for Hadoop 4 to include this feature.
I successfully ran some simple tests (reading/writing files) that mix new 
clients with an old cluster, which verifies that the ciphers are compatible 
too.

After applying the patch, a rolling upgrade with Cloudera Manager completed 
successfully, and a full cluster restart succeeded too.

More reviews to come ...

[Edit: upon further inspection, it looks like the HBase Master failed badly and 
could not finish starting. Will dig into this further.]

{noformat}
hbase(main):001:0> status 'detailed'

ERROR: org.apache.hadoop.hbase.PleaseHoldException: Master is initializing
at 
org.apache.hadoop.hbase.master.HMaster.checkInitialized(HMaster.java:2730)
at 
org.apache.hadoop.hbase.master.MasterRpcServices.getClusterStatus(MasterRpcServices.java:906)
at 
org.apache.hadoop.hbase.shaded.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:404)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
{noformat}


was (Author: jojochuang):
I am reviewing this patch now and trying to push this feature as far as 
possible, as the RPC encryption performance problem is blocking some clusters 
that need to meet more stringent security compliance requirements.

There are already excellent reviews and comments by [~daryn], [~atm] and 
[~dapengsun], so I am just trying to clear roadblocks.

rev008 still applies against trunk but does not compile due to changes in 
HDFS-13087, HDFS-12594, etc.
To expedite the review process, here's rev009, which compiles against trunk.

We are testing rev008 on a live cluster now (Hadoop 3.0.0 + HBase 2.0.0-beta1 + 
other components). So far I found that HBase 2 does not compile with it, so I 
filed HBASE-20572 to address that.

Protocol-wise, it looks backward compatible, which is good since we won't need 
to wait for Hadoop 4 to include this feature.
I successfully ran some simple tests (reading/writing files) that mix new 
clients with an old cluster, which verifies that the ciphers are compatible 
too.

After applying the patch, a rolling upgrade with Cloudera Manager completed 
successfully, and a full cluster restart succeeded too.

More reviews to come ...

[Edit: upon further inspection, it looks like the HBase Master failed badly and 
could not finish starting. Will dig into this further.]

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It uses the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Although {{GSSAPI}} supports AES, 
> it does not use AES-NI by default, so the encryption is slow and becomes a 
> bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can apply 
> the same optimization as in HDFS-6606: use AES-NI for a more than *20x* 
> speedup.
> On the other hand, RPC messages are small but frequent, and there may be many 
> RPC calls in one connection, so we need to set up a benchmark to measure the 
> real improvement and then make a trade-off.

[jira] [Comment Edited] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472743#comment-16472743
 ] 

Wei-Chiu Chuang edited comment on HADOOP-10768 at 5/12/18 1:17 AM:
---

I am reviewing this patch now and trying to push this feature as far as 
possible, as the RPC encryption performance problem is blocking some clusters 
that need to meet more stringent security compliance requirements.

There are already excellent reviews and comments by [~daryn], [~atm] and 
[~dapengsun], so I am just trying to clear roadblocks.

rev008 still applies against trunk but does not compile due to changes in 
HDFS-13087, HDFS-12594, etc.
To expedite the review process, here's rev009, which compiles against trunk.

We are testing rev008 on a live cluster now (Hadoop 3.0.0 + HBase 2.0.0-beta1 + 
other components). So far I found that HBase 2 does not compile with it, so I 
filed HBASE-20572 to address that.

Protocol-wise, it looks backward compatible, which is good since we won't need 
to wait for Hadoop 4 to include this feature.
I successfully ran some simple tests (reading/writing files) that mix new 
clients with an old cluster, which verifies that the ciphers are compatible 
too.

After applying the patch, a rolling upgrade with Cloudera Manager completed 
successfully, and a full cluster restart succeeded too.

More reviews to come ...

[Edit: upon further inspection, it looks like the HBase Master failed badly and 
could not finish starting. Will dig into this further.]


was (Author: jojochuang):
I am reviewing this patch now and trying to push this feature as far as 
possible, as the RPC encryption performance problem is blocking some clusters 
that need to meet more stringent security compliance requirements.

There are already excellent reviews and comments by [~daryn], [~atm] and 
[~dapengsun], so I am just trying to clear roadblocks.

rev008 still applies against trunk but does not compile due to changes in 
HDFS-13087, HDFS-12594, etc.
To expedite the review process, here's rev009, which compiles against trunk.

We are testing rev008 on a live cluster now (Hadoop 3.0.0 + HBase 2.0.0-beta1 + 
other components). So far I found that HBase 2 does not compile with it, so I 
filed HBASE-20572 to address that.

Protocol-wise, it looks backward compatible, which is good since we won't need 
to wait for Hadoop 4 to include this feature.
I successfully ran some simple tests (reading/writing files) that mix new 
clients with an old cluster, which verifies that the ciphers are compatible 
too.

After applying the patch, a rolling upgrade with Cloudera Manager completed 
successfully, and a full cluster restart succeeded too.

More reviews to come ...

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It uses the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Although {{GSSAPI}} supports AES, 
> it does not use AES-NI by default, so the encryption is slow and becomes a 
> bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can apply 
> the same optimization as in HDFS-6606: use AES-NI for a more than *20x* 
> speedup.
> On the other hand, RPC messages are small but frequent, and there may be many 
> RPC calls in one connection, so we need to set up a benchmark to measure the 
> real improvement and then make a trade-off.






[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472845#comment-16472845
 ] 

genericqa commented on HADOOP-10768:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
24s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 26m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
55s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
4m 36s{color} | {color:orange} root: The patch generated 25 new + 905 unchanged 
- 14 fixed = 930 total (was 919) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  6s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
35s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs-client generated 55 
new + 0 unchanged - 0 fixed = 55 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
54s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 8 new + 1 
unchanged - 0 fixed = 9 total (was 1) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
10s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 40s{color} 
| {color:red} hadoop-hdfs-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 42s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}147m 24s{color} | 

[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-11 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472743#comment-16472743
 ] 

Wei-Chiu Chuang commented on HADOOP-10768:
--

I am reviewing this patch now and trying to push this feature as far as 
possible, as the RPC encryption performance problem is blocking some clusters 
that need to meet more stringent security compliance requirements.

There are already excellent reviews and comments by [~daryn], [~atm] and 
[~dapengsun], so I am just trying to clear roadblocks.

rev008 still applies against trunk but does not compile due to changes in 
HDFS-13087, HDFS-12594, etc.
To expedite the review process, here's rev009, which compiles against trunk.

We are testing rev008 on a live cluster now (Hadoop 3.0.0 + HBase 2.0.0-beta1 + 
other components). So far I found that HBase 2 does not compile with it, so I 
filed HBASE-20572 to address that.

Protocol-wise, it looks backward compatible, which is good since we won't need 
to wait for Hadoop 4 to include this feature.
I successfully ran some simple tests (reading/writing files) that mix new 
clients with an old cluster, which verifies that the ciphers are compatible 
too.

After applying the patch, a rolling upgrade with Cloudera Manager completed 
successfully, and a full cluster restart succeeded too.

More reviews to come ...

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It uses the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Although {{GSSAPI}} supports AES, 
> it does not use AES-NI by default, so the encryption is slow and becomes a 
> bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can apply 
> the same optimization as in HDFS-6606: use AES-NI for a more than *20x* 
> speedup.
> On the other hand, RPC messages are small but frequent, and there may be many 
> RPC calls in one connection, so we need to set up a benchmark to measure the 
> real improvement and then make a trade-off.






[jira] [Updated] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2018-05-11 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-10768:
-
Attachment: HADOOP-10768.009.patch

> Optimize Hadoop RPC encryption performance
> --
>
> Key: HADOOP-10768
> URL: https://issues.apache.org/jira/browse/HADOOP-10768
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: performance, security
>Affects Versions: 3.0.0-alpha1
>Reporter: Yi Liu
>Assignee: Dapeng Sun
>Priority: Major
> Attachments: HADOOP-10768.001.patch, HADOOP-10768.002.patch, 
> HADOOP-10768.003.patch, HADOOP-10768.004.patch, HADOOP-10768.005.patch, 
> HADOOP-10768.006.patch, HADOOP-10768.007.patch, HADOOP-10768.008.patch, 
> HADOOP-10768.009.patch, Optimize Hadoop RPC encryption performance.pdf
>
>
> Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
> "privacy". It uses the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
> secure authentication and data protection. Although {{GSSAPI}} supports AES, 
> it does not use AES-NI by default, so the encryption is slow and becomes a 
> bottleneck.
> After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can apply 
> the same optimization as in HDFS-6606: use AES-NI for a more than *20x* 
> speedup.
> On the other hand, RPC messages are small but frequent, and there may be many 
> RPC calls in one connection, so we need to set up a benchmark to measure the 
> real improvement and then make a trade-off.






[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-11 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472496#comment-16472496
 ] 

Szilard Nemeth commented on HADOOP-15457:
-

[~kanwaljeets]
1. Sure, I see why you left the other introduced constants package-private: so 
they can be accessed from the tests.
2. Yes, the httpHeaderRegex could also be private; I missed that. I see you 
really do have something else with the same name (the regex string). I think 
the cleanest fix would be to differentiate them, maybe by using a "pattern" 
prefix for the Pattern static field.
I checked some other occurrences of static Patterns in the code; most of them 
use uppercase names, so I would vote for that.

Thanks!
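The naming convention being discussed could look like the following. The class and member names here are hypothetical illustrations, not the actual patch: the raw regex string and the compiled Pattern are kept as separate private static final constants, both with uppercase names.

```java
import java.util.regex.Pattern;

/** Hypothetical illustration of uppercase names for a regex string and its compiled Pattern. */
public class HttpServerHeaders {
    // Raw regex string kept separate from the compiled form, per the review comment.
    private static final String HTTP_HEADER_REGEX = "hadoop\\.http\\.header\\..+";
    private static final Pattern HTTP_HEADER_PATTERN = Pattern.compile(HTTP_HEADER_REGEX);

    /** True when a configuration key names an HTTP header property. */
    static boolean isHeaderProperty(String key) {
        return HTTP_HEADER_PATTERN.matcher(key).matches();
    }

    public static void main(String[] args) {
        System.out.println(isHeaderProperty("hadoop.http.header.X_Content_Type_Options")); // prints true
        System.out.println(isHeaderProperty("hadoop.tmp.dir")); // prints false
    }
}
```

Compiling the Pattern once as a static constant also avoids re-parsing the regex on every request.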

> Add Security-Related HTTP Response Header in Yarn WEBUIs.
> -
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, YARN-8198.001.patch, 
> YARN-8198.002.patch, YARN-8198.003.patch, YARN-8198.004.patch, 
> YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> headers to be added via XML config. We plan to make the two below default:
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> lines of:
> {code:java}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will pick up these properties and add them to the response 
> headers when Jetty prepares the response.
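The property-lifting mechanism described above can be sketched as follows. This is a hypothetical sketch, not the committed patch; in particular, the underscore-to-hyphen mapping of header names (so XML-safe keys like Strict_Transport_Security become the header Strict-Transport-Security) is an assumption here.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical sketch: lift "hadoop.http.header.*" config properties into response headers. */
public class HeaderLifter {
    private static final Pattern HTTP_HEADER_PATTERN =
        Pattern.compile("hadoop\\.http\\.header\\.(.+)");

    static Map<String, String> liftHeaders(Map<String, String> conf) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            Matcher m = HTTP_HEADER_PATTERN.matcher(e.getKey());
            if (m.matches()) {
                // Assumption: underscores in the key become hyphens in the header name.
                headers.put(m.group(1).replace('_', '-'), e.getValue());
            }
        }
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("hadoop.http.header.Strict_Transport_Security", "valHSTSFromXML");
        conf.put("some.other.key", "ignored");
        System.out.println(liftHeaders(conf));
        // prints {Strict-Transport-Security=valHSTSFromXML}
    }
}
```

In the real server the resulting map would be applied to each response as Jetty prepares it, alongside the two proposed defaults.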






[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-11 Thread Kanwaljeet Sachdev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472469#comment-16472469
 ] 

Kanwaljeet Sachdev commented on HADOOP-15457:
-

[~snemeth]
 # Sure, I will make it private. Most of the other introduced constants are 
used in the test file, so they were left non-private; we currently do not use 
this one in the tests, so it can be made private.
 # I will also make this one private. However, there is already a constant with 
the name you suggest. Since it is static final and effectively a constant, I 
can make it follow the convention under a slightly different name if you want. 
Let me know and I will change it accordingly and upload a new patch.

> Add Security-Related HTTP Response Header in Yarn WEBUIs.
> -
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, YARN-8198.001.patch, 
> YARN-8198.002.patch, YARN-8198.003.patch, YARN-8198.004.patch, 
> YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> headers to be configurable via XML config. Planning to make the below 
> two the defaults.
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> lines below:
> {code:xml}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them to the response 
> headers when Jetty prepares the response.






[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-11 Thread Szilard Nemeth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472455#comment-16472455
 ] 

Szilard Nemeth commented on HADOOP-15457:
-

Hey [~kanwaljeets]!

A couple of minor things I noticed: 

1. HttpServer2.X_FRAME_OPTIONS could be private
2. HttpServer2.httpHeaderRegex's name should be HTTP_HEADER_REGEX as it is a 
constant regex pattern.

Apart from these, it looks good.
Thanks

> Add Security-Related HTTP Response Header in Yarn WEBUIs.
> -
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, YARN-8198.001.patch, 
> YARN-8198.002.patch, YARN-8198.003.patch, YARN-8198.004.patch, 
> YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> headers to be configurable via XML config. Planning to make the below 
> two the defaults.
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> lines below:
> {code:xml}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them to the response 
> headers when Jetty prepares the response.






[jira] [Assigned] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-11 Thread Kanwaljeet Sachdev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kanwaljeet Sachdev reassigned HADOOP-15457:
---

Assignee: Kanwaljeet Sachdev

> Add Security-Related HTTP Response Header in Yarn WEBUIs.
> -
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Assignee: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, YARN-8198.001.patch, 
> YARN-8198.002.patch, YARN-8198.003.patch, YARN-8198.004.patch, 
> YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> headers to be configurable via XML config. Planning to make the below 
> two the defaults.
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along the 
> lines below:
> {code:xml}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them to the response 
> headers when Jetty prepares the response.






[jira] [Comment Edited] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-11 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472242#comment-16472242
 ] 

Takanobu Asanuma edited comment on HADOOP-10783 at 5/11/18 4:51 PM:


Uploaded the 2nd patch. It completely upgrades commons-lang from 2 to 3.

{noformat}
find . -name "*.java" | xargs grep "commons.lang" | grep -v lang3 | wc -l
   0
{noformat}


was (Author: tasanuma0829):
Uploaded the 2nd patch. It completely upgrades commons-lang from 2 to 3.

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)
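For illustration, the IS_OS_UNIX flag in commons-lang's SystemUtils boils down to prefix-matching the `os.name` system property against a list of known UNIX-like operating systems; the 2.6 list omitted FreeBSD, while commons-lang3 includes it. The sketch below is a simplified re-implementation of that idea, not the library's actual code, and the prefix list is an assumption.

```java
// Simplified re-implementation of the IS_OS_UNIX idea from commons-lang's
// SystemUtils, for illustration only: the flag is derived by prefix-matching
// the "os.name" system property. commons-lang 2.6's prefix list did not
// cover FreeBSD; commons-lang3's does. This list is a sketch, not the
// library's exact code.
public class OsCheck {
  static final String[] UNIX_PREFIXES = {
      "AIX", "HP-UX", "Irix", "Linux", "Mac OS X", "Solaris", "SunOS", "FreeBSD"
  };

  static boolean isOsUnix(String osName) {
    if (osName == null) {
      return false;
    }
    for (String prefix : UNIX_PREFIXES) {
      if (osName.startsWith(prefix)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    // On FreeBSD, os.name is "FreeBSD", so the 2.6-style list (without
    // that entry) would report false and trigger the IOException above.
    System.out.println(isOsUnix(System.getProperty("os.name")));
  }
}
```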






[jira] [Updated] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-11 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-10783:
--
Affects Version/s: (was: 2.4.1)
 Target Version/s: 3.2.0
   Status: Patch Available  (was: Open)

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)






[jira] [Commented] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-11 Thread Takanobu Asanuma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472242#comment-16472242
 ] 

Takanobu Asanuma commented on HADOOP-10783:
---

Uploaded the 2nd patch. It completely upgrades commons-lang from 2 to 3.

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)






[jira] [Updated] (HADOOP-10783) apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed

2018-05-11 Thread Takanobu Asanuma (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Takanobu Asanuma updated HADOOP-10783:
--
Attachment: HADOOP-10783.2.patch

> apache-commons-lang.jar 2.6 does not support FreeBSD -upgrade to 3.x needed
> ---
>
> Key: HADOOP-10783
> URL: https://issues.apache.org/jira/browse/HADOOP-10783
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1
>Reporter: Dmitry Sivachenko
>Assignee: Takanobu Asanuma
>Priority: Major
> Attachments: HADOOP-10783.2.patch, commons-lang3_1.patch
>
>
> Hadoop-2.4.1 ships with apache-commons.jar version 2.6.
> It does not support FreeBSD (IS_OS_UNIX returns False).
> This is fixed in recent versions of apache-commons.jar.
> Please update apache-commons.jar to a recent version so it correctly 
> recognizes FreeBSD as a UNIX-like system.
> Right now I get in the datanode's log:
> 2014-07-04 11:58:10,459 DEBUG org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry: Disabling ShortCircuitRegistry
> java.io.IOException: The OS is not UNIX.
> at org.apache.hadoop.io.nativeio.SharedFileDescriptorFactory.create(SharedFileDescriptorFactory.java:77)
> at org.apache.hadoop.hdfs.server.datanode.ShortCircuitRegistry.<init>(ShortCircuitRegistry.java:169)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:583)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:771)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:289)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1931)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1818)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1865)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2041)
> at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2065)






[jira] [Commented] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472234#comment-16472234
 ] 

Íñigo Goiri commented on HADOOP-15458:
--

I think the proper fix would be to remove that file at the end, which actually 
seems to be happening:
{code}
  @Before
  public void setup() throws IOException {
conf = new Configuration(false);
conf.set("fs.file.impl", LocalFileSystem.class.getName());
fileSys = FileSystem.getLocal(conf);
fileSys.delete(new Path(TEST_ROOT_DIR), true);
  }
  
  @After
  public void after() throws IOException {
FileUtil.setWritable(base, true);
FileUtil.fullyDelete(base);
assertTrue(!base.exists());
RawLocalFileSystem.useStatIfAvailable();
  }
{code}
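The underlying leak, an FSDataOutputStream created but never closed, keeps a file handle open and blocks directory deletion on Windows. The usual fix pattern is try-with-resources; below is a minimal, self-contained sketch of that pattern, with a java.io ByteArrayOutputStream standing in for the Hadoop stream type (this is not the actual patch).

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Minimal illustration of the fix pattern: any stream a test opens should
// be closed deterministically, e.g. via try-with-resources, so no handle
// outlives the test and blocks cleanup on Windows.
// ByteArrayOutputStream stands in for FSDataOutputStream here.
public class CloseStreamSketch {
  static int writeAndClose(byte[] data) {
    try (OutputStream out = new ByteArrayOutputStream()) {
      out.write(data); // the stream is closed automatically on block exit
      return data.length;
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    System.out.println(writeAndClose(new byte[]{1, 2, 3}));
  }
}
```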

> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15458-branch-2.000.patch, HADOOP-15458.000.patch
>
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  






[jira] [Comment Edited] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1647#comment-1647
 ] 

Íñigo Goiri edited comment on HADOOP-15458 at 5/11/18 4:37 PM:
---

It looks like Yetus was able to run it successfully for Linux 
[here|https://builds.apache.org/job/PreCommit-HADOOP-Build/14614/testReport/org.apache.hadoop.fs/TestLocalFileSystem/].

However, it seems that without calling build(), we won't be taking any action.
Can you detail how this would work?


was (Author: elgoiri):
It looks like Yetus was able to run it successfully for Linux 
[here|https://builds.apache.org/job/PreCommit-HADOOP-Build/14614/testReport/org.apache.hadoop.fs/TestLocalFileSystem/].
The warnings seem to be errors in the Yetus run.
+1
Committing all the way to branch-2.9.

> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15458-branch-2.000.patch, HADOOP-15458.000.patch
>
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  






[jira] [Commented] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1647#comment-1647
 ] 

Íñigo Goiri commented on HADOOP-15458:
--

It looks like Yetus was able to run it successfully for Linux 
[here|https://builds.apache.org/job/PreCommit-HADOOP-Build/14614/testReport/org.apache.hadoop.fs/TestLocalFileSystem/].
The warnings seem to be errors in the Yetus run.
+1
Committing all the way to branch-2.9.

> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15458-branch-2.000.patch, HADOOP-15458.000.patch
>
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  






[jira] [Commented] (HADOOP-15441) After HADOOP-14987, encryption zone operations print unnecessary INFO logs

2018-05-11 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16472048#comment-16472048
 ] 

Rushabh S Shah commented on HADOOP-15441:
-

[~gabor.bota]: The patch doesn't apply cleanly after the revert of HADOOP-14445.
Can you please rebase the patch?

> After HADOOP-14987, encryption zone operations print unnecessary INFO logs
> --
>
> Key: HADOOP-15441
> URL: https://issues.apache.org/jira/browse/HADOOP-15441
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Gabor Bota
>Priority: Minor
> Attachments: HADOOP-15441.001.patch, HADOOP-15441.002.patch
>
>
> It looks like after HADOOP-14987, any encryption zone operation prints extra 
> INFO log messages as follows:
> {code:java}
> $ hdfs dfs -copyFromLocal /etc/krb5.conf /scale/
> 18/05/02 11:54:55 INFO kms.KMSClientProvider: KMSClientProvider for KMS url: 
> https://hadoop3-1.example.com:16000/kms/v1/ delegation token service: 
> kms://ht...@hadoop3-1.example.com:16000/kms created.
> {code}
> It might make sense to make it a DEBUG message instead.
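Until the level is lowered in the code itself, the message can also be suppressed per-logger. A hedged example for log4j.properties follows; the logger name is assumed from the `kms.KMSClientProvider` prefix in the log line above, so verify it against the actual package in your deployment:

```properties
# Workaround sketch: raise the threshold for the logger that emits the
# "KMSClientProvider ... created" INFO line (class name assumed; check
# the fully qualified name in your own logs before applying).
log4j.logger.org.apache.hadoop.crypto.key.kms.KMSClientProvider=WARN
```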






[jira] [Commented] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471899#comment-16471899
 ] 

genericqa commented on HADOOP-15458:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
27s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
53s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 50m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  0m 
41s{color} | {color:red} The patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 96m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:f667ef1 |
| JIRA Issue | HADOOP-15458 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12923012/HADOOP-15458-branch-2.000.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux ebe5c113e6a3 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 11794e5 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_171 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14614/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14614/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14614/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1575 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14614/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  

[jira] [Commented] (HADOOP-15449) ZK performance issues causing frequent Namenode failover

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471797#comment-16471797
 ] 

genericqa commented on HADOOP-15449:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 26m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 26m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 23s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15449 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922997/HADOOP-15449-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 2af2ac336b61 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a922b9c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14613/testReport/ |
| Max. process+thread count | 1480 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Updated] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15458:

Attachment: HADOOP-15458-branch-2.000.patch

> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15458-branch-2.000.patch, HADOOP-15458.000.patch
>
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15458:

Status: Patch Available  (was: Open)

> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15458-branch-2.000.patch, HADOOP-15458.000.patch
>
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  






[jira] [Commented] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471775#comment-16471775
 ] 

Xiao Liang commented on HADOOP-15458:
-

The error message for this test case on Windows is as follows:

{color:#d04437}2018-05-11 04:42:08,909 WARN fs.FileUtil: Failed to delete file 
or dir 
[D:\Git\Hadoop\hadoop-common-project\hadoop-common\target\test\data\work-dir\localfs\testBuilder]:
 it still exists.{color}

{color:#d04437}java.lang.AssertionError{color}
{color:#d04437} at org.junit.Assert.fail(Assert.java:86){color}
{color:#d04437} at org.junit.Assert.assertTrue(Assert.java:41){color}
{color:#d04437} at org.junit.Assert.assertTrue(Assert.java:52){color}
{color:#d04437} at 
org.apache.hadoop.fs.TestLocalFileSystem.after(TestLocalFileSystem.java:99){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33){color}
{color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

With the patch, the test passes.
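The fix pattern here is generic: any stream a test opens must be closed before teardown can delete the test directory on Windows, where an open file handle blocks deletion. Below is a minimal, Hadoop-free sketch of that idea using plain java.nio; all names are illustrative and not taken from the actual patch:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class CloseBeforeDelete {
    public static void main(String[] args) throws IOException {
        // Create a scratch directory, as a test working dir would be.
        Path dir = Files.createTempDirectory("testBuilder");
        Path file = dir.resolve("out.dat");

        // try-with-resources guarantees the stream is closed, so the
        // later delete cannot fail with "it still exists" on Windows.
        try (OutputStream out = Files.newOutputStream(file)) {
            out.write(42);
        }

        // Teardown: with the stream closed, both deletes succeed.
        Files.delete(file);
        Files.delete(dir);
        System.out.println("cleanup ok: " + !Files.exists(dir));
    }
}
```

On Linux the deletes would succeed even with the stream still open, which is why the leak only surfaced on Windows.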

> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15458.000.patch
>
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  






[jira] [Comment Edited] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread Xiao Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471775#comment-16471775
 ] 

Xiao Liang edited comment on HADOOP-15458 at 5/11/18 11:44 AM:
---

The error message for this test case on Windows is as follows:

{color:#d04437}2018-05-11 04:42:08,909 WARN fs.FileUtil: Failed to delete file 
or dir 
[D:\Git\Hadoop\hadoop-common-project\hadoop-common\target\test\data\work-dir\localfs\testBuilder]:
 it still exists.{color}

{color:#d04437}java.lang.AssertionError{color}
 {color:#d04437} at org.junit.Assert.fail(Assert.java:86){color}
 {color:#d04437} at org.junit.Assert.assertTrue(Assert.java:41){color}
 {color:#d04437} at org.junit.Assert.assertTrue(Assert.java:52){color}
 {color:#d04437} at 
org.apache.hadoop.fs.TestLocalFileSystem.after(TestLocalFileSystem.java:99){color}
 {color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
 {color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
 {color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
 {color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
 {color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
 {color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
 {color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
 {color:#d04437} at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33){color}
 {color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

With [^HADOOP-15458.000.patch], the test passes.


was (Author: surmountian):
The error message for this test case on Windows is like:

{color:#d04437}2018-05-11 04:42:08,909 WARN fs.FileUtil: Failed to delete file 
or dir 
[D:\Git\Hadoop\hadoop-common-project\hadoop-common\target\test\data\work-dir\localfs\testBuilder]:
 it still exists.{color}

{color:#d04437}java.lang.AssertionError{color}
{color:#d04437} at org.junit.Assert.fail(Assert.java:86){color}
{color:#d04437} at org.junit.Assert.assertTrue(Assert.java:41){color}
{color:#d04437} at org.junit.Assert.assertTrue(Assert.java:52){color}
{color:#d04437} at 
org.apache.hadoop.fs.TestLocalFileSystem.after(TestLocalFileSystem.java:99){color}
{color:#d04437} at sun.reflect.NativeMethodAccessorImpl.invoke0(Native 
Method){color}
{color:#d04437} at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62){color}
{color:#d04437} at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43){color}
{color:#d04437} at java.lang.reflect.Method.invoke(Method.java:498){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47){color}
{color:#d04437} at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12){color}
{color:#d04437} at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44){color}
{color:#d04437} at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:33){color}
{color:#d04437} at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74){color}

With the patch, it's passed.

> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15458.000.patch
>
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  






[jira] [Updated] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15458:

Attachment: HADOOP-15458.000.patch

> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
> Attachments: HADOOP-15458.000.patch
>
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  






[jira] [Updated] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread Xiao Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Liang updated HADOOP-15458:

Description: 
In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
FSDataOutputStream object is unnecessarily created and not closed, which makes 
org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the folder on 
Windows.

 

  was:
In org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder a 

FSDataOutputStream object is unnecessarily created and not closed, which makes 
org.apache.hadoop.fs.TestLocalFileSystem#after fails to delete the folder on 
Windows.


> org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
> Windows
> ---
>
> Key: HADOOP-15458
> URL: https://issues.apache.org/jira/browse/HADOOP-15458
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Xiao Liang
>Assignee: Xiao Liang
>Priority: Major
>  Labels: windows
>
> In *org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder* an 
> FSDataOutputStream object is unnecessarily created and not closed, which 
> makes org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the 
> folder on Windows.
>  






[jira] [Created] (HADOOP-15458) org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on Windows

2018-05-11 Thread Xiao Liang (JIRA)
Xiao Liang created HADOOP-15458:
---

 Summary: 
org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder fails on 
Windows
 Key: HADOOP-15458
 URL: https://issues.apache.org/jira/browse/HADOOP-15458
 Project: Hadoop Common
  Issue Type: Test
Reporter: Xiao Liang
Assignee: Xiao Liang


In org.apache.hadoop.fs.TestLocalFileSystem#testFSOutputStreamBuilder an 
FSDataOutputStream object is unnecessarily created and not closed, which makes 
org.apache.hadoop.fs.TestLocalFileSystem#after fail to delete the folder on 
Windows.






[jira] [Commented] (HADOOP-15449) ZK performance issues causing frequent Namenode failover

2018-05-11 Thread Karthik Palanisamy (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471701#comment-16471701
 ] 

Karthik Palanisamy commented on HADOOP-15449:
-

Thank you [~ajisakaa]. I have made the change.

> ZK performance issues causing frequent Namenode failover 
> -
>
> Key: HADOOP-15449
> URL: https://issues.apache.org/jira/browse/HADOOP-15449
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Affects Versions: 2.7.4
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Critical
> Attachments: HADOOP-15449-002.patch, HADOOP-15449.patch
>
>
> We have observed from several users that Namenode flip-over is due to either 
> zookeeper disk slowness (higher fsync cost) or a network issue. We would need 
> to avoid the flip-over issue to some extent by increasing the HA session 
> timeout, ha.zookeeper.session-timeout.ms.
> The default value is 5000 ms, which seems very low for any production 
> environment. I would suggest 1 ms as the default session timeout.
>  
> {code}
> ..
> 2018-05-04 03:54:36,848 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 4689ms for sessionid 0x260e24bac500aa3, closing socket connection 
> and attempting reconnect 
> 2018-05-04 03:56:49,088 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 3981ms for sessionid 0x360fd152b8700fe, closing socket connection 
> and attempting reconnect
> .. 
> {code}
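For reference, the setting under discussion lives in core-site.xml. A hedged illustration of raising it follows; the 10000 ms value is purely illustrative, chosen only to be above the 5000 ms default, and is not necessarily the value proposed in the patch:

```xml
<!-- core-site.xml -->
<property>
  <!-- HA session timeout used by the ZKFailoverController (default: 5000 ms). -->
  <name>ha.zookeeper.session-timeout.ms</name>
  <!-- Illustrative value: large enough to ride out transient ZK fsync or network stalls. -->
  <value>10000</value>
</property>
```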






[jira] [Updated] (HADOOP-15449) ZK performance issues causing frequent Namenode failover

2018-05-11 Thread Karthik Palanisamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Palanisamy updated HADOOP-15449:

Attachment: HADOOP-15449-002.patch

> ZK performance issues causing frequent Namenode failover 
> -
>
> Key: HADOOP-15449
> URL: https://issues.apache.org/jira/browse/HADOOP-15449
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Affects Versions: 2.7.4
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Critical
> Attachments: HADOOP-15449-002.patch, HADOOP-15449.patch
>
>
> We have observed from several users that Namenode flip-over is due to either 
> zookeeper disk slowness (higher fsync cost) or a network issue. We would need 
> to avoid the flip-over issue to some extent by increasing the HA session 
> timeout, ha.zookeeper.session-timeout.ms.
> The default value is 5000 ms, which seems very low for any production 
> environment. I would suggest 1 ms as the default session timeout.
>  
> {code}
> ..
> 2018-05-04 03:54:36,848 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 4689ms for sessionid 0x260e24bac500aa3, closing socket connection 
> and attempting reconnect 
> 2018-05-04 03:56:49,088 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 3981ms for sessionid 0x360fd152b8700fe, closing socket connection 
> and attempting reconnect
> .. 
> {code}






[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-11 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471672#comment-16471672
 ] 

genericqa commented on HADOOP-15457:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
36s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 10s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m  
0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m  
0s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 41 new + 91 unchanged - 3 fixed = 132 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m  9s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 13s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:abb62dd |
| JIRA Issue | HADOOP-15457 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12922963/HADOOP-15457.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 0d80d3260d88 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a922b9c |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14612/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14612/testReport/ |
| Max. process+thread count | 1716 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14612/console |
| Powered by | Apache Yetus 

[jira] [Commented] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-11 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471670#comment-16471670
 ] 

Elek, Marton commented on HADOOP-15456:
---

Thank you [~ajayydv] for working on this. I think it's a very valuable change 
and it will be useful not just for HDDS-10 but for testing any 
security-related HDFS/YARN issue from docker.

Therefore I suggest updating the apache/hadoop-runner image instead of 
creating a new one. The source of that image is on the docker-hadoop-runner 
branch. I just created a diff based on your tar and uploaded it to this issue.

Some small comments:

1. As far as I can see, the only incompatible change between the existing 
apache/hadoop-runner image and your base image is that you removed the 
'USER hadoop' line. Is there any reason for that?

2. There is some commented-out code in starter.sh (e.g. the keystore 
download). If we don't need wire encryption yet, we can simply remove those 
lines. There are also other disabled lines (sleep, volume permission fix); 
I am just wondering whether they are intentional.

3. You have a loop to wait for the KDC server. I really like it, as it makes 
starting the kerberized containers safer. Just two notes: IMHO the loop should 
be executed only if KERBEROS SERVER is set, and you could add the word 'KDC' 
to the printout in the else case to make it clearer that we are waiting for 
the KDC...

4. If it will be a shared runner image for hadoop/hdds/hdfs/yarn, the 
readme should be adjusted a little.


> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15456-docker-hadoop-runner.001.patch, 
> secure-ozone.tar
>
>
> Create docker image to run secure ozone cluster.






[jira] [Updated] (HADOOP-15456) create base image for running secure ozone cluster

2018-05-11 Thread Elek, Marton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elek, Marton updated HADOOP-15456:
--
Attachment: HADOOP-15456-docker-hadoop-runner.001.patch

> create base image for running secure ozone cluster
> --
>
> Key: HADOOP-15456
> URL: https://issues.apache.org/jira/browse/HADOOP-15456
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ajay Kumar
>Assignee: Ajay Kumar
>Priority: Major
> Attachments: HADOOP-15456-docker-hadoop-runner.001.patch, 
> secure-ozone.tar
>
>
> Create docker image to run secure ozone cluster.






[jira] [Commented] (HADOOP-15449) ZK performance issues causing frequent Namenode failover

2018-05-11 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471609#comment-16471609
 ] 

Akira Ajisaka commented on HADOOP-15449:


Would you update the value as well?
{code:title=ZKFailoverController.java}
  private static final int ZK_SESSION_TIMEOUT_DEFAULT = 5*1000;
{code}

> ZK performance issues causing frequent Namenode failover 
> -
>
> Key: HADOOP-15449
> URL: https://issues.apache.org/jira/browse/HADOOP-15449
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Affects Versions: 2.7.4
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Critical
> Attachments: HADOOP-15449.patch
>
>
> We have observed from several users that Namenode flip-over is due to either 
> zookeeper disk slowness (higher fsync cost) or a network issue. We would need 
> to avoid the flip-over issue to some extent by increasing the HA session 
> timeout, ha.zookeeper.session-timeout.ms.
> The default value is 5000 ms, which seems very low for any production 
> environment. I would suggest 1 ms as the default session timeout.
>  
> {code}
> ..
> 2018-05-04 03:54:36,848 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 4689ms for sessionid 0x260e24bac500aa3, closing socket connection 
> and attempting reconnect 
> 2018-05-04 03:56:49,088 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 3981ms for sessionid 0x360fd152b8700fe, closing socket connection 
> and attempting reconnect
> .. 
> {code}






[jira] [Comment Edited] (HADOOP-15449) ZK performance issues causing frequent Namenode failover

2018-05-11 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471606#comment-16471606
 ] 

Akira Ajisaka edited comment on HADOOP-15449 at 5/11/18 7:42 AM:
-

+1 for extending the timeout. We observed the same issue.


was (Author: ajisakaa):
+1. We observed the same issue.

> ZK performance issues causing frequent Namenode failover 
> -
>
> Key: HADOOP-15449
> URL: https://issues.apache.org/jira/browse/HADOOP-15449
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Affects Versions: 2.7.4
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Critical
> Attachments: HADOOP-15449.patch
>
>
> We have observed from several users that Namenode flip-over is due to either 
> zookeeper disk slowness (higher fsync cost) or a network issue. We would need 
> to avoid the flip-over issue to some extent by increasing the HA session 
> timeout, ha.zookeeper.session-timeout.ms.
> The default value is 5000 ms, which seems very low for any production 
> environment. I would suggest 1 ms as the default session timeout.
>  
> {code}
> ..
> 2018-05-04 03:54:36,848 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 4689ms for sessionid 0x260e24bac500aa3, closing socket connection 
> and attempting reconnect 
> 2018-05-04 03:56:49,088 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 3981ms for sessionid 0x360fd152b8700fe, closing socket connection 
> and attempting reconnect
> .. 
> {code}






[jira] [Commented] (HADOOP-15449) ZK performance issues causing frequent Namenode failover

2018-05-11 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471606#comment-16471606
 ] 

Akira Ajisaka commented on HADOOP-15449:


+1. We observed the same issue.

> ZK performance issues causing frequent Namenode failover 
> -
>
> Key: HADOOP-15449
> URL: https://issues.apache.org/jira/browse/HADOOP-15449
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: common
>Affects Versions: 2.7.4
>Reporter: Karthik Palanisamy
>Assignee: Karthik Palanisamy
>Priority: Critical
> Attachments: HADOOP-15449.patch
>
>
> We have observed from several users that Namenode flip-over is due to either 
> zookeeper disk slowness (higher fsync cost) or a network issue. We would need 
> to avoid the flip-over issue to some extent by increasing the HA session 
> timeout, ha.zookeeper.session-timeout.ms.
> The default value is 5000 ms, which seems very low for any production 
> environment. I would suggest 1 ms as the default session timeout.
>  
> {code}
> ..
> 2018-05-04 03:54:36,848 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 4689ms for sessionid 0x260e24bac500aa3, closing socket connection 
> and attempting reconnect 
> 2018-05-04 03:56:49,088 INFO  zookeeper.ClientCnxn 
> (ClientCnxn.java:run(1140)) - Client session timed out, have not heard from 
> server in 3981ms for sessionid 0x360fd152b8700fe, closing socket connection 
> and attempting reconnect
> .. 
> {code}






[jira] [Updated] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-11 Thread Kanwaljeet Sachdev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kanwaljeet Sachdev updated HADOOP-15457:

Attachment: HADOOP-15457.001.patch

> Add Security-Related HTTP Response Header in Yarn WEBUIs.
> -
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, YARN-8198.001.patch, 
> YARN-8198.002.patch, YARN-8198.003.patch, YARN-8198.004.patch, 
> YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> headers to be added via XML config. We plan to make the two below the 
> defaults:
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along 
> the lines below:
> {code:xml}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them to the response 
> headers when Jetty prepares the response.
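The "regex matcher" mechanism can be sketched without any Hadoop or Jetty dependency. Everything below (the key prefix, the underscore-to-hyphen mapping, the class and method names) is an illustrative guess at the approach, not code from the patch:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HeaderLifter {
    // Matches configuration keys of the form hadoop.http.header.<Header_Name>.
    private static final Pattern HEADER_KEY =
            Pattern.compile("^hadoop\\.http\\.header\\.(.+)$");

    /** Lift matching config properties into HTTP response header name/value pairs. */
    static Map<String, String> liftHeaders(Map<String, String> conf) {
        Map<String, String> headers = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : conf.entrySet()) {
            Matcher m = HEADER_KEY.matcher(e.getKey());
            if (m.matches()) {
                // Assumed convention: underscores in the property key map to
                // hyphens in the header name.
                headers.put(m.group(1).replace('_', '-'), e.getValue());
            }
        }
        return headers;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new LinkedHashMap<>();
        conf.put("hadoop.http.header.Strict_Transport_Security", "valHSTSFromXML");
        conf.put("io.file.buffer.size", "65536"); // no header prefix: ignored
        System.out.println(liftHeaders(conf));
    }
}
```

In the real filter each lifted entry would be added to the Jetty response as it is prepared; a plain map stands in for that here.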






[jira] [Commented] (HADOOP-15457) Add Security-Related HTTP Response Header in Yarn WEBUIs.

2018-05-11 Thread Kanwaljeet Sachdev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16471515#comment-16471515
 ] 

Kanwaljeet Sachdev commented on HADOOP-15457:
-

# Changed the component.
 # Made it static.
 # Removed the unused group.
 # Changed it to matches.
 # There is some historical context here: I found that HDFS-10579 added 
override options. If the new mechanism were leveraged for overrides, it might 
break upgrades in some scenarios. To minimize the impact, I moved the addition 
of the xFrame header into the newly added code but decided to keep the params 
that come in for this header the original way.
 # Fixed it.

> Add Security-Related HTTP Response Header in Yarn WEBUIs.
> -
>
> Key: HADOOP-15457
> URL: https://issues.apache.org/jira/browse/HADOOP-15457
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Kanwaljeet Sachdev
>Priority: Major
>  Labels: security
> Attachments: HADOOP-15457.001.patch, YARN-8198.001.patch, 
> YARN-8198.002.patch, YARN-8198.003.patch, YARN-8198.004.patch, 
> YARN-8198.005.patch
>
>
> As of today, the YARN web UI lacks certain security-related HTTP response 
> headers. We are planning to add a few default ones and also add support for 
> headers to be added via XML config. We plan to make the two below the 
> defaults:
>  * X-XSS-Protection: 1; mode=block
>  * X-Content-Type-Options: nosniff
>  
> Support for headers via config properties in core-site.xml will be along 
> the lines below:
> {code:xml}
> <property>
>   <name>hadoop.http.header.Strict_Transport_Security</name>
>   <value>valHSTSFromXML</value>
> </property>
> {code}
>  
> A regex matcher will lift these properties and add them to the response 
> headers when Jetty prepares the response.


