[jira] [Commented] (HADOOP-15042) Azure PageBlobInputStream.skip() can return negative value when numberOfPagesRemaining is 0

2017-11-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16268632#comment-16268632
 ] 

Steve Loughran commented on HADOOP-15042:
-

+1 committed to: trunk, branch-3, branch-2
 
minor commentary

* please mark up the versions which have this problem and which versions the 
patch is intended for.
* you can run the tests in parallel for faster test runs... look at 
hadoop-tools/hadoop-azure/src/site/markdown/testing_azure.md for the 
instructions (an example invocation is sketched below).
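For reference, a parallel run per that document looks something like the 
following (the exact flags and thread-count property are defined in 
testing_azure.md, so treat this invocation as an assumption and verify there):

{noformat}
mvn -T 1C clean verify -Dparallel-tests -DtestsThreadCount=8
{noformat}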



> Azure PageBlobInputStream.skip() can return negative value when 
> numberOfPagesRemaining is 0
> ---
>
> Key: HADOOP-15042
> URL: https://issues.apache.org/jira/browse/HADOOP-15042
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-15042.001.patch
>
>
> {{PageBlobInputStream::skip-->skipImpl}} returns negative values when 
> {{numberOfPagesRemaining=0}}. This can cause a wrong position to be set in 
> NativeAzureFileSystem::seek() and can lead to errors.
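Purely as an illustration of the failure mode (hypothetical code, not the 
attached patch): the clamp {{skipImpl}} needs has roughly this shape, so that 
{{skip()}} never reports a negative count once no pages remain.

{code:title=Illustrative clamp - hypothetical, not the attached patch}
// Never report more skipped bytes than actually remain; at or past EOF
// (numberOfPagesRemaining == 0) report zero rather than a negative count.
static long safeSkip(long requested, long bytesRemaining) {
  if (requested <= 0 || bytesRemaining <= 0) {
    return 0;
  }
  return Math.min(requested, bytesRemaining);
}
{code}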



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15042) Azure PageBlobInputStream.skip() can return negative value when numberOfPagesRemaining is 0

2017-11-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16268645#comment-16268645
 ] 

Hudson commented on HADOOP-15042:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13283 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13283/])
HADOOP-15042. Azure PageBlobInputStream.skip() can return negative value 
(stevel: rev 0ea182d0faa35c726dcb37249d48786bfc8ca04c)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/PageBlobInputStream.java


> Azure PageBlobInputStream.skip() can return negative value when 
> numberOfPagesRemaining is 0
> ---
>
> Key: HADOOP-15042
> URL: https://issues.apache.org/jira/browse/HADOOP-15042
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-15042.001.patch
>
>
> {{PageBlobInputStream::skip-->skipImpl}} returns negative values when 
> {{numberOfPagesRemaining=0}}. This can cause a wrong position to be set in 
> NativeAzureFileSystem::seek() and can lead to errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14444) New implementation of ftp and sftp filesystems

2017-11-28 Thread Lukas Waldmann (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16268529#comment-16268529
 ] 

Lukas Waldmann commented on HADOOP-14444:
-

The patch was based on trunk at the time of submission - a few months ago :) I 
will rebase and submit again.

@Steve: I will deal with the other comments after rebasing.

> New implementation of ftp and sftp filesystems
> --
>
> Key: HADOOP-14444
> URL: https://issues.apache.org/jira/browse/HADOOP-14444
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Lukas Waldmann
>Assignee: Lukas Waldmann
> Attachments: HADOOP-14444.10.patch, HADOOP-14444.2.patch, 
> HADOOP-14444.3.patch, HADOOP-14444.4.patch, HADOOP-14444.5.patch, 
> HADOOP-14444.6.patch, HADOOP-14444.7.patch, HADOOP-14444.8.patch, 
> HADOOP-14444.9.patch, HADOOP-14444.patch
>
>
> The current implementation of the FTP and SFTP filesystems has severe 
> limitations and performance issues when dealing with a high number of files. 
> My patch solves those issues and integrates both filesystems in such a way 
> that most of the core functionality is common to both, which simplifies 
> maintenance.
> The core features:
> * Support for HTTP/SOCKS proxies
> * Support for passive FTP
> * Support for explicit FTPS (SSL/TLS)
> * Support for connection pooling - a new connection is not created for every 
> single command but is reused from the pool.
> For a huge number of files this shows an order-of-magnitude performance 
> improvement over unpooled connections.
> * Caching of directory trees. For FTP you always need to list the whole 
> directory whenever you ask for information about a particular file.
> Again, for a huge number of files this shows an order-of-magnitude 
> performance improvement over uncached connections.
> * Support for keep-alive (NOOP) messages to avoid connection drops
> * Support for Unix-style or regexp wildcard globs - useful for listing 
> particular files across a whole directory tree
> * Support for re-establishing broken FTP data transfers - which can happen 
> surprisingly often



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15042) Azure PageBlobInputStream.skip() can return negative value when numberOfPagesRemaining is 0

2017-11-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15042:

Summary: Azure PageBlobInputStream.skip() can return negative value when 
numberOfPagesRemaining is 0  (was: Azure::PageBlobInputStream::skip can return 
-ve value when numberOfPagesRemaining is 0)

> Azure PageBlobInputStream.skip() can return negative value when 
> numberOfPagesRemaining is 0
> ---
>
> Key: HADOOP-15042
> URL: https://issues.apache.org/jira/browse/HADOOP-15042
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-15042.001.patch
>
>
> {{PageBlobInputStream::skip-->skipImpl}} returns negative values when 
> {{numberOfPagesRemaining=0}}. This can cause a wrong position to be set in 
> NativeAzureFileSystem::seek() and can lead to errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15042) Azure PageBlobInputStream.skip() can return negative value when numberOfPagesRemaining is 0

2017-11-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15042:

Affects Version/s: 2.9.0

> Azure PageBlobInputStream.skip() can return negative value when 
> numberOfPagesRemaining is 0
> ---
>
> Key: HADOOP-15042
> URL: https://issues.apache.org/jira/browse/HADOOP-15042
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.9.0
>Reporter: Rajesh Balamohan
>Assignee: Rajesh Balamohan
>Priority: Minor
> Attachments: HADOOP-15042.001.patch
>
>
> {{PageBlobInputStream::skip-->skipImpl}} returns negative values when 
> {{numberOfPagesRemaining=0}}. This can cause a wrong position to be set in 
> NativeAzureFileSystem::seek() and can lead to errors.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2017-11-28 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269164#comment-16269164
 ] 

Xiao Chen edited comment on HADOOP-14445 at 11/28/17 6:05 PM:
--

Thanks for the offer [~shahrs87], totally fine if you want to update the 
patch - I just want momentum on this. :)

For compatibility, I think we need:
- old client works with new format token
- old client renewer works with new token
- new client works with old format token
- new client renewer works with old format token (the RM case you commented on)

This is rather exhaustive, but IMO necessary to support rolling upgrades and 
clients that talk to multiple clusters.

I'll review HDFS-12355 and related jiras.


was (Author: xiaochen):
Thanks for the offer [~shahrs87], totally fine if you want to update the 
patch - I just want momentum on this. :)

For compatibility, I think we need:
- old client works with new format token
- old client renewer works with new token (the RM case you commented on)
- new client works with old format token
- new client renewer works with old format token

This is rather exhaustive, but IMO necessary to support rolling upgrades and 
clients that talk to multiple clusters.

I'll review HDFS-12355 and related jiras.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14445-branch-2.8.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider 
> does not share delegation tokens (a client uses the KMS address/port as the 
> key for the delegation token):
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But the KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, a KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.
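To make the mismatch concrete, a small sketch (hostnames and port are 
hypothetical): each KMS instance yields a distinct token service key, so a 
token cached under one instance's address is never found for the others.

{code:title=Sketch: per-instance service keys never match}
InetSocketAddress kms1 = new InetSocketAddress("kms1.example.com", 9600);
InetSocketAddress kms2 = new InetSocketAddress("kms2.example.com", 9600);
Text s1 = SecurityUtil.buildTokenService(kms1);  // e.g. "10.0.0.1:9600"
Text s2 = SecurityUtil.buildTokenService(kms2);  // e.g. "10.0.0.2:9600"
// creds.getToken(s1) and creds.getToken(s2) look up different entries, so a
// delegation token fetched via kms1 is invisible once the load balancer
// routes the next request to kms2.
{code}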



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2017-11-28 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269174#comment-16269174
 ] 

Xiao Chen commented on HADOOP-14445:


Oops, message race with Daryn.

bq. if kp conf is set, unconditionally use it
Yeah, I think this takes care of all those enumerations. :)
This way we'd require all clients to update their config (remove kp) after the 
upgrade, though. Since this would be a one-off, I think it's probably fine, 
assuming we document it clearly. Will think about this in more detail during 
review...

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14445-branch-2.8.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider 
> does not share delegation tokens (a client uses the KMS address/port as the 
> key for the delegation token):
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But the KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, a KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-28 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269021#comment-16269021
 ] 

Sean Mackrory edited comment on HADOOP-14475 at 11/28/17 4:44 PM:
--

Yeah, FileSink seems to be too primitive for real-world use: I'm looking at 
RollingFileSystemSink for all but basic testing of this patch. I'm supportive 
of adding other sinks, and of using one for testing (I gave FileSink a quick 
try for that, but since it truncates the file every time, you have to tail -f 
it or you lose most of the data), but let's maybe handle that as a distinct 
issue from hooking up S3A to metrics2 as a source?


was (Author: mackrorysd):
Yeah, FileSink seems to be too primitive for real-world use: I'm looking at 
RollingFileSystemSink for all but basic testing of this patch. I'm supportive 
of adding other sinks, especially for testing (I gave FileSink a quick try for 
that, but since it truncates the file every time, you have to tail -f it or 
you lose most of the data), but let's maybe handle that as a distinct issue 
from hooking up S3A to metrics2 as a source?

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14775.007.patch, failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job that should use S3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2017-11-28 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269164#comment-16269164
 ] 

Xiao Chen commented on HADOOP-14445:


Thanks for the offer [~shahrs87], totally fine if you want to update the 
patch - I just want momentum on this. :)

For compatibility, I think we need:
- old client works with new format token
- old client renewer works with new token (the RM case you commented on)
- new client works with old format token
- new client renewer works with old format token

This is rather exhaustive, but IMO necessary to support rolling upgrades and 
clients that talk to multiple clusters.

I'll review HDFS-12355 and related jiras.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14445-branch-2.8.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider 
> does not share delegation tokens (a client uses the KMS address/port as the 
> key for the delegation token):
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But the KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, a KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-28 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16268964#comment-16268964
 ] 

Sean Mackrory commented on HADOOP-14475:


Okay - sounds like we're on the same page, then - that's good!

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14775.007.patch, failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job that should use S3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-28 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269021#comment-16269021
 ] 

Sean Mackrory commented on HADOOP-14475:


Yeah, FileSink seems to be too primitive for real-world use: I'm looking at 
RollingFileSystemSink for all but basic testing of this patch. I'm supportive 
of adding other sinks, especially for testing (I gave FileSink a quick try for 
that, but since it truncates the file every time, you have to tail -f it or 
you lose most of the data), but let's maybe handle that as a distinct issue 
from hooking up S3A to metrics2 as a source?
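For anyone following along, a minimal metrics2 properties sketch for the sink 
mentioned above (the sink name "rfs" and the property names are assumptions; 
check RollingFileSystemSink's documentation for your version):

{noformat}
*.sink.rfs.class=org.apache.hadoop.metrics2.sink.RollingFileSystemSink
*.sink.rfs.basepath=/tmp/s3a-metrics
*.period=10
{noformat}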

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14775.007.patch, failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job that should use S3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14699) Impersonation errors with UGI after second principal relogin

2017-11-28 Thread Jeff Storck (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269059#comment-16269059
 ] 

Jeff Storck commented on HADOOP-14699:
--

Thanks, [~daryn].  I appreciate your work on HADOOP-9747!

> Impersonation errors with UGI after second principal relogin
> 
>
> Key: HADOOP-14699
> URL: https://issues.apache.org/jira/browse/HADOOP-14699
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.2, 2.7.3, 2.8.1
>Reporter: Jeff Storck
>
> Multiple principals that are logged in using UGI instances that are 
> instantiated from a UGI class loaded by the same classloader will encounter 
> problems when the second principal attempts to relogin and perform an action 
> using a UGI.doAs().  An impersonation will occur and the operation attempted 
> by the second principal after relogging in will fail.  There should not be an 
> implicit attempt to impersonate the second principal through the first 
> principal that logged in.
> I have created a GitHub project that exhibits the impersonation error, with 
> brief instructions on how to set up the test and run it: 
> https://github.com/jtstorck/ugi-test
> {noformat}18:44:55.687 [pool-2-thread-2] WARN  
> h.u.u.ugirunnable.ugite...@example.com - Unexpected exception while 
> performing task for [ugite...@example.com (auth:KERBEROS)]
> org.apache.hadoop.ipc.RemoteException: User: ugite...@example.com is not 
> allowed to impersonate ugite...@example.com
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1427)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1337)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:787)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
>   at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1700)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1436)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1433)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1448)
>   at 
> hadoop.ugitest.UgiTestMain$UgiRunnable.lambda$run$2(UgiTestMain.java:194)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
>   at hadoop.ugitest.UgiTestMain$UgiRunnable.run(UgiTestMain.java:194)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2017-11-28 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269137#comment-16269137
 ] 

Rushabh S Shah commented on HADOOP-14445:
-

Hi [~xiaochen],
Thanks for taking a look at the patch.
You have summarized it pretty well.
I just have to figure out how the old tokens will work with the upgraded 
ResourceManager.
The current patch will break cancelling and renewing old tokens (ip:port 
format) with the upgraded ResourceManager.
Do you mind if I refresh the patch in a few days?
Right now my high-priority work is to add EZ support for the WebHDFS file 
system (HDFS-12355).
Progress there seems stuck on getting reviews. Since you are reviewing many of 
my EZ-related patches, do you mind reviewing HDFS-12396?
Since that progress is stuck, I will find some cycles to update this patch.
Hope you don't mind.

> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14445-branch-2.8.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider 
> does not share delegation tokens (a client uses the KMS address/port as the 
> key for the delegation token):
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But the KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, a KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances

2017-11-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269159#comment-16269159
 ] 

Daryn Sharp commented on HADOOP-14445:
--

The super easy solution to compatibility: if kp conf is set, unconditionally 
use it, else use the token's service to instantiate the provider.

Today the token service is irrelevant and there "can be only one provider" per 
the conf.  Let's keep that.  It's compatible for both old/new clients 
submitting jobs.

Before enabling the new behavior, leave the RM's kp conf set during the 
transition.  New clients acquire tokens with a service uri but it continues to 
be ignored by the RM.  When all clients are upgraded, remove the kp conf from 
the RM's conf.  Done.
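In code form, the proposed rule is roughly the following (the helper is 
hypothetical; the conf key is the one used on trunk, verify per version):

{code:title=Sketch of the proposed selection rule - helper name hypothetical}
// If the conf names a key provider, it wins unconditionally (today's
// "only one provider per conf" behavior, compatible with old clients).
// Only when the conf is silent does the token's service URI decide.
static KeyProvider resolveProvider(Configuration conf, Token<?> token)
    throws IOException {
  String kp = conf.getTrimmed("hadoop.security.key.provider.path");
  if (kp != null && !kp.isEmpty()) {
    return providerFromUri(kp);                          // hypothetical helper
  }
  return providerFromUri(token.getService().toString()); // new-style fallback
}
{code}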



> Delegation tokens are not shared between KMS instances
> --
>
> Key: HADOOP-14445
> URL: https://issues.apache.org/jira/browse/HADOOP-14445
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, kms
>Affects Versions: 2.8.0, 3.0.0-alpha1
>Reporter: Wei-Chiu Chuang
>Assignee: Rushabh S Shah
> Attachments: HADOOP-14445-branch-2.8.patch
>
>
> As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider 
> does not share delegation tokens (a client uses the KMS address/port as the 
> key for the delegation token):
> {code:title=DelegationTokenAuthenticatedURL#openConnection}
> if (!creds.getAllTokens().isEmpty()) {
>   InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(),
>       url.getPort());
>   Text service = SecurityUtil.buildTokenService(serviceAddr);
>   dToken = creds.getToken(service);
> {code}
> But the KMS doc states:
> {quote}
> Delegation Tokens
> Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation 
> tokens too.
> Under HA, a KMS instance must verify the delegation token given by another 
> KMS instance, by checking the shared secret used to sign the delegation 
> token. To do this, all KMS instances must be able to retrieve the shared 
> secret from ZooKeeper.
> {quote}
> We should either update the KMS documentation, or fix this code to share 
> delegation tokens.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269224#comment-16269224
 ] 

genericqa commented on HADOOP-15059:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
30s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 52s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
16s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 11m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
56s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  5s{color} | {color:orange} root: The patch generated 1 new + 364 unchanged 
- 7 fixed = 365 total (was 371) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
57s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
42s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 16m 
37s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}123m 41s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15059 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899641/HADOOP-15059.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | Linux d2dc47a813ed 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 

[jira] [Commented] (HADOOP-14699) Impersonation errors with UGI after second principal relogin

2017-11-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16268959#comment-16268959
 ] 

Daryn Sharp commented on HADOOP-14699:
--

FYI, will soon be getting back to the UGI patch that incidentally fixes this 
issue.

> Impersonation errors with UGI after second principal relogin
> 
>
> Key: HADOOP-14699
> URL: https://issues.apache.org/jira/browse/HADOOP-14699
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 2.6.2, 2.7.3, 2.8.1
>Reporter: Jeff Storck
>
> Multiple principals that are logged in using UGI instances that are 
> instantiated from a UGI class loaded by the same classloader will encounter 
> problems when the second principal attempts to relogin and perform an action 
> using a UGI.doAs().  An impersonation will occur and the operation attempted 
> by the second principal after relogging in will fail.  There should not be an 
> implicit attempt to impersonate the second principal through the first 
> principal that logged in.
> I have created a GitHub project that exhibits the impersonation error, with 
> brief instructions on how to set up the test and run it: 
> https://github.com/jtstorck/ugi-test
> {noformat}18:44:55.687 [pool-2-thread-2] WARN  
> h.u.u.ugirunnable.ugite...@example.com - Unexpected exception while 
> performing task for [ugite...@example.com (auth:KERBEROS)]
> org.apache.hadoop.ipc.RemoteException: User: ugite...@example.com is not 
> allowed to impersonate ugite...@example.com
>   at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1427)
>   at org.apache.hadoop.ipc.Client.call(Client.java:1337)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
>   at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
>   at 
> org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:787)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
>   at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335)
>   at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
>   at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1700)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1436)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1433)
>   at 
> org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1448)
>   at 
> hadoop.ugitest.UgiTestMain$UgiRunnable.lambda$run$2(UgiTestMain.java:194)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
>   at hadoop.ugitest.UgiTestMain$UgiRunnable.run(UgiTestMain.java:194)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
>   at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745){noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Resolved] (HADOOP-9513) Wrong file permissions in hadoop-1.1.2-1.x86_64.rpm package

2017-11-28 Thread Javi Roman (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Javi Roman resolved HADOOP-9513.

Resolution: Won't Fix

> Wrong file permissions in hadoop-1.1.2-1.x86_64.rpm package
> ---
>
> Key: HADOOP-9513
> URL: https://issues.apache.org/jira/browse/HADOOP-9513
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 1.1.2
> Environment: Red Hat 6.x / CentOS 6.x
>Reporter: Javi Roman
>Priority: Trivial
>  Labels: hadoop
>
> Missing execution bit in shell scripts:
> /usr/sbin/start-*.sh
> /usr/sbin/stop-*.sh
> /usr/sbin/slaves.sh
> /usr/sbin/update-hadoop-env.sh
> $ ls -l /usr/sbin/start-all.sh
> -rw-r--r-- 1 root root 1166 Jan 31 03:08 /usr/sbin/start-all.sh



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2017-11-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16268962#comment-16268962
 ] 

Daryn Sharp commented on HADOOP-9747:
-

I need to refresh my memory on why I thought this would break Ranger's bizarre 
use of the UGI.

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk.01.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15059:

Attachment: HADOOP-15059.002.patch

Heh, clearly the container-executor changes were broken.  Attaching a new patch 
that fixes those along with changes to address the checkstyle issues.  The 
findbugs issue is pre-existing and unrelated, and the unit test failure also 
appears to be unrelated.  It passes for me locally with the patch applied.

I manually tested the container-executor changes on a single-node cluster 
configured to run with LinuxContainerExecutor.

> 3.0 deployment cannot work with old version MR tar ball which break rolling 
> upgrade
> ---
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-15059.001.patch, HADOOP-15059.002.patch
>
>
> I tried to deploy a 3.0 cluster with the 2.9 MR tarball. The MR job failed 
> with the following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:220)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:212)
>   at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading 
> /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>   ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>   ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As 
> we claim "rolling upgrade" is supported in Hadoop 3, we should fix this 
> before we ship 3.0; otherwise all running MR applications will get stuck 
> during/after the upgrade.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16268990#comment-16268990
 ] 

Steve Loughran commented on HADOOP-14475:
-

I know you two want to log to a file, but what about adding an explicit 
metrics2-to-SLF4J sink, the way Codahale has?

Why? We could turn it on in our JUnit tests & then have the metrics pumped out 
to the text output file which is already generated today. If we could get them 
flushed and saved on teardown (how?) then you'd even get a summary for each 
test.
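A minimal sketch of such a sink (the class and logger names are mine; the 
MetricsSink interface is the real one):

{code:title=Sketch: a metrics2-to-SLF4J sink}
import org.apache.commons.configuration2.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical sketch. Note: trunk uses commons-configuration2; branch-2
// still uses the older org.apache.commons.configuration package.
public class Slf4jMetricsSink implements MetricsSink {
  private static final Logger LOG = LoggerFactory.getLogger("metrics");

  @Override
  public void init(SubsetConfiguration conf) {
    // nothing to configure for this sketch
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    StringBuilder sb = new StringBuilder(record.name());
    for (AbstractMetric metric : record.metrics()) {
      sb.append(' ').append(metric.name()).append('=').append(metric.value());
    }
    LOG.info(sb.toString());  // lands in the per-test text output file
  }

  @Override
  public void flush() {
    // SLF4J offers no flush; rely on the underlying appender's policy
  }
}
{code}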

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14775.007.patch, failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job that should use S3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-28 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-15059:

Attachment: HADOOP-15059.003.patch

One last tweak to get rid of the last checkstyle warning, although it makes the 
new line stick out like a sore thumb compared to the others.  I'd be happy to 
put the warnings back in just for the sake of code consistency if someone cares.

Filed YARN-7576 for the existing findbugs warning in trunk.


> 3.0 deployment cannot work with old version MR tar ball which break rolling 
> upgrade
> ---
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-15059.001.patch, HADOOP-15059.002.patch, 
> HADOOP-15059.003.patch
>
>
> I tried to deploy a 3.0 cluster with the 2.9 MR tarball. The MR job failed 
> with the following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:220)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.<init>(Configuration.java:212)
>   at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading 
> /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>   ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>   ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As 
> we claim "rolling upgrade" is supported in Hadoop 3, we should fix this 
> before we ship 3.0; otherwise all running MR applications will get stuck 
> during/after the upgrade.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15056) Fix for TestUnbuffer#testUnbufferException Test

2017-11-28 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269484#comment-16269484
 ] 

John Zhuge commented on HADOOP-15056:
-

Just back from vacation, sorry for the delay. I will review shortly.

> Fix for TestUnbuffer#testUnbufferException Test
> ---
>
> Key: HADOOP-15056
> URL: https://issues.apache.org/jira/browse/HADOOP-15056
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
> Attachments: HADOOP-15056.001.patch
>
>
> Hello! I am a new contributor and actually contributing to open source for 
> the very first time. :) 
> I pulled down Hadoop today, and when running the tests I encountered a 
> failure in the TestUnbuffer#testUnbufferException test.
> The unbuffer code has recently gone through some changes, and I believe this 
> test case may have been overlooked. Using today's git commit 
> (659e85e304d070f9908a96cf6a0e1cbafde6a434), and upon running the test case, 
> there is an expectation of an UnsupportedOperationException that is no 
> longer being thrown.
> It would appear that a test like this is valuable, so my initial proposed 
> patch did not remove it. Instead, I removed the conditions that were 
> guarding the cast from being able to fire -- as was the previous behavior. 
> Now when we encounter an object that doesn't have the UNBUFFERED 
> StreamCapability, it will throw an error as it did prior to the recent 
> changes. 
> Please review and let me know what you think! :D
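
To make the described behavior concrete, below is a minimal, self-contained 
sketch of the guard-free cast: the cast is attempted unconditionally, so a 
stream without unbuffer support fails loudly. The types here are simplified 
stand-ins, not the actual Hadoop stream classes.

{noformat}
// Minimal, self-contained sketch of the behavior described above: try the
// cast unconditionally so a stream without unbuffer support fails loudly.
// These types are simplified stand-ins, not the actual Hadoop classes.
public class UnbufferSketch {
  interface CanUnbuffer {
    void unbuffer();
  }

  static class PlainStream {
    // no unbuffer support
  }

  static void unbuffer(Object wrappedStream) {
    try {
      ((CanUnbuffer) wrappedStream).unbuffer();
    } catch (ClassCastException e) {
      // Restores the pre-change behavior the test expects.
      throw new UnsupportedOperationException(
          "this stream does not support unbuffering", e);
    }
  }

  public static void main(String[] args) {
    try {
      unbuffer(new PlainStream());
    } catch (UnsupportedOperationException expected) {
      System.out.println("Threw as expected: " + expected.getMessage());
    }
  }
}
{noformat}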



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269577#comment-16269577
 ] 

Daryn Sharp commented on HADOOP-15059:
--

*Long pondering:* The credentials format is not something that should have been 
trifled with without careful compatibility considerations.  A major 
compatibility break like this should have been preceded by a bridge release – 
i.e. proactively add v0/v1 support to popular 2.x releases but continue to use 
the v0 format.  Flip the 3.x default format to v1 and everything works.  But 
it's too late.

I'm concerned the compatibility issues may not be confined to just yarn.  
Perhaps a bit contrived, but I've seen enough absurd UGI/token handling that 
little surprises me anymore.
# 2.x/3.x code rewrites the creds, only updates the v0-file creds because 
that's all that existed since the dawn of security, leading to:
# If 2.x rewrites creds, 2.x subprocess works.
# If 2.x or 3.x rewrites creds, 3.x subprocess is oblivious to intended changes 
– it found the unchanged v1-file.
# If 3.x rewrites creds, 2.x subprocess blows up because the v0-file was 
incidentally changed to the v1 format.

For the sake of discussion: the ugi methods to read/write the creds would be 
the better place to transparently handle the dual-version files instead of all 
the yarn changes.  It won't avoid the aforementioned pitfalls, and it doesn't 
fix the stream methods.  Not to mention we are now stuck supporting two files.  
If the format changes again, do we have to support 3 files?  The whole point of 
writing the version into the file is to support multiple versions.
----
*Short answer:*  I think 3.0 has to be the bridge release and continue writing 
the v0 format while supporting reading the v1 format.  3.1 or 3.2 can flip the 
default format to v1.  Changing the file format appears to be cosmetic, so I 
don't see any harm in waiting for v1 to be the default.
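
To make the bridge idea concrete, here is a minimal sketch of a reader that 
dispatches on a version byte while writes stay pinned to v0. The 
leading-version-byte layout and the constants are assumptions for 
illustration only; the real format lives in 
org.apache.hadoop.security.Credentials.

{noformat}
// Minimal sketch of the "read both, write old" bridge discussed above.
// The layout (a leading version byte) is an illustrative assumption; the
// real format lives in org.apache.hadoop.security.Credentials.
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class TokenStorageBridgeSketch {
  static final byte VERSION_0 = 0;  // legacy 2.x writable format (assumed)
  static final byte VERSION_1 = 1;  // 3.x format (assumed)

  /** A bridge reader accepts every version it knows about. */
  static void read(DataInputStream in) throws IOException {
    byte version = in.readByte();
    switch (version) {
      case VERSION_0:  /* parse legacy records */     break;
      case VERSION_1:  /* parse new-format payload */ break;
      default:
        // A 2.x reader with no VERSION_1 case fails here -- the
        // "Unknown version 1 in token storage" in the trace above.
        throw new IOException("Unknown version " + version
            + " in token storage.");
    }
  }

  /** The bridge release keeps writing VERSION_0 so 2.x readers still work. */
  static byte defaultWriteVersion() {
    return VERSION_0;
  }

  public static void main(String[] args) throws IOException {
    read(new DataInputStream(new ByteArrayInputStream(new byte[] { 1 })));
    System.out.println("v1 file accepted; writes default to v"
        + defaultWriteVersion());
  }
}
{noformat}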

> 3.0 deployment cannot work with old version MR tar ball which break rolling 
> upgrade
> ---
>
> Key: HADOOP-15059
> URL: https://issues.apache.org/jira/browse/HADOOP-15059
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Junping Du
>Assignee: Jason Lowe
>Priority: Blocker
> Attachments: HADOOP-15059.001.patch, HADOOP-15059.002.patch, 
> HADOOP-15059.003.patch
>
>
> I tried to deploy 3.0 cluster with 2.9 MR tar ball. The MR job is failed 
> because following error:
> {noformat}
> 2017-11-21 12:42:50,911 INFO [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for 
> application appattempt_1511295641738_0003_01
> 2017-11-21 12:42:51,070 WARN [main] org.apache.hadoop.util.NativeCodeLoader: 
> Unable to load native-hadoop library for your platform... using builtin-java 
> classes where applicable
> 2017-11-21 12:42:51,118 FATAL [main] 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
> java.lang.RuntimeException: Unable to determine current user
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:254)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:220)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:212)
>   at 
> org.apache.hadoop.conf.Configuration.addResource(Configuration.java:888)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1638)
> Caused by: java.io.IOException: Exception reading 
> /tmp/nm-local-dir/usercache/jdu/appcache/application_1511295641738_0003/container_e03_1511295641738_0003_01_01/container_tokens
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:208)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:820)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:689)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:252)
>   ... 4 more
> Caused by: java.io.IOException: Unknown version 1 in token storage.
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageStream(Credentials.java:226)
>   at 
> org.apache.hadoop.security.Credentials.readTokenStorageFile(Credentials.java:205)
>   ... 8 more
> 2017-11-21 12:42:51,122 INFO [main] org.apache.hadoop.util.ExitUtil: Exiting 
> with status 1: java.lang.RuntimeException: Unable to determine current user
> {noformat}
> I think it is due to a token incompatibility change between 2.9 and 3.0. As we 
> claim "rolling upgrade" is supported in Hadoop 3, we should fix this before 
> we ship 3.0; otherwise all running MR applications will get stuck during/after 
> upgrade.

[jira] [Updated] (HADOOP-15072) Upgrade Apache Kerby version to 1.1.0

2017-11-28 Thread Jiajia Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiajia Li updated HADOOP-15072:
---
Attachment: HADOOP-15072-001.patch

> Upgrade Apache Kerby version to 1.1.0
> -
>
> Key: HADOOP-15072
> URL: https://issues.apache.org/jira/browse/HADOOP-15072
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-15072-001.patch
>
>
> Apache Kerby 1.1.0 implements cross-realm support, and also includes a GSSAPI 
> module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15072) Upgrade Apache Kerby version to 1.1.0

2017-11-28 Thread Jiajia Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiajia Li updated HADOOP-15072:
---
Status: Patch Available  (was: Open)

> Upgrade Apache Kerby version to 1.1.0
> -
>
> Key: HADOOP-15072
> URL: https://issues.apache.org/jira/browse/HADOOP-15072
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Jiajia Li
>Assignee: Jiajia Li
> Attachments: HADOOP-15072-001.patch
>
>
> Apache Kerby 1.1.0 implements cross-realm support, and also includes a GSSAPI 
> module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-11-28 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16270034#comment-16270034
 ] 

Aaron Fabbri commented on HADOOP-14475:
---

Testing the v15 patch, I'm seeing occasional NPEs in test shutdown hooks where 
stats are incremented:

{noformat}
2017-11-28 18:35:32,936 [JUnit] INFO  v2.MiniMRYarnCluster 
(MiniMRYarnCluster.java:serviceStart(222)) - MiniMRYARN HistoryServer address: 
fabbri-mbp.lan:57551
2017-11-28 18:35:32,936 [JUnit] INFO  v2.MiniMRYarnCluster 
(MiniMRYarnCluster.java:serviceStart(224)) - MiniMRYARN HistoryServer web 
address: fabbri-MBP.lan:57550
2017-11-28 18:35:32,945 [Thread-1372] DEBUG s3a.S3ATestUtils 
(S3ATestUtils.java:maybeEnableS3Guard(402)) - Enabling S3Guard, 
authoritative=false, 
implementation=org.apache.hadoop.fs.s3a.s3guard.LocalMetadataStore
2017-11-28 18:35:32,945 [Thread-1372] INFO  s3a.S3ATestUtils 
(S3ATestUtils.java:enableInconsistentS3Client(793)) - Enabling inconsistent S3 
client
2017-11-28 18:35:32,952 [Thread-1372] INFO  contract.AbstractFSContractTestBase 
(AbstractFSContractTestBase.java:setup(184)) - Test filesystem = 
s3a://fabbri-new implemented by S3AFileSystem{uri=s3a://fabbri-new, 
workingDir=s3a://fabbri-new/user/fabbri, inputPolicy=normal, partSize=5242880, 
enableMultiObjectsDelete=true, maxKeys=5000, readAhead=65536, 
blockSize=33554432, multiPartThreshold=5242880, 
serverSideEncryptionAlgorithm='NONE', 
blockFactory=org.apache.hadoop.fs.s3a.S3ADataBlocks$ArrayBlockFactory@11923456, 
metastore=null, authoritative=false, useListV1=false, magicCommitter=true, 
boundedExecutor=BlockingThreadPoolExecutorService{SemaphoredDelegatingExecutor{permitCount=25,
 available=25, waiting=0}, activeCount=0}, 
unboundedExecutor=java.util.concurrent.ThreadPoolExecutor@771d93da[Terminated, 
pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0], 
statistics {75761784 bytes read, 75796134 bytes written, 1197 read ops, 0 large 
read ops, 2330 write ops}}
2017-11-28 18:35:32,953 [Thread-1372] DEBUG s3a.S3AFileSystem 
(S3AFileSystem.java:innerMkdirs(1978)) - Making directory: s3a://fabbri-new/test
2017-11-28 18:35:32,953 [Thread-1372] ERROR contract.ContractTestUtils 
(ContractTestUtils.java:cleanup(376)) - Error deleting in TEARDOWN - /test: 
java.lang.NullPointerException
java.lang.NullPointerException
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1109)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.incrementStatistic(S3AFileSystem.java:1100)
at 
org.apache.hadoop.fs.s3a.S3AFileSystem.exists(S3AFileSystem.java:2879)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.rm(ContractTestUtils.java:397)
at 
org.apache.hadoop.fs.contract.ContractTestUtils.cleanup(ContractTestUtils.java:374)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.deleteTestDirInTeardown(AbstractFSContractTestBase.java:213)
at 
org.apache.hadoop.fs.contract.AbstractFSContractTestBase.teardown(AbstractFSContractTestBase.java:204)
at 
org.apache.hadoop.fs.s3a.AbstractS3ATestBase.teardown(AbstractS3ATestBase.java:56)
at 
org.apache.hadoop.fs.s3a.commit.AbstractCommitITest.teardown(AbstractCommitITest.java:185)
at sun.reflect.GeneratedMethodAccessor16.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
{noformat}

A quick glance at the code suggests {{instrumentation}} is null.  It gets set in 
{{initialize()}} and closed/set to null in {{close()}}, so I wonder if this 
is test code using the FS after close()?  That would be easy enough to test.
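
If the use-after-close theory holds, a guard along these lines would turn the 
NPE into a no-op. This is a self-contained sketch with stand-in types, not 
the actual S3AFileSystem code.

{noformat}
// Self-contained sketch of the use-after-close guard suggested above.
// Names are hypothetical stand-ins; this is not the S3AFileSystem source.
public class InstrumentationGuardSketch {
  /** Stand-in for S3AInstrumentation. */
  static class Instrumentation {
    long counter;
    void increment(long count) { counter += count; }
  }

  private volatile Instrumentation instrumentation = new Instrumentation();

  /** Guarded increment: becomes a no-op once close() nulls the field. */
  public void incrementStatistic(long count) {
    Instrumentation local = instrumentation;  // read the field exactly once
    if (local == null) {
      return;  // FS already closed; drop late updates from teardown paths
    }
    local.increment(count);
  }

  public void close() {
    instrumentation = null;  // mirrors close() setting the field to null
  }

  public static void main(String[] args) {
    InstrumentationGuardSketch fs = new InstrumentationGuardSketch();
    fs.incrementStatistic(1);
    fs.close();
    fs.incrementStatistic(1);  // previously an NPE; now silently ignored
    System.out.println("no NPE after close()");
  }
}
{noformat}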

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: HADOOP-14475-003.patch, HADOOP-14475.002.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, HADOOP-14475.008.patch, 
> HADOOP-14475.009.patch, HADOOP-14475.010.patch, HADOOP-14475.011.patch, 
> HADOOP-14475.012.patch, HADOOP-14475.013.patch, HADOOP-14475.014.patch, 
> HADOOP-14475.015.patch, HADOOP-14775.007.patch, failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> 

[jira] [Updated] (HADOOP-15056) Fix for TestUnbuffer#testUnbufferException Test

2017-11-28 Thread Jack Bearden (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jack Bearden updated HADOOP-15056:
--
Status: Patch Available  (was: In Progress)

> Fix for TestUnbuffer#testUnbufferException Test
> ---
>
> Key: HADOOP-15056
> URL: https://issues.apache.org/jira/browse/HADOOP-15056
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Jack Bearden
>Assignee: Jack Bearden
>Priority: Minor
> Attachments: HADOOP-15056.001.patch
>
>
> Hello! I am a new contributor and actually contributing to open source for 
> the very first time. :) 
> I pulled down Hadoop today and when running the tests I encountered a failure 
> with the TestUnbuffer#testUnbufferException test.
> The unbuffer code has recently gone through some changes and I believe this 
> test case may have been overlooked. Using today's git commit 
> (659e85e304d070f9908a96cf6a0e1cbafde6a434) and running the test case, I found 
> that the test expects an UnsupportedOperationException that is no longer 
> being thrown. 
> It would appear that a test like this would be valuable, so my initial 
> proposed patch did not remove it. Instead, I removed the conditions that were 
> guarding the cast from being able to fire -- as was the previous behavior. 
> Now when we encounter an object that doesn't have the UNBUFFERED 
> StreamCapability, it will throw an error as it did prior to the recent 
> changes. 
> Please review and let me know what you think! :D



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15059) 3.0 deployment cannot work with old version MR tar ball which break rolling upgrade

2017-11-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15059?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269500#comment-16269500
 ] 

genericqa commented on HADOOP-15059:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  9m 
35s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 51s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
17s{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api in 
trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 12m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} root: The patch generated 0 new + 363 unchanged - 7 
fixed = 363 total (was 370) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
9m 48s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
50s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 59s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
44s{color} | {color:green} hadoop-yarn-api in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 17m  
5s{color} | {color:green} hadoop-yarn-server-nodemanager in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}124m 13s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15059 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12899666/HADOOP-15059.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  cc  |
| uname | 

[jira] [Commented] (HADOOP-14742) Document multi-URI replication Inode for ViewFS

2017-11-28 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269902#comment-16269902
 ] 

Chris Douglas commented on HADOOP-14742:


[~jira.shegalov]/[~mingma]/[~zhz]: Is there any existing documentation we could 
use to bootstrap this? Even a page describing the basic functionality with a 
table of config parameters would be helpful to users.

> Document multi-URI replication Inode for ViewFS
> ---
>
> Key: HADOOP-14742
> URL: https://issues.apache.org/jira/browse/HADOOP-14742
> Project: Hadoop Common
>  Issue Type: Task
>  Components: documentation, viewfs
>Affects Versions: 3.0.0-beta1
>Reporter: Chris Douglas
>
> HADOOP-12077 added client-side "replication" capabilities to ViewFS. Its 
> semantics and configuration should be documented.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15072) Upgrade Apache Kerby version to 1.1.0

2017-11-28 Thread Jiajia Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jiajia Li updated HADOOP-15072:
---
Issue Type: Improvement  (was: New Feature)

> Upgrade Apache Kerby version to 1.1.0
> -
>
> Key: HADOOP-15072
> URL: https://issues.apache.org/jira/browse/HADOOP-15072
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Jiajia Li
>Assignee: Jiajia Li
>
> Apache Kerby 1.1.0 implements cross-realm support, and also includes a GSSAPI 
> module.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15072) Upgrade Apache Kerby version to 1.1.0

2017-11-28 Thread Jiajia Li (JIRA)
Jiajia Li created HADOOP-15072:
--

 Summary: Upgrade Apache Kerby version to 1.1.0
 Key: HADOOP-15072
 URL: https://issues.apache.org/jira/browse/HADOOP-15072
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Jiajia Li
Assignee: Jiajia Li


Apache Kerby 1.1.0 implements cross-realm support, and also includes a GSSAPI 
module.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15068) cancelToken and renewToken should use shortUserName consistently

2017-11-28 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269743#comment-16269743
 ] 

Daryn Sharp commented on HADOOP-15068:
--

I wanted to fix this very issue years ago but forget why it was trickier to 
fix than expected.  I vaguely recall an auth_to_local issue, perhaps when 
re-parsing the short name of a full principal.  Maybe it'll turn up if you make 
the proposed change and run all the security tests.  Perhaps it was in yarn.  
Be careful disturbing the sleeping security dragons.

FWIW, the inconsistencies run a bit deeper.  Cancelling a token isn't entirely 
correct either.  It checks the auth principal's short name against the renewer 
field, but the full auth principal against the owner.  So renewer is assumed 
short, owner is assumed full.  As a normal auth'ed user, I can cancel tokens 
directly acquired by me because the owner is my full principal.  Yet I can't 
cancel proxy tokens that "are me" because a proxy token's owner is the short 
name which doesn't match my full principal.  Ugh.
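
To illustrate the proposed change, here is a minimal sketch that normalizes 
the caller to a short name before the renewer comparison, so renewToken and 
cancelToken would behave the same. The helper itself is hypothetical; 
createRemoteUser/getShortUserName are the real UGI calls, and the mapping of 
a full principal to a short name is subject to the configured auth_to_local 
rules.

{noformat}
// Hypothetical helper for discussion; not the secret manager's actual code.
// Relies on the real UserGroupInformation API; reducing a full principal
// to a short name depends on the configured auth_to_local rules.
import org.apache.hadoop.security.UserGroupInformation;

public final class RenewerNameSketch {
  private RenewerNameSketch() {}

  /** Normalize a caller (full principal or short name) to a short name. */
  public static String toShortName(String caller) {
    return UserGroupInformation.createRemoteUser(caller).getShortUserName();
  }

  /** One comparison used by both renewToken and cancelToken. */
  public static boolean matchesRenewer(String caller, String tokenRenewer) {
    return toShortName(caller).equals(tokenRenewer);
  }
}
{noformat}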


> cancelToken and renewToken should use shortUserName consistently
> 
>
> Key: HADOOP-15068
> URL: https://issues.apache.org/jira/browse/HADOOP-15068
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 2.8.2
>Reporter: Vihang Karajgaonkar
>
>  {{AbstractDelegationTokenSecretManager}} is used by many external projects 
> including Hive. This class provides implementations of renewToken and 
> cancelToken which are used for the delegation token management. The methods 
> are semantically inconsistent. Specifically, when you call cancelToken, the 
> string value of the canceller is used to get the Kerberos shortname and then 
> compared with the renewer value of the token to be cancelled. While in case 
> of renewToken, the string value which is passed in is used directly to 
> compare with the renewer value of the token.
> This inconsistency means that applications need to know about this subtle 
> difference and pass in the shortname while renewing the token, while it can 
> pass the full kerberos username during cancellation. Can we change the 
> renewToken method such that it uses the shortName similar to the cancelToken 
> method?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15056) Fix for TestUnbuffer#testUnbufferException Test

2017-11-28 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16269758#comment-16269758
 ] 

genericqa commented on HADOOP-15056:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 46s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green}  
8m 56s{color} | {color:green} patch has no errors when building and testing our 
client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
58s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 38s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}185m 57s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15056 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12898616/HADOOP-15056.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 76d3d321da89 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 30941d9 |
| maven | version: Apache Maven