[jira] [Reopened] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-08 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S reopened HADOOP-12687:


Reverted the commit and am reopening the issue. 

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get 
> timeout which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.
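
As an illustration of the failure mode (a standalone sketch, not the Hadoop patch):

{code}
import java.net.InetAddress;

public class LoopbackLookupDemo {
  public static void main(String[] args) throws Exception {
    // getByName(null) resolves an address on the loopback interface. The
    // name and address returned track whichever loopback entry /etc/hosts
    // lists first, so on a machine with multiple loopback lines this need
    // not be the machine's own hostname.
    InetAddress loopback = InetAddress.getByName(null);
    System.out.println(loopback.getHostName() + " -> "
        + loopback.getHostAddress());
  }
}
{code}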





[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-08 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088944#comment-15088944
 ] 

Rohith Sharma K S commented on HADOOP-12687:


All the build VMs should have a trailing "." at the end of the hostname in the 
/etc/hosts file. I verified the test cases after adding the trailing dot ".", 
and all tests pass. I think we need to raise an INFRA JIRA to change the 
hostname on the VMs.
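
For illustration, a hypothetical {{/etc/hosts}} layout with the trailing dot described above (the hostnames are made up, not the actual build VMs):

{noformat}
127.0.0.1   localhost
127.0.0.1   asf-build-vm.example.org.   asf-build-vm
{noformat}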

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get 
> timeout which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.





[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088947#comment-15088947
 ] 

Hudson commented on HADOOP-12687:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9072 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9072/])
Revert "HADOOP-12687. SecureUtil#QualifiedHostResolver#getByName should 
(rohithsharmaks: rev ed18527e38438113fdf2f48b08be5ec283a5f481)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get 
> timeout which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-08 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-12687:
---
Fix Version/s: (was: 2.9.0)

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get 
> timeout which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-08 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-12687:
---
Hadoop Flags:   (was: Reviewed)

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get 
> timeout which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.





[jira] [Commented] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089704#comment-15089704
 ] 

Steve Loughran commented on HADOOP-12622:
-

Looks OK. If there's one thing I don't like, it's that you have strings in the 
policies that you have copied and pasted into the tests for validation. I'd 
prefer you used constant strings in the production code and used the same 
strings in the tests. Why? Stops the tests failing if you ever change the 
message.
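
A minimal sketch of the pattern being asked for (the class name and message text here are made up):

{code}
public class ExampleRetryPolicy {
  // Single source of truth for the reason text. The test asserts against
  // this constant rather than a pasted copy, so rewording the message can
  // never silently break the test.
  public static final String FAILED_REASON_PREFIX = "retry failed due to ";

  public String describeFailure(Exception e) {
    return FAILED_REASON_PREFIX + e.getMessage();
  }
}

// In the test:
//   assertTrue(reason.startsWith(ExampleRetryPolicy.FAILED_REASON_PREFIX));
{code}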

> RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on 
> retry failed reason or the log from RMProxy's retry could be very misleading.
> --
>
> Key: HADOOP-12622
> URL: https://issues.apache.org/jira/browse/HADOOP-12622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Attachments: HADOOP-12622-v2.patch, HADOOP-12622-v3.1.patch, 
> HADOOP-12622-v3.patch, HADOOP-12622.patch
>
>
> In debugging an NM retrying its connection to the RM (non-HA), the NM log 
> during RM downtime is very misleading:
> {noformat}
> 2015-12-07 11:37:14,098 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:15,099 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:16,101 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:17,103 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:18,105 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:19,107 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:20,109 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:21,112 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:22,113 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:23,115 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:54,120 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:55,121 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:56,123 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:57,125 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:58,126 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> 

[jira] [Commented] (HADOOP-12697) IPC retry policies should recognise that SASL auth failures are unrecoverable

2016-01-08 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089717#comment-15089717
 ] 

Sergey Shelukhin commented on HADOOP-12697:
---

The retries in this case were per-exception (see YARN RMProxy), but they don't 
specify SaslException/GSSException anywhere. I am not sure whether this is an 
issue with the retry-policy setup in YARN or in ipc.Client.

> IPC retry policies should recognise that SASL auth failures are unrecoverable
> -
>
> Key: HADOOP-12697
> URL: https://issues.apache.org/jira/browse/HADOOP-12697
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
> Environment: Cluster with kerberos on and client not calling with the 
> right credentials
>Reporter: Steve Loughran
>Priority: Minor
>
> SLIDER-1050 shows that if you don't have the right Kerberos settings, the 
> YARN client IPC channel blocks trying to talk to the RM, retrying 
> repeatedly:
> {noformat}
> 2016-01-07 02:50:45,111 [main] WARN  ipc.Client - Exception encountered while 
> connecting to the server :
>  javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException:
>  No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> {noformat}
> SASL exceptions need to be recognised as irreconcilable authentication 
> failures, rather than generic IOEs that might go away if you retry.
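
A minimal sketch of the kind of cause-chain check being proposed (the helper is hypothetical, not existing Hadoop API):

{code}
import javax.security.sasl.SaslException;
import org.ietf.jgss.GSSException;

final class AuthFailures {
  // Walk the cause chain: a SaslException or GSSException anywhere in it
  // means a retry cannot help, so a retry policy should fail fast.
  static boolean isUnrecoverable(Throwable t) {
    while (t != null) {
      if (t instanceof SaslException || t instanceof GSSException) {
        return true;
      }
      t = t.getCause();
    }
    return false;
  }
}
{code}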





[jira] [Updated] (HADOOP-12651) Replace dev-support with wrappers to Yetus

2016-01-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12651:
--
Release Note: 

* releasedocmaker.py is now dev-support/bin/releasedocmaker
* shelldocs.py is now dev-support/bin/shelldocs
* smart-apply-patch.sh is now dev-support/bin/smart-apply-patch
* test-patch.sh is now dev-support/bin/test-patch
* Setting YETUS_HOME to a previously installed version of Apache Yetus will use 
that version rather than downloading one

  was:

* releasedocmaker.py is now dev-support/bin/releasedocmaker
* shelldocs.py is now dev-support/bin/shelldocs
* smart-apply-patch.sh is now dev-support/bin/smart-apply-patch
* test-patch.sh is now dev-support/bin/test-patch



> Replace dev-support with wrappers to Yetus
> --
>
> Key: HADOOP-12651
> URL: https://issues.apache.org/jira/browse/HADOOP-12651
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12651.00.patch, HADOOP-12651.01.patch, 
> HADOOP-12651.02.patch
>
>
> Now that Yetus has had a release, we should rip out the components that make 
> it up from dev-support and replace them with wrappers.  The wrappers should:
> * default to a sane version
> * allow for version overrides via an env var
> * download into patchprocess
> * execute with the given parameters
> Marking this as an incompatible change, since we should also remove the 
> filename extensions and move these into a bin directory for better 
> maintainability going forward.





[jira] [Updated] (HADOOP-12651) Replace dev-support with wrappers to Yetus

2016-01-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12651:
--
Attachment: HADOOP-12651.02.patch

-02:
* whitespace issues fixed
* added the ability to use YETUS_HOME/bin/x
* added some short-circuiting when already downloaded

> Replace dev-support with wrappers to Yetus
> --
>
> Key: HADOOP-12651
> URL: https://issues.apache.org/jira/browse/HADOOP-12651
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12651.00.patch, HADOOP-12651.01.patch, 
> HADOOP-12651.02.patch
>
>
> Now that Yetus has had a release, we should rip out the components that make 
> it up from dev-support and replace them with wrappers.  The wrappers should:
> * default to a sane version
> * allow for version overrides via an env var
> * download into patchprocess
> * execute with the given parameters
> Marking this as an incompatible change, since we should also remove the 
> filename extensions and move these into a bin directory for better 
> maintainability going forward.





[jira] [Commented] (HADOOP-12689) S3 filesystem operations stopped working correctly

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089708#comment-15089708
 ] 

Steve Loughran commented on HADOOP-12689:
-

removing -1, as yes, there are tests now. I just didn't want those tests to get 
forgotten about.

> S3 filesystem operations stopped working correctly
> --
>
> Key: HADOOP-12689
> URL: https://issues.apache.org/jira/browse/HADOOP-12689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12689.01.patch
>
>
> HADOOP-10542 was resolved by replacing "return null;" with throwing  
> IOException.   This causes several S3 filesystem operations to fail (possibly 
> more code is expecting that null return value; these are just the calls I 
> noticed):
> S3FileSystem.getFileStatus() (which no longer raises FileNotFoundException 
> but instead IOException)
> FileSystem.exists() (which no longer returns false but instead raises 
> IOException)
> S3FileSystem.create() (which no longer succeeds but instead raises 
> IOException)
> Run command:
> hadoop distcp hdfs://localhost:9000/test s3://xxx:y...@com.bar.foo/
> Resulting stack trace:
> 2015-12-11 10:04:34,030 FATAL [IPC Server handler 6 on 44861] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
> attempt_1449826461866_0005_m_06_0 - exited : java.io.IOException: /test 
> doesn't exist
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:170)
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy17.retrieveINode(Unknown Source)
> at org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:230)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> changing the "raise IOE..." to "return null" fixes all of the above code 
> sites and allows distcp to succeed.
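
For context, the contract the description relies on is that a missing path surfaces as {{FileNotFoundException}}, which callers such as {{FileSystem#exists}} map to {{false}}; a generic {{IOException}} breaks that mapping. A sketch of the expected shape:

{code}
// Inside a FileSystem implementation (sketch; FileSystem.exists() in
// hadoop-common is written along these lines):
public boolean exists(Path f) throws IOException {
  try {
    getFileStatus(f);   // must throw FileNotFoundException for a missing path
    return true;
  } catch (FileNotFoundException e) {
    return false;       // any other IOException propagates instead
  }
}
{code}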





[jira] [Commented] (HADOOP-12101) Add automatic search of default Configuration variables to TestConfigurationFieldsBase

2016-01-08 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089699#comment-15089699
 ] 

Ray Chiang commented on HADOOP-12101:
-

RE: Failing unit tests with JDK8

Both tests pass in my tree using JDK8.

> Add automatic search of default Configuration variables to 
> TestConfigurationFieldsBase
> --
>
> Key: HADOOP-12101
> URL: https://issues.apache.org/jira/browse/HADOOP-12101
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.0
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>  Labels: supportability
> Attachments: HADOOP-12101.001.patch, HADOOP-12101.002.patch, 
> HADOOP-12101.003.patch, HADOOP-12101.004.patch, HADOOP-12101.005.patch, 
> HADOOP-12101.006.patch, HADOOP-12101.007.patch
>
>
> Add functionality so that, given a Configuration variable FOO, we at least 
> check the xml file value against DEFAULT_FOO.
> Without waivers and a mapping for exceptions, this can probably never be a 
> test method that generates actual errors.
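
A minimal sketch of the idea (not the actual TestConfigurationFieldsBase code; the helper is illustrative):

{code}
import java.lang.reflect.Field;
import org.apache.hadoop.conf.Configuration;

final class DefaultComparer {
  // For each public constant named DEFAULT_FOO, find the sibling constant
  // FOO holding the key name, then compare the xml default to the Java one.
  static void compareDefaults(Class<?> confClass, Configuration conf)
      throws Exception {
    for (Field field : confClass.getFields()) {
      if (!field.getName().startsWith("DEFAULT_")) {
        continue;
      }
      String keyConstant = field.getName().substring("DEFAULT_".length());
      String key = (String) confClass.getField(keyConstant).get(null);
      String xmlValue = conf.get(key);        // value from *-default.xml
      Object javaValue = field.get(null);     // value of DEFAULT_FOO
      if (xmlValue != null && !xmlValue.equals(String.valueOf(javaValue))) {
        // Report rather than fail, per the waiver caveat above.
        System.out.println("Mismatch for " + key + ": xml=" + xmlValue
            + ", java=" + javaValue);
      }
    }
  }
}
{code}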





[jira] [Commented] (HADOOP-12696) Add Tests for S3FileSystem Contract

2016-01-08 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089852#comment-15089852
 ] 

Ravi Prakash commented on HADOOP-12696:
---

Hi Matt!
Thanks for the patch. You seem to have copied s3.xml from hdfs.xml. I wonder 
whether s3n.xml might be a better XML to copy; e.g., is-blobstore should 
probably be true for S3 too, shouldn't it?
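
For reference, contract options of this kind live in the per-filesystem contract XML, along these lines (a sketch; which options the s3 contract should set is exactly what's in question):

{code}
<property>
  <name>fs.contract.is-blobstore</name>
  <value>true</value>
</property>
{code}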


> Add Tests for S3FileSystem Contract
> ---
>
> Key: HADOOP-12696
> URL: https://issues.apache.org/jira/browse/HADOOP-12696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12696.01.patch, log.fail2, log.succ
>
>
> The regression fixed by HADOOP-12689 had no unit tests to expose the problem. 
>   Add filesystem tests according to 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/testing.html
>  for the s3 scheme.





[jira] [Commented] (HADOOP-12689) S3 filesystem operations stopped working correctly

2016-01-08 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089792#comment-15089792
 ] 

Ravi Prakash commented on HADOOP-12689:
---

Thanks for your consideration Steve! I appreciate all the great work you did 
for the contract tests and your efforts to keep the S3 implementations stable. 
I'm testing and reviewing Matt's patch on HADOOP-12696. Let's continue there.

> S3 filesystem operations stopped working correctly
> --
>
> Key: HADOOP-12689
> URL: https://issues.apache.org/jira/browse/HADOOP-12689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12689.01.patch
>
>
> HADOOP-10542 was resolved by replacing "return null;" with throwing  
> IOException.   This causes several S3 filesystem operations to fail (possibly 
> more code is expecting that null return value; these are just the calls I 
> noticed):
> S3FileSystem.getFileStatus() (which no longer raises FileNotFoundException 
> but instead IOException)
> FileSystem.exists() (which no longer returns false but instead raises 
> IOException)
> S3FileSystem.create() (which no longer succeeds but instead raises 
> IOException)
> Run command:
> hadoop distcp hdfs://localhost:9000/test s3://xxx:y...@com.bar.foo/
> Resulting stack trace:
> 2015-12-11 10:04:34,030 FATAL [IPC Server handler 6 on 44861] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
> attempt_1449826461866_0005_m_06_0 - exited : java.io.IOException: /test 
> doesn't exist
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:170)
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy17.retrieveINode(Unknown Source)
> at org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:230)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> changing the "raise IOE..." to "return null" fixes all of the above code 
> sites and allows distcp to succeed.





[jira] [Commented] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2016-01-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089863#comment-15089863
 ] 

Chris Nauroth commented on HADOOP-12551:


[~dchickabasapa], thank you for the patch.  This looks good overall.  Just 2 
comments:

# I think there are going to be some code-style warnings, such as lines that 
exceed the 80-character limit.  Clicking Submit Patch for a pre-commit run 
would reveal the specifics.
# The multi-threaded tests do this:
{code}
// Setting the value to 1000. This is to ensure that
// delete thread would get a chance to run at some point
// and would be able to delete the test file.
int maxCount = 1000;
int runCount = 1;
while (runCount < maxCount) {
  inputStream = fs.open(testPath);
  inputStream.close();
  runCount++;
}
{code}
Potentially a more reliable way to do this is:
{code}
while (t.isAlive()) {
  inputStream = fs.open(testPath);
  inputStream.close();
}
inputStream = fs.open(testPath);
inputStream.close();
{code}
...where {{t}} refers to the {{DeleteThread}} instance.  This would keep trying 
until the thread runs to completion.  The extra {{open}} outside the loop is 
for the edge case that the loop body doesn't get a chance to execute after the 
thread completes the delete.  The extra {{open}} would throw the 
{{FileNotFoundException}} in that case.

> Introduce FileNotFoundException for WASB FileSystem API
> ---
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12551.001.patch
>
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> execution. This JIRA fixes that behavior.
> This JIRA also re-examines other Azure storage store calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), delete(), 
> open(), and listStatus() APIs.





[jira] [Updated] (HADOOP-12696) Add Tests for S3FileSystem Contract

2016-01-08 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12696:
-
Attachment: log.hdfs
log.s3n
log.s3a

I dunno... maybe, maybe not. I did see ContractOptions.IS_BLOBSTORE in a few 
places in the test code when I was reviewing the failures, but it didn't seem 
like it would have fixed the test issues whichever way it pointed.

But it is easy enough to run the tests with the various contracts. I removed 
all the forced-skip @Overrides from the test code and re-ran the tests with 
the original s3.xml (== hdfs.xml) and also with s3a.xml and s3n.xml. The same 
10 tests still have issues, but the issues are different ;)

I attached the output from all three test runs. Summary:

hdfs  Tests run: 47, Failures: 3, Errors: 7, Skipped: 0
s3n   Tests run: 47, Failures: 2, Errors: 6, Skipped: 2
s3a   Tests run: 47, Failures: 3, Errors: 7, Skipped: 2

The s3n contract does seem to account for two of the errors; instead of 
failing, those tests are "auto skipped": testCreatedFileIsImmediatelyVisible 
(which I think is the atomic-file option) and testOverwriteNonEmptyDirectory 
(not sure which option). I am not sure I understand the skip code... it is 
sort of an intermediate state between fail and pass.

matt


> Add Tests for S3FileSystem Contract
> ---
>
> Key: HADOOP-12696
> URL: https://issues.apache.org/jira/browse/HADOOP-12696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12696.01.patch, log.fail2, log.hdfs, log.s3a, 
> log.s3n, log.succ
>
>
> The regression fixed by HADOOP-12689 had no unit tests to expose the problem. 
>   Add filesystem tests according to 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/testing.html
>  for the s3 scheme.





[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2016-01-08 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089972#comment-15089972
 ] 

Jason Lowe commented on HADOOP-12107:
-

I recently ran across this on a NodeManager running 2.6 that had been up for a 
while.  Any objections to this being picked back to 2.6 and 2.7?

> long running apps may have a huge number of StatisticsData instances under 
> FileSystem
> -
>
> Key: HADOOP-12107
> URL: https://issues.apache.org/jira/browse/HADOOP-12107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
> HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch
>
>
> We observed with some of our apps (non-mapreduce apps that use filesystems) 
> that they end up accumulating a huge memory footprint coming from 
> {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
> {{Statistics}}).
> Although the thread reference from {{StatisticsData}} is a weak reference, 
> and thus can get cleared once a thread goes away, the actual 
> {{StatisticsData}} instances in the list won't get cleared until any of these 
> following methods is called on {{Statistics}}:
> - {{getBytesRead()}}
> - {{getBytesWritten()}}
> - {{getReadOps()}}
> - {{getLargeReadOps()}}
> - {{getWriteOps()}}
> - {{toString()}}
> It is quite possible to have an application that interacts with a filesystem 
> but does not call any of these methods on the {{Statistics}}. If such an 
> application runs for a long time and has a large amount of thread churn, the 
> memory footprint will grow significantly.
> The current workaround is either to limit the thread churn or to invoke these 
> operations occasionally to pare down the memory. However, this is still a 
> deficiency with {{FileSystem$Statistics}} itself in that the memory is 
> controlled only as a side effect of those operations.
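
A minimal sketch of that workaround, using the public {{FileSystem.Statistics}} accessors (the helper class is made up):

{code}
import org.apache.hadoop.fs.FileSystem;

final class StatsPruner {
  // Any aggregating getter walks the per-thread list and, as a side effect,
  // drops StatisticsData entries whose threads have gone away.
  static void prune() {
    for (FileSystem.Statistics stats : FileSystem.getAllStatistics()) {
      stats.getBytesRead();
    }
  }
}
{code}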





[jira] [Commented] (HADOOP-12696) Add Tests for S3FileSystem Contract

2016-01-08 Thread Matthew Paduano (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089922#comment-15089922
 ] 

Matthew Paduano commented on HADOOP-12696:
--

oops... I did not intend to reply to that address  :/

but since I did, I should also mention that I think those 10
issues can be fixed in one of S3FileSystem, S3InputStream
and/or the contract and AbstractFSContractTestBase.

> Add Tests for S3FileSystem Contract
> ---
>
> Key: HADOOP-12696
> URL: https://issues.apache.org/jira/browse/HADOOP-12696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12696.01.patch, log.fail2, log.hdfs, log.s3a, 
> log.s3n, log.succ
>
>
> The regression fixed by HADOOP-12689 had no unit tests to expose the problem. 
>   Add filesystem tests according to 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/testing.html
>  for the s3 scheme.





[jira] [Commented] (HADOOP-12651) Replace dev-support with wrappers to Yetus

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090116#comment-15090116
 ] 

Hadoop QA commented on HADOOP-12651:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
48s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 34s 
{color} | {color:red} root in trunk failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 59s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 9m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} pylint {color} | {color:green} 0m 2s 
{color} | {color:green} There were no new pylint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
11s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 3m 35s 
{color} | {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 56s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 25m 41s {color} 
| {color:red} root in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 79m 45s {color} 
| {color:red} root in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 198m 47s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
| JDK v1.7.0_91 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | hadoop.hdfs.server.namenode.TestFsck |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781270/HADOOP-12651.02.patch 
|
| JIRA Issue | HADOOP-12651 |
| Optional Tests |  asflicense  shellcheck  pylint  mvnsite  unit  compile  
javac  javadoc  mvninstall  xml  |
| uname | Linux 6984d4661d43 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 

[jira] [Commented] (HADOOP-12107) long running apps may have a huge number of StatisticsData instances under FileSystem

2016-01-08 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090150#comment-15090150
 ] 

Sangjin Lee commented on HADOOP-12107:
--

+1

> long running apps may have a huge number of StatisticsData instances under 
> FileSystem
> -
>
> Key: HADOOP-12107
> URL: https://issues.apache.org/jira/browse/HADOOP-12107
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.7.0
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12107.001.patch, HADOOP-12107.002.patch, 
> HADOOP-12107.003.patch, HADOOP-12107.004.patch, HADOOP-12107.005.patch
>
>
> We observed with some of our apps (non-mapreduce apps that use filesystems) 
> that they end up accumulating a huge memory footprint coming from 
> {{FileSystem$Statistics$StatisticsData}} (in the {{allData}} list of 
> {{Statistics}}).
> Although the thread reference from {{StatisticsData}} is a weak reference, 
> and thus can get cleared once a thread goes away, the actual 
> {{StatisticsData}} instances in the list won't get cleared until any of the 
> following methods is called on {{Statistics}}:
> - {{getBytesRead()}}
> - {{getBytesWritten()}}
> - {{getReadOps()}}
> - {{getLargeReadOps()}}
> - {{getWriteOps()}}
> - {{toString()}}
> It is quite possible to have an application that interacts with a filesystem 
> but does not call any of these methods on the {{Statistics}}. If such an 
> application runs for a long time and has a large amount of thread churn, the 
> memory footprint will grow significantly.
> The current workaround is either to limit the thread churn or to invoke these 
> operations occasionally to pare down the memory. However, this is still a 
> deficiency with {{FileSystem$Statistics}} itself in that the memory is 
> controlled only as a side effect of those operations.





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2016-01-08 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Attachment: HADOOP-12551.002.patch

[~cnauroth] Thanks a lot for the review. Attaching a patch with the 
multi-threaded test fix you suggested.

While submitting the patch I built it with the checkstyle flag but didn't find 
any new style-check warnings, so I am kicking off a QA build to see whether I 
am missing anything.

Testing: The patch contains new tests to verify the changes made. The changes 
have also been tested against the FileSystemContractLive tests for both Block 
Blobs and Page Blobs.

> Introduce FileNotFoundException for WASB FileSystem API
> ---
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12551.001.patch, HADOOP-12551.002.patch
>
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> execution. This JIRA fixes that behavior.
> This JIRA also re-examines other Azure storage store calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), delete(), 
> open(), and listStatus() APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090234#comment-15090234
 ] 

Haohui Mai edited comment on HADOOP-12698 at 1/9/16 12:03 AM:
--

I've been building trunk with JDK 8 for almost a year now without any issues. 
I have some concerns about moving the default to a JDK that has been marked as 
EOL.



was (Author: wheat9):
I've been building trunk with JDK 8 for almost a year now and without any 
issues.


> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment made by {{start-build-env.sh}} is 
> JDK8 as of HADOOP-12562. Since current Hadoop trunk cannot be built with JDK8 
> (HADOOP-11875), it is better to set the default JDK to JDK7 for now.





[jira] [Commented] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090278#comment-15090278
 ] 

Hadoop QA commented on HADOOP-12551:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 36s 
{color} | {color:red} hadoop-tools/hadoop-azure introduced 2 new FindBugs 
issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-azure |
|  |  Possible null pointer dereference of listing in 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.listStatus(Path) on exception 
path  Dereferenced at NativeAzureFileSystem.java:listing in 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.listStatus(Path) on exception 
path  Dereferenced at NativeAzureFileSystem.java:[line 1961] |
|  |  Null passed for non-null parameter of 
conditionalRedoFolderRenames(PartialListing) in 
org.apache.hadoop.fs.azure.NativeAzureFileSystem.listStatus(Path)  Method 
invoked at NativeAzureFileSystem.java:of 

[jira] [Commented] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-08 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090307#comment-15090307
 ] 

Kai Sasaki commented on HADOOP-12698:
-

The error was caused by a javadoc error; the use of '_' produced only a 
warning. 
https://issues.apache.org/jira/browse/HADOOP-11875?focusedCommentId=15089497=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15089497

So currently, building with the javadoc option enabled fails. My build command 
was:
{code}
$ mvn package -Pdist,native,docs,src -DskipTests -Dtar
{code}

> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment made by {{start-build-env.sh}} is 
> JDK8 as of HADOOP-12562. Since current Hadoop trunk cannot be built with JDK8 
> (HADOOP-11875), it is better to set the default JDK to JDK7 for now.





[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2016-01-08 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Status: Patch Available  (was: Open)

> Introduce FileNotFoundException for WASB FileSystem API
> ---
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12551.001.patch, HADOOP-12551.002.patch
>
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time of the call, but do not 
> throw the same exception if another thread/process deletes the file during 
> execution. This JIRA fixes that behavior.
> This JIRA also re-examines other Azure storage store calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), delete(), 
> open(), and listStatus() APIs.





[jira] [Commented] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090234#comment-15090234
 ] 

Haohui Mai commented on HADOOP-12698:
-

I've been building trunk with JDK 8 for almost a year now and without any 
issues.


> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment made by {{start-build-env.sh}} is 
> JDK8 as of HADOOP-12562. Since current Hadoop trunk cannot be built with JDK8 
> (HADOOP-11875), it is better to set the default JDK to JDK7 for now.





[jira] [Commented] (HADOOP-12635) Adding Append API support for WASB

2016-01-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090257#comment-15090257
 ] 

Chris Nauroth commented on HADOOP-12635:


[~dchickabasapa], thank you for the patch.  Here are a few comments.

# Please let me know if I'm missing something, but it appears there is a 
fundamental problem in that {{AzureNativeFileSystemStore#retrieveAppendStream}} 
is not atomic.  First, it checks the metadata for a prior append lease, and if 
found, throws an exception.  This logic is not coordinated on a lease, so 2 
concurrent processes could get past these checks for the same blob, and then 
call the {{BlockBlobAppendStream}} constructor.  Then, both processes would 
call {{BlockBlobAppendStream#updateBlobAppendMetadata}}.  One process would win 
the race for the blob lease and set the append lease in metadata.  The other 
process would block (not error) retrying blob lease acquisition.  (See 
{{SelfRenewingLease}} constructor.)  Eventually, it would acquire the blob 
lease, and set the append lease for itself in metadata.  At this point, there 
are 2 processes both owning a {{BlockBlobAppendStream}} pointing to the same 
blob.  I'm not sure what happens next.  Would both processes independently 
append and commit their own blocks?  Whatever happens, it's a violation of 
single-writer semantics.  In HDFS, this sequence is atomic, so it's guaranteed 
that one of those processes would have acquired the lease and the other would 
have experienced an exception.  Does the whole 
{{AzureNativeFileSystemStore#retrieveAppendStream}} method need to be guarded 
by the lease?
# Please do not start background threads or thread pools from within 
constructors.  This is a pitfall that can lead to tricky edge cases where the 
background thread sees the object in a partially constructed state.  The JVM 
can even reorder ops in funny ways, making the background thread see seemingly 
impossible state.  Instead, start threads from a separate {{initialize}} method 
that you can call right after the constructor (see the sketch after this 
list).  This will guarantee that the object is in a consistent state before 
threads observe it.  I know {{SelfRenewingLease}} and other parts of the 
Hadoop code start threads from a constructor.  It's not a good thing.  :-)
# {{APPEND_LEASE_TIMEOUT}} is 30 seconds and {{LEASE_RENEWAL_PERIOD}} is also 
30 seconds.  That's going to cut it pretty close.  If another process tries to 
append right at the 30-second mark, then the lease renewal might not get ahead 
of it.  FWIW, the HDFS client typically uses a 1-second renewal period and the 
soft expiration is 60 seconds, beyond which another client may claim the lease.
# Various metadata operations like {{setOwner}} don't coordinate on the append 
lease.  I think that means any such metadata operations running concurrently 
with append activity would risk overwriting {{append_lease_last_modified}} with 
an old value, so you could experience lost updates.  Maybe this is OK in 
practice if the renewal period is made more frequent as per above?
# The HDFS client previously started a separate renewal thread per lease, much 
like what {{BlockBlobAppendStream}} does here.  This eventually became a 
scalability bottleneck with too many threads in applications that open multiple 
files concurrently.  We evolved to a design of a single lease renewer thread 
capable of servicing all renewal activity.  Let's keep this in mind as a future 
enhancement if excessive threads start to become a problem.
# In general, it's not necessary to call {{toString()}} explicitly for 
exception and log messages.  Particularly in cases like this:
{code}
LOG.debug("Opening file: {} for append", f.toString());
{code}
If you just pass {{f}}, then the SLF4J interface accepts the {{f}} instance 
and only calls {{toString()}} internally if the debug level is enabled (see 
the second sketch after this list).  It probably doesn't matter much for 
{{Path#toString}}, but it can improve performance if the class has a more 
expensive {{toString()}} implementation.
# I see style issues like lines that are too long.  A pre-commit run would find 
all such problems.
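
A minimal sketch of the pattern suggested in item 2, with hypothetical names: 
fields are assigned in the constructor, and the background thread is only 
started from a separate {{initialize}} method, so the object is fully 
constructed before any other thread can observe it.

{code}
public class LeaseRenewer implements java.io.Closeable {
  private final Thread renewerThread;
  private volatile boolean running = true;

  public LeaseRenewer() {
    // Only assign fields here; do not call renewerThread.start() yet.
    this.renewerThread = new Thread(new Runnable() {
      @Override
      public void run() {
        while (running) {
          // ... periodic lease renewal work ...
        }
      }
    }, "lease-renewer");
    this.renewerThread.setDaemon(true);
  }

  /** Call immediately after construction, once the object is published. */
  public void initialize() {
    renewerThread.start();
  }

  @Override
  public void close() {
    running = false;
    renewerThread.interrupt();
  }
}
{code}

And for item 6, the corrected call simply drops the explicit {{toString()}}, 
letting SLF4J defer the conversion until debug logging is known to be enabled:

{code}
LOG.debug("Opening file: {} for append", f);
{code}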


> Adding Append API support for WASB
> --
>
> Key: HADOOP-12635
> URL: https://issues.apache.org/jira/browse/HADOOP-12635
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: Append API.docx, HADOOP-12635.001.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support 
> Append API. This JIRA is added to design and implement the Append API support 
> to WASB. The intended support for Append would only support a single writer.  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2016-01-08 Thread Dushyanth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090289#comment-15090289
 ] 

Dushyanth commented on HADOOP-12551:


[~cnauroth] I see that there are two findbugs warnings in the QA run. I have 
commented the code explaining why we would not hit a NullPointerException in 
the code paths where the warnings are raised. I am leaving this as a no-op 
unless you feel it needs to be fixed.

> Introduce FileNotFoundException for WASB FileSystem API
> ---
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12551.001.patch, HADOOP-12551.002.patch
>
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time the API is called, but do 
> not throw the same exception if another thread/process deletes the file 
> during their execution. This Jira fixes that behavior.
> This jira also re-examines other Azure storage store calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), delete(), 
> open(), and listStatus() APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2016-01-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090297#comment-15090297
 ] 

Chris Nauroth commented on HADOOP-12551:


bq. While submitting the patch I built it using checkstyle flag, but didn't 
find any new style check warnings.

I forgot that hadoop-azure has an overridden checkstyle.xml.  That's something 
to clean up and align with the rest of the codebase at some point, but not in 
scope of this JIRA.

bq. I have commented the code explaining why we would not hit a 
NullPointerException in the code paths where the warnings are raised.

I think Findbugs correctly spotted a bug, but it's in the exception handling:

{code}
if (renamed) {
  listing = null;
  try {
    listing = store.list(key, AZURE_LIST_ALL, 1, partialKey);
  } catch (IOException ex) {
    Throwable innerException =
        NativeAzureFileSystem.checkForAzureStorageException(ex);

    if (innerException instanceof StorageException) {
      if (NativeAzureFileSystem.isFileNotFoundException(
          (StorageException) innerException)) {
        throw new FileNotFoundException(String.format("%s is not found", key));
      }
    } else {
      throw ex;
    }
  }
}
{code}

If the exception is a {{StorageException}}, but not a file-not-found error, 
then the {{StorageException}} is swallowed instead of propagated to the 
caller.  Findbugs has correctly identified that there is a potential code path 
for execution to continue and dereference {{listing}}, which would be 
{{null}}.  I recommend changing the exception handling to this:

{code}
} catch (IOException ex) {
  Throwable innerException =
      NativeAzureFileSystem.checkForAzureStorageException(ex);

  if (innerException != null &&
      innerException instanceof StorageException &&
      NativeAzureFileSystem.isFileNotFoundException(
          (StorageException) innerException)) {
    throw new FileNotFoundException(String.format("%s is not found", key));
  } else {
    throw ex;
  }
}
{code}


> Introduce FileNotFoundException for WASB FileSystem API
> ---
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12551.001.patch, HADOOP-12551.002.patch
>
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time the API is called, but do 
> not throw the same exception if another thread/process deletes the file 
> during their execution. This Jira fixes that behavior.
> This jira also re-examines other Azure storage store calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), delete(), 
> open(), and listStatus() APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090330#comment-15090330
 ] 

Hadoop QA commented on HADOOP-12587:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
37s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
22s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 7s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 1 new checkstyle issues in 
hadoop-common-project (total was 56, now 56). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 44s 
{color} | {color:green} hadoop-auth in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 11s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 13s 
{color} | {color:green} hadoop-auth in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 7s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 97m 58s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.ha.TestZKFailoverController |
|   | hadoop.ipc.TestIPC |
| JDK v1.7.0_91 Failed junit tests | hadoop.fs.TestLocalFsFCStatistics |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.security.ssl.TestReloadingX509TrustManager 

[jira] [Commented] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-01-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090080#comment-15090080
 ] 

Xiao Chen commented on HADOOP-12699:


A sample failure is pasted below, but I don't think there's much information in it.

Error Message
{noformat}
Values should be different. Actual: k6@0
{noformat}
Stacktrace
{noformat}
java.lang.AssertionError: Values should be different. Actual: k6@0
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failEquals(Assert.java:185)
at org.junit.Assert.assertNotEquals(Assert.java:161)
at org.junit.Assert.assertNotEquals(Assert.java:175)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS$2.call(TestKMS.java:649)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS$2.call(TestKMS.java:413)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:130)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.runServer(TestKMS.java:112)
at 
org.apache.hadoop.crypto.key.kms.server.TestKMS.testKMSProvider(TestKMS.java:413)
{noformat}
Standard Output
{noformat}
Test KMS running at: http://localhost:50665/kms
2016-01-05 15:02:01,885 ERROR NIOServerCnxnFactory - Thread 
Thread[org.apache.hadoop.crypto.key.kms.ValueQueue_thread,5,main] died
java.lang.RuntimeException: java.io.IOException: java.io.IOException: Exeption 
while contacting value generator 
at 
org.apache.hadoop.crypto.key.kms.ValueQueue$2.run(ValueQueue.java:334)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: java.io.IOException: Exeption while contacting 
value generator 
at sun.reflect.GeneratedConstructorAccessor68.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at 
org.apache.hadoop.util.HttpExceptionUtils.validateResponse(HttpExceptionUtils.java:157)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:546)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.call(KMSClientProvider.java:504)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider.access$200(KMSClientProvider.java:84)
at 
org.apache.hadoop.crypto.key.kms.KMSClientProvider$EncryptedQueueRefiller.fillQueueForKey(KMSClientProvider.java:135)
at 
org.apache.hadoop.crypto.key.kms.ValueQueue$2.run(ValueQueue.java:330)
... 3 more
2016-01-05 15:02:01,886 ERROR NIOServerCnxnFactory - Thread 
Thread[org.apache.hadoop.crypto.key.kms.ValueQueue_thread,5,main] died
java.lang.RuntimeException: java.lang.NullPointerException: No KeyVersion 
exists for key 'k1' 
at 
org.apache.hadoop.crypto.key.kms.ValueQueue$2.run(ValueQueue.java:334)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: No KeyVersion exists for key 'k1' 
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:231)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension$DefaultCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:252)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
at 
org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension$EncryptedQueueRefiller.fillQueueForKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:77)
at 
org.apache.hadoop.crypto.key.kms.ValueQueue$2.run(ValueQueue.java:330)
... 3 more
2016-01-05 15:02:01,886 ERROR NIOServerCnxnFactory - Thread 
Thread[org.apache.hadoop.crypto.key.kms.ValueQueue_thread,5,main] died
java.lang.RuntimeException: java.lang.NullPointerException: No KeyVersion 
exists for key 'k1' 
at 
org.apache.hadoop.crypto.key.kms.ValueQueue$2.run(ValueQueue.java:334)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: No KeyVersion exists for key 'k1' 
at 
com.google.common.base.Preconditions.checkNotNull(Preconditions.java:231)
at 
org.apache.hadoop.crypto.key.KeyProviderCryptoExtension$DefaultCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:252)
at 

[jira] [Commented] (HADOOP-12622) RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on retry failed reason or the log from RMProxy's retry could be very misleading.

2016-01-08 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089633#comment-15089633
 ] 

Junping Du commented on HADOOP-12622:
-

The test failure in testGangliaMetrics2 is not related. [~ste...@apache.org], 
mind taking a look at the patch again?

> RetryPolicies (other than FailoverOnNetworkExceptionRetry) should put on 
> retry failed reason or the log from RMProxy's retry could be very misleading.
> --
>
> Key: HADOOP-12622
> URL: https://issues.apache.org/jira/browse/HADOOP-12622
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.6.0, 2.7.0
>Reporter: Junping Du
>Assignee: Junping Du
>Priority: Critical
> Attachments: HADOOP-12622-v2.patch, HADOOP-12622-v3.1.patch, 
> HADOOP-12622-v3.patch, HADOOP-12622.patch
>
>
> In debugging a NM retry connection to RM (non-HA), the NM log during RM down 
> time is very misleading:
> {noformat}
> 2015-12-07 11:37:14,098 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:15,099 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:16,101 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:17,103 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:18,105 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:19,107 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 5 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:20,109 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 6 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:21,112 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 7 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:22,113 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 8 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:23,115 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 9 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:54,120 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 0 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:55,121 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 1 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:56,123 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 2 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:57,125 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 3 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:58,126 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 4 time(s); retry policy is 
> RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 
> MILLISECONDS)
> 2015-12-07 11:37:59,128 INFO org.apache.hadoop.ipc.Client: Retrying connect 
> to server: 0.0.0.0/0.0.0.0:8031. Already tried 

[jira] [Commented] (HADOOP-11262) Enable YARN to use S3A

2016-01-08 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089720#comment-15089720
 ] 

Lei (Eddy) Xu commented on HADOOP-11262:


[~ste...@apache.org] Sure, will do.  I pinged [~Pieter Reuse] and he said there 
will be a new patch to address Chris's comments. I will commit after that.

> Enable YARN to use S3A 
> ---
>
> Key: HADOOP-11262
> URL: https://issues.apache.org/jira/browse/HADOOP-11262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Thomas Demoor
>Assignee: Pieter Reuse
>  Labels: amazon, s3
> Attachments: HADOOP-11262-2.patch, HADOOP-11262-3.patch, 
> HADOOP-11262-4.patch, HADOOP-11262-5.patch, HADOOP-11262-6.patch, 
> HADOOP-11262-7.patch, HADOOP-11262-8.patch, HADOOP-11262.patch
>
>
> Uses DelegateToFileSystem to expose S3A as an AbstractFileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089047#comment-15089047
 ] 

Steve Loughran commented on HADOOP-12649:
-

Note also that having a specific Kerberos subclass of IOE means that retry 
handlers can bail out on a Kerberos problem. There is no point retrying on a 
Kerberos failure, as it isn't going to go away.
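
A minimal sketch (hypothetical class name; not necessarily what this JIRA will 
add) of the idea: a dedicated Kerberos subclass of IOException that retry 
handlers can test for and fail fast on, instead of treating it as a 
possibly-transient IOE.

{code}
import java.io.IOException;

/** Raised on Kerberos authentication problems; retrying cannot help. */
public class KerberosAuthException extends IOException {
  public KerberosAuthException(String message, Throwable cause) {
    super(message, cause);
  }
}
{code}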

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is a barely-readable, underdocumented mess.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12552) Fix undeclared/unused dependency to httpclient

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089060#comment-15089060
 ] 

Steve Loughran commented on HADOOP-12552:
-

Given that azure ships, I do think it should go in ... it just needs to be 
something to be aware of.

Or: you could split it up so that the azure-side patch goes in, but the 
hadoop-common one is postponed to trunk.

> Fix undeclared/unused dependency to httpclient
> --
>
> Key: HADOOP-12552
> URL: https://issues.apache.org/jira/browse/HADOOP-12552
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Masatake Iwasaki
>Assignee: Masatake Iwasaki
>Priority: Minor
>  Labels: incompatible
> Attachments: HADOOP-12552.001.patch
>
>
> hadoop-common uses httpclient as an undeclared dependency and has an unused 
> dependency on commons-httpclient. Vice versa in hadoop-azure and 
> hadoop-openstack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089072#comment-15089072
 ] 

Vinayakumar B commented on HADOOP-12687:


bq. It essentially undoes the security check in getByExactName(). When doing 
hostname lookups, the hostname must be rooted ("." added to the end to avoid 
the security hole in RFC 1535). This patch undoes that check.
After reading RFC 1535, I agree that a direct lookup without the trailing dot 
may connect to an unauthorized or wrong machine after searching through 
different search domains.
But in the current case, with the patch, the direct lookup is done only after 
all the checks, including the trailing dot and the search domains, have been 
tried.
Is it still an RFC violation to look up the direct host?

The code below itself throws {{UnknownHostException}}, i.e. the machine is not 
able to resolve its own hostname. This happens only on Linux (Ubuntu); it 
works fine on Windows.
{code}
SecurityUtil.getByName(InetAddress.getLocalHost().getHostName())
{code}
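
A minimal sketch (hypothetical method, loosely modelled on the lookup order 
being discussed) of how the patched flow behaves: the rooted lookup per RFC 
1535 runs first, and only if every rooted/search-domain lookup fails is the 
literal hostname tried.

{code}
static InetAddress resolveWithFallback(String host)
    throws UnknownHostException {
  try {
    // A rooted name ("host.") bypasses the resolver's search-domain list.
    return InetAddress.getByName(host.endsWith(".") ? host : host + ".");
  } catch (UnknownHostException e) {
    // All rooted and search-domain lookups failed; try the direct name.
    return InetAddress.getByName(host);
  }
}
{code}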

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see the tests in TestYarnClient, TestAMRMClient and TestNMClient get 
> timeout which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> /etc/hosts. Hence it's possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12697) IPC retry policies should recognise that SASL auth failures are unrecoverable

2016-01-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12697:
---

 Summary: IPC retry policies should recognise that SASL auth 
failures are unrecoverable
 Key: HADOOP-12697
 URL: https://issues.apache.org/jira/browse/HADOOP-12697
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.7.1
 Environment: Cluster with kerberos on and client not calling with the 
right credentials
Reporter: Steve Loughran
Priority: Minor


SLIDER-1050 shows that if you don't have the right Kerberos settings, the YARN 
client IPC channel blocks, repeatedly retrying to talk to the RM.

{noformat}
2016-01-07 02:50:45,111 [main] WARN  ipc.Client - Exception encountered while 
connecting to the server : javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
{noformat}

SASL exceptions need to be recognised as unrecoverable authentication 
failures, rather than generic IOEs that might go away if you retry.
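
A minimal sketch of the idea, assuming Hadoop's {{RetryPolicy}} interface and 
a hypothetical wrapper name; the real fix may look quite different:

{code}
import org.apache.hadoop.io.retry.RetryPolicy;

public class SaslFailFastRetryPolicy implements RetryPolicy {
  private final RetryPolicy delegate;

  public SaslFailFastRetryPolicy(RetryPolicy delegate) {
    this.delegate = delegate;
  }

  @Override
  public RetryAction shouldRetry(Exception e, int retries, int failovers,
      boolean isIdempotentOrAtMostOnce) throws Exception {
    // Walk the cause chain; a SaslException anywhere means the credentials
    // are bad, and retrying cannot make them good.
    for (Throwable t = e; t != null; t = t.getCause()) {
      if (t instanceof javax.security.sasl.SaslException) {
        return RetryAction.FAIL;
      }
    }
    return delegate.shouldRetry(e, retries, failovers,
        isIdempotentOrAtMostOnce);
  }
}
{code}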



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090439#comment-15090439
 ] 

Hadoop QA commented on HADOOP-12587:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 35s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 44s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 36s 
{color} | {color:green} hadoop-auth in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 40s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 1s 
{color} | {color:green} hadoop-auth in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 55s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 79m 26s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781368/HADOOP-12587-003.patch
 |
| JIRA Issue | HADOOP-12587 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux e19b8263fa24 

[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12678:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for patch v06.  I have committed this to trunk, branch-2 and branch-2.8.  
[~madhuch-ms], thank you for contributing this patch.

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch, 
> HADOOP-12678.006.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the pending 
> rename, and that is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 completes, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. Now when HMaster starts up, 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending-rename-completion process, we look for these 
> json files. On seeing an empty file, the process simply throws a fatal 
> exception, assuming something went wrong.
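
A minimal sketch (hypothetical method and field names) of the redo-path 
handling the description calls for: a zero-length -renamePending.json blob is 
the signature of a crash between step 1 and step 2, so it is cleaned up and 
skipped rather than treated as fatal.

{code}
private boolean redoPendingRename(FileStatus pendingFile) throws IOException {
  if (pendingFile.getLen() == 0) {
    // Crash happened before step 2: the blob carries no rename information,
    // so there is nothing to redo; just delete the leftover file.
    fs.delete(pendingFile.getPath(), false);
    return false;
  }
  // ... normal path: parse the JSON contents and complete the rename ...
  return true;
}
{code}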



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090450#comment-15090450
 ] 

Hudson commented on HADOOP-12678:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9076 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9076/])
HADOOP-12678. Handle empty rename pending metadata file during atomic 
(cnauroth: rev f0fa6d869b9abb5a900ea1c9eb4eb19ec9831dc4)
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/NativeAzureFileSystemBaseTest.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch, 
> HADOOP-12678.006.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the pending 
> rename, and that is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 completes, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. Now when HMaster starts up, 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending-rename-completion process, we look for these 
> json files. On seeing an empty file, the process simply throws a fatal 
> exception, assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12590) TestCompressorDecompressor failing without stack traces

2016-01-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12590:

Attachment: HADOOP-12590.001.patch

[~stevel] and [~weichiu], let me know whether the fix is what you have in mind.

Patch 001
* Replace fail calls with GenericTestUtils.assertExceptionContains.
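
A minimal sketch (hypothetical test body; the expected text is whatever the 
test intends to assert) of why this helps: {{fail(ex.toString())}} discards 
the stack trace, while {{GenericTestUtils.assertExceptionContains}} embeds the 
full stringified exception in the failure message.

{code}
try {
  runCompressDecompressCycle();  // hypothetical helper under test
} catch (Exception ex) {
  // On mismatch this fails with the complete stack trace of ex included,
  // instead of the bare "error !!!java.lang.NullPointerException" message.
  GenericTestUtils.assertExceptionContains("expected message", ex);
}
{code}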

> TestCompressorDecompressor failing without stack traces
> ---
>
> Key: HADOOP-12590
> URL: https://issues.apache.org/jira/browse/HADOOP-12590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-12590.001.patch
>
>
> Jenkins failing on {{TestCompressorDecompressor}}.
> The exception is being caught and converted to a fail *so there is no stack 
> trace of any value*
> {code}
> testCompressorDecompressor error !!!java.lang.NullPointerException
> Stacktrace
> java.lang.AssertionError: testCompressorDecompressor error 
> !!!java.lang.NullPointerException
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor(TestCompressorDecompressor.java:69)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12590) TestCompressorDecompressor failing without stack traces

2016-01-08 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-12590:

Status: Patch Available  (was: Open)

> TestCompressorDecompressor failing without stack traces
> ---
>
> Key: HADOOP-12590
> URL: https://issues.apache.org/jira/browse/HADOOP-12590
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.8.0
> Environment: jenkins
>Reporter: Steve Loughran
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-12590.001.patch
>
>
> Jenkins failing on {{TestCompressorDecompressor}}.
> The exception is being caught and converted to a fail *so there is no stack 
> trace of any value*
> {code}
> testCompressorDecompressor error !!!java.lang.NullPointerException
> Stacktrace
> java.lang.AssertionError: testCompressorDecompressor error 
> !!!java.lang.NullPointerException
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor(TestCompressorDecompressor.java:69)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12689) S3 filesystem operations stopped working correctly

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090006#comment-15090006
 ] 

Steve Loughran commented on HADOOP-12689:
-

W.r.t. HADOOP-10542, I thought there were tests in the system. I shouldn't 
have committed it then, and we are all suffering now because of that.

> S3 filesystem operations stopped working correctly
> --
>
> Key: HADOOP-12689
> URL: https://issues.apache.org/jira/browse/HADOOP-12689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12689.01.patch
>
>
> HADOOP-10542 was resolved by replacing "return null;" with throwing  
> IOException.   This causes several S3 filesystem operations to fail (possibly 
> more code is expecting that null return value; these are just the calls I 
> noticed):
> S3FileSystem.getFileStatus() (which no longer raises FileNotFoundException 
> but instead IOException)
> FileSystem.exists() (which no longer returns false but instead raises 
> IOException)
> S3FileSystem.create() (which no longer succeeds but instead raises 
> IOException)
> Run command:
> hadoop distcp hdfs://localhost:9000/test s3://xxx:y...@com.bar.foo/
> Resulting stack trace:
> 2015-12-11 10:04:34,030 FATAL [IPC Server handler 6 on 44861] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
> attempt_1449826461866_0005_m_06_0 - exited : java.io.IOException: /test 
> doesn't exist
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:170)
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy17.retrieveINode(Unknown Source)
> at org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:230)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> Changing the "raise IOE..." back to "return null" fixes all of the above 
> code sites and allows distcp to succeed.
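
A minimal sketch (hypothetical helper, loosely modelled on the 
Jets3tFileSystemStore#get call in the stack trace) of the two behaviours under 
discussion: returning null lets callers such as getFileStatus() map a missing 
key to FileNotFoundException, whereas throwing a bare IOException breaks 
exists() and create().

{code}
private InputStream get(String key) throws IOException {
  InputStream in = fetchStream(key);  // hypothetical: null when key absent
  if (in == null) {
    return null;  // pre-HADOOP-10542 behaviour that callers rely on
    // post-change: throw new IOException(key + " doesn't exist");
  }
  return in;
}
{code}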



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-01-08 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-12699:
--

 Summary: TestKMS#testKMSProvider intermittently fails during 'test 
rollover draining'
 Key: HADOOP-12699
 URL: https://issues.apache.org/jira/browse/HADOOP-12699
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Xiao Chen
Assignee: Xiao Chen


I've seen several failures of testKMSProvider, all failing in the following 
snippet:
{code}
// test rollover draining
KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
createKeyProviderCryptoExtension(kp);
.

EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
kpce.rollNewVersion("k6");
EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
ekv2.getEncryptionKeyVersionName());
{code}
with error message
{quote}Values should be different. Actual: k6@0{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-01-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090106#comment-15090106
 ] 

Xiao Chen commented on HADOOP-12699:


This can be reproduced by running the following code in a loop (plus the 
assert). I can usually reproduce it within a handful of runs.
{code}
EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
kpce.rollNewVersion("k6");
EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
{code}
This test was added in HADOOP-11071. From my understanding, the problem comes 
from the async thread(s) in {{ValueQueue}}. (Probably what [~andrew.wang] said in 
[the comment in 
HADOOP-11071|https://issues.apache.org/jira/browse/HADOOP-11071?focusedCommentId=14125826=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14125826]).

If I comment out the line {{submitRefillTask(keyName, keyQueue);}} in 
{{ValueQueue#getAtMost}}, the looped run can easily pass 10k runs without 
failing.

I'm unsure whether this should be considered a test-only issue or a bug. I'll 
do more reading first, and work on a solution.

> TestKMS#testKMSProvider intermittently fails during 'test rollover draining'
> 
>
> Key: HADOOP-12699
> URL: https://issues.apache.org/jira/browse/HADOOP-12699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>
> I've seen several failures of testKMSProvider, all failing in the following 
> snippet:
> {code}
> // test rollover draining
> KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
> createKeyProviderCryptoExtension(kp);
> .
> EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
> kpce.rollNewVersion("k6");
> EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
> Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
> ekv2.getEncryptionKeyVersionName());
> {code}
> with error message
> {quote}Values should be different. Actual: k6@0{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12041) Implement another Reed-Solomon coder in pure Java

2016-01-08 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090120#comment-15090120
 ] 

Zhe Zhang commented on HADOOP-12041:


Agreed, let's do the rename separately.

> Implement another Reed-Solomon coder in pure Java
> -
>
> Key: HADOOP-12041
> URL: https://issues.apache.org/jira/browse/HADOOP-12041
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12041-v1.patch, HADOOP-12041-v2.patch, 
> HADOOP-12041-v3.patch, HADOOP-12041-v4.patch, HADOOP-12041-v5.patch
>
>
> The existing Java RS coders based on the {{GaloisField}} implementation have 
> some drawbacks or limitations:
> * The decoder computes not-really-erased units unnecessarily (HADOOP-11871);
> * The decoder requires parity units + data units order for the inputs in the 
> decode API (HADOOP-12040);
> * Need to support or align with native erasure coders regarding concrete 
> coding algorithms and matrices, so Java coders and native coders can be 
> easily swapped in/out, transparently to HDFS (HADOOP-12010);
> * It's unnecessarily flexible but incurs some overhead; as HDFS erasure 
> coding is entirely a byte-based data system, we don't need to consider 
> symbol sizes other than 256.
> This calls for implementing another RS coder in pure Java, in addition to 
> the existing {{GaloisField}} one from HDFS-RAID. The new Java RS coder will 
> be favored and used by default to resolve the related issues. The old 
> HDFS-RAID-originated coder will still be there for comparison, and for 
> converting old data from HDFS-RAID systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2016-01-08 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-12587:
--
Attachment: HADOOP-12587-002.patch

Attaching a newer patch since the previous patch does not apply cleanly. It's 
a documentation change.

> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX heimdal kerberos client against Linux KDC -talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Attachments: HADOOP-12587-001.patch, HADOOP-12587-002.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This stops callers whose 
> tokens lack this attribute from submitting jobs to a secure Hadoop 2.6 YARN 
> cluster with the timeline server enabled. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2016-01-08 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-12587:
--
Attachment: HADOOP-12587-003.patch

Attaching the patch which fixes the checkstyle issue.

> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX heimdal kerberos client against Linux KDC -talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Attachments: HADOOP-12587-001.patch, HADOOP-12587-002.patch, 
> HADOOP-12587-003.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This stops callers whose 
> tokens lack this attribute from submitting jobs to a secure Hadoop 2.6 YARN 
> cluster with the timeline server enabled. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12696) Add Tests for S3FileSystem Contract

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090416#comment-15090416
 ] 

Hadoop QA commented on HADOOP-12696:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 11 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 35s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 51s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 51s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 59s 
{color} | {color:red} Patch generated 3 new checkstyle issues in root (total 
was 52, now 52). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 24s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 16s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 14s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 4s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.fs.TestFsShellReturnCode |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  

[jira] [Commented] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2016-01-08 Thread Dushyanth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090354#comment-15090354
 ] 

Dushyanth commented on HADOOP-12551:


Testing done for patch v003: the newly added tests were executed along with 
the contract live tests for block blobs and page blobs.

> Introduce FileNotFoundException for WASB FileSystem API
> ---
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12551.001.patch, HADOOP-12551.002.patch, 
> HADOOP-12551.003.patch
>
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time the API is called, but do 
> not throw the same exception if another thread/process deletes the file 
> during execution. This JIRA fixes that behavior.
> This JIRA also re-examines other Azure storage calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), delete(), 
> open(), and listStatus() APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2016-01-08 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12551:
---
Attachment: HADOOP-12551.003.patch

[~cnauroth] This actually is an issue with the way the try-catch is structured. 
We should not be swallowing the storage exception in scenarios where it is a 
StorageException and not a FileNotFoundException. There were other places where 
I had used this pattern; I have fixed those try-catch blocks.
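For illustration, here is a minimal, self-contained sketch of the corrected 
pattern (the store interface, key parameter, and error-code literal are 
assumptions for this example, not the exact patch):
{code}
import java.io.FileNotFoundException;
import java.io.IOException;

import com.microsoft.azure.storage.StorageException;

class WasbDeleteSketch {
  /** Hypothetical store interface, standing in for the real WASB store. */
  interface AzureStore {
    void delete(String key) throws StorageException;
  }

  // Rethrow FileNotFoundException only for the missing-blob case; let every
  // other StorageException surface instead of being swallowed.
  void delete(AzureStore store, String key) throws IOException {
    try {
      store.delete(key);
    } catch (StorageException e) {
      if ("BlobNotFound".equals(e.getErrorCode())) {
        throw new FileNotFoundException("File " + key + " does not exist.");
      }
      throw new IOException(e);
    }
  }
}
{code}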

> Introduce FileNotFoundException for WASB FileSystem API
> ---
>
> Key: HADOOP-12551
> URL: https://issues.apache.org/jira/browse/HADOOP-12551
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.1
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-12551.001.patch, HADOOP-12551.002.patch, 
> HADOOP-12551.003.patch
>
>
> HADOOP-12533 introduced FileNotFoundException to the read and seek APIs for 
> WASB. The open and getFileStatus APIs currently throw FileNotFoundException 
> correctly when the file does not exist at the time the API is called, but do 
> not throw the same exception if another thread/process deletes the file 
> during execution. This JIRA fixes that behavior.
> This JIRA also re-examines other Azure storage calls to check for 
> BlobNotFoundException in the setPermission(), setOwner(), rename(), delete(), 
> open(), and listStatus() APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-01-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12699:
---
Attachment: HADOOP-12699.repro.patch

Attached a patch to reproduce the failure, as I described above.
I'll post a fix soon, so this repro patch also includes my proposed fix. 
Commenting out the {{conf.setBoolean}} in {{TestKMS#testKMSProvider}} will 
surface the failure.

> TestKMS#testKMSProvider intermittently fails during 'test rollover draining'
> 
>
> Key: HADOOP-12699
> URL: https://issues.apache.org/jira/browse/HADOOP-12699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12699.01.patch, HADOOP-12699.repro.patch
>
>
> I've seen several failures of testKMSProvider, all failed in the following 
> snippet:
> {code}
> // test rollover draining
> KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
> createKeyProviderCryptoExtension(kp);
> .
> EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
> kpce.rollNewVersion("k6");
> EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
> Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
> ekv2.getEncryptionKeyVersionName());
> {code}
> with error message
> {quote}Values should be different. Actual: k6@0{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-01-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12699:
---
Attachment: HADOOP-12699.01.patch

> TestKMS#testKMSProvider intermittently fails during 'test rollover draining'
> 
>
> Key: HADOOP-12699
> URL: https://issues.apache.org/jira/browse/HADOOP-12699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12699.01.patch, HADOOP-12699.repro.patch
>
>
> I've seen several failures of testKMSProvider, all failed in the following 
> snippet:
> {code}
> // test rollover draining
> KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
> createKeyProviderCryptoExtension(kp);
> .
> EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
> kpce.rollNewVersion("k6");
> EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
> Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
> ekv2.getEncryptionKeyVersionName());
> {code}
> with error message
> {quote}Values should be different. Actual: k6@0{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12551) Introduce FileNotFoundException for WASB FileSystem API

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090370#comment-15090370
 ] 

Hadoop QA commented on HADOOP-12551:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
22s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 21s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781359/HADOOP-12551.003.patch
 |
| JIRA Issue | HADOOP-12551 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 2371df030153 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 109e528 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  

[jira] [Updated] (HADOOP-12696) Add Tests for S3FileSystem Contract

2016-01-08 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12696:
-
Attachment: log.patch02

log.patch02 is the output of running:

mvn -l /tmp/log test -Ptests-on 
-Dtest=org.apache.hadoop.fs.contract.TestS3Contract*

> Add Tests for S3FileSystem Contract
> ---
>
> Key: HADOOP-12696
> URL: https://issues.apache.org/jira/browse/HADOOP-12696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12696.01.patch, HADOOP-12696.02.patch, log.fail2, 
> log.hdfs, log.patch02, log.s3a, log.s3n, log.succ
>
>
> The regression fixed by HADOOP-12689 had no unit tests to expose the problem. 
>   Add filesystem tests according to 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/testing.html
>  for the s3 scheme.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12696) Add Tests for S3FileSystem Contract

2016-01-08 Thread Matthew Paduano (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthew Paduano updated HADOOP-12696:
-
Attachment: HADOOP-12696.02.patch

> Add Tests for S3FileSystem Contract
> ---
>
> Key: HADOOP-12696
> URL: https://issues.apache.org/jira/browse/HADOOP-12696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12696.01.patch, HADOOP-12696.02.patch, log.fail2, 
> log.hdfs, log.patch02, log.s3a, log.s3n, log.succ
>
>
> The regression fixed by HADOOP-12689 had no unit tests to expose the problem. 
>   Add filesystem tests according to 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/testing.html
>  for the s3 scheme.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12696) Add Tests for S3FileSystem Contract

2016-01-08 Thread Matthew Paduano (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090382#comment-15090382
 ] 

Matthew Paduano commented on HADOOP-12696:
--

patch02 contains fixes for S3FileSystem, S3InputStream and one additional 
ContractOption with corresponding code in the SeekTest.

Tests run: 47, Failures: 0, Errors: 0, Skipped: 1


> Add Tests for S3FileSystem Contract
> ---
>
> Key: HADOOP-12696
> URL: https://issues.apache.org/jira/browse/HADOOP-12696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12696.01.patch, HADOOP-12696.02.patch, log.fail2, 
> log.hdfs, log.patch02, log.s3a, log.s3n, log.succ
>
>
> The regression fixed by HADOOP-12689 had no unit tests to expose the problem. 
>   Add filesystem tests according to 
> http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/filesystem/testing.html
>  for the s3 scheme.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-01-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15090374#comment-15090374
 ] 

Xiao Chen commented on HADOOP-12699:


From [~tucu00]'s [comment in 
HADOOP-11071|https://issues.apache.org/jira/browse/HADOOP-11071?focusedCommentId=14125860=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14125860],
 I feel this should be a test-only problem, and not a real scenario. Andrew 
and Alejandro, please correct me if I'm wrong about this.

Patch 1 is attached to fix the test failure. My thought is to disable the 
async filler tasks, since we just want to test the behavior of cached keys 
before and after rolling. I added a test-only configuration to disable the 
async filling completely. Please review. Thanks!

FYI - I initially wanted to change the configuration around the low watermark 
to disable it, since {{ValueQueue#getAtMost}} has:
{code}
if (i <= (int) (lowWatermark * numValues)) {
{code}
But {{ValueQueue}} validates the parameters in its constructor, so that's not 
doable.
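As a self-contained illustration of the watermark-driven refill described 
above (this is not the real {{ValueQueue}}; the class, the {{getNext}} method, 
and the test-only switch are all assumptions):
{code}
import java.util.ArrayDeque;
import java.util.Queue;

class MiniValueQueue {
  private final Queue<byte[]> keys = new ArrayDeque<byte[]>();
  private final float lowWatermark;        // e.g. 0.3f
  private final int numValues;             // target queue depth
  private final boolean asyncFillDisabled; // the proposed test-only switch

  MiniValueQueue(float lowWatermark, int numValues, boolean asyncFillDisabled) {
    this.lowWatermark = lowWatermark;
    this.numValues = numValues;
    this.asyncFillDisabled = asyncFillDisabled;
  }

  byte[] getNext() {
    byte[] key = keys.poll();
    // Kick off an async refill once the queue drains below the watermark,
    // unless the test-only flag turns the filler off entirely.
    if (!asyncFillDisabled && keys.size() <= (int) (lowWatermark * numValues)) {
      scheduleAsyncFill();
    }
    return key;
  }

  private void scheduleAsyncFill() {
    // In the real code this submits a filler task to an executor.
  }
}
{code}
With the filler disabled, the test can roll the key and assert that a fresh 
version is served without racing the background refill.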

> TestKMS#testKMSProvider intermittently fails during 'test rollover draining'
> 
>
> Key: HADOOP-12699
> URL: https://issues.apache.org/jira/browse/HADOOP-12699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12699.01.patch, HADOOP-12699.repro.patch
>
>
> I've seen several failures of testKMSProvider, all failed in the following 
> snippet:
> {code}
> // test rollover draining
> KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
> createKeyProviderCryptoExtension(kp);
> .
> EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
> kpce.rollNewVersion("k6");
> EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
> Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
> ekv2.getEncryptionKeyVersionName());
> {code}
> with error message
> {quote}Values should be different. Actual: k6@0{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2016-01-08 Thread jack liuquan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15088969#comment-15088969
 ] 

jack liuquan commented on HADOOP-11828:
---

Hi kai,
The report URLs of the last build are no longer available; how can I get the 
report again?


> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2016-01-08 Thread jack liuquan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089005#comment-15089005
 ] 

jack liuquan commented on HADOOP-11828:
---

OK, Thanks, Kai.

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2016-01-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089001#comment-15089001
 ] 

Kai Zheng commented on HADOOP-11828:


I guess it was flushed out. You could cancel and then submit your patch again 
to trigger another build, or just update the patch and wait for another round.

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089211#comment-15089211
 ] 

Steve Loughran commented on HADOOP-12649:
-

+ make {{"hadoop.kerberos.kinit.command"}} a string constant rather than a 
string in UGI
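Something like the following (the holding class and constant name are only a 
suggestion):
{code}
final class SecurityConfKeys {
  private SecurityConfKeys() {
  }

  /** Key for the kinit executable path, replacing the inline literal in UGI. */
  static final String KERBEROS_KINIT_COMMAND = "hadoop.kerberos.kinit.command";
}
{code}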

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089250#comment-15089250
 ] 

Hadoop QA commented on HADOOP-12698:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
8s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 17s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12781216/HADOOP-12698.01.patch 
|
| JIRA Issue | HADOOP-12698 |
| Optional Tests |  asflicense  shellcheck  |
| uname | Linux 2a2c5082d5ca 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / ed18527 |
| shellcheck | v0.4.1 |
| modules | C:  U:  |
| Max memory used | 29MB |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8364/console |


This message was automatically generated.



> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment created by {{start-build-env.sh}} is 
> JDK8 as of HADOOP-12562. Since the current Hadoop trunk cannot be built with 
> JDK8 (HADOOP-11875), it is better to set the default JDK to JDK7 for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-08 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-12698:

Status: Patch Available  (was: Open)

> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment created by {{start-build-env.sh}} is 
> JDK8 as of HADOOP-12562. Since the current Hadoop trunk cannot be built with 
> JDK8 (HADOOP-11875), it is better to set the default JDK to JDK7 for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089210#comment-15089210
 ] 

Steve Loughran commented on HADOOP-12649:
-

SLIDER-1035 is where I'm implementing the core of the diagnostics, with a plan 
to move it over here.

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-08 Thread Kai Sasaki (JIRA)
Kai Sasaki created HADOOP-12698:
---

 Summary: Set default Docker build uses JDK7
 Key: HADOOP-12698
 URL: https://issues.apache.org/jira/browse/HADOOP-12698
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kai Sasaki
Assignee: Kai Sasaki
Priority: Minor


The default JDK of the build environment created by {{start-build-env.sh}} is 
JDK8 as of HADOOP-12562. Since the current Hadoop trunk cannot be built with 
JDK8 (HADOOP-11875), it is better to set the default JDK to JDK7 for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089247#comment-15089247
 ] 

Steve Loughran commented on HADOOP-11875:
-

Are you sure it's Java 8 which forbids this, or merely warns about it?

> [JDK8] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
>
> From JDK8, _ as a one-character identifier is disallowed. Currently Web UI 
> uses it. We should fix them to compile with JDK8. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12697) IPC retry policies should recognise that SASL auth failures are unrecoverable

2016-01-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-12697:

Description: 
SLIDER-1050 shows that if you don't have the right Kerberos settings, the YARN 
client IPC channel blocks, repeatedly retrying to talk to the RM

{noformat}
2016-01-07 02:50:45,111 [main] WARN  ipc.Client - Exception encountered while 
connecting to the server :
 javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException:
 No valid credentials provided (Mechanism level: Failed to find any Kerberos 
tgt)]
{noformat}

SASL exceptions need to be recognised as irreconcilable authentication 
failures, rather than generic IOEs that might go away if you retry

  was:
SLIDER-1050 shows that if you don't have the right Kerberos settings, the YARN 
client IPC channel blocks, repeatedly retrying to talk to the RM

{noformat}
2016-01-07 02:50:45,111 [main] WARN  ipc.Client - Exception encountered while 
connecting to the server :
 javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
{noformat}

SASL exceptions need to be recognised as irreconcilable authentication 
failures, rather than generic IOEs that might go away if you retry


> IPC retry policies should recognise that SASL auth failures are unrecoverable
> -
>
> Key: HADOOP-12697
> URL: https://issues.apache.org/jira/browse/HADOOP-12697
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
> Environment: Cluster with kerberos on and client not calling with the 
> right credentials
>Reporter: Steve Loughran
>Priority: Minor
>
> SLIDER-1050 shows that if you don't have the right Kerberos settings, the 
> YARN client IPC channel blocks, repeatedly retrying to talk to the RM
> {noformat}
> 2016-01-07 02:50:45,111 [main] WARN  ipc.Client - Exception encountered while 
> connecting to the server :
>  javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException:
>  No valid credentials provided (Mechanism level: Failed to find any Kerberos 
> tgt)]
> {noformat}
> SASL exceptions need to be recognised as irreconcilable authentication 
> failures, rather than generic IOEs that might go away if you retry



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-08 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-12698:

Attachment: HADOOP-12698.01.patch

> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment created by {{start-build-env.sh}} is 
> JDK8 as of HADOOP-12562. Since the current Hadoop trunk cannot be built with 
> JDK8 (HADOOP-11875), it is better to set the default JDK to JDK7 for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089238#comment-15089238
 ] 

Hadoop QA commented on HADOOP-12698:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8364/console in case of 
problems.


> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment created by {{start-build-env.sh}} is 
> JDK8 as of HADOOP-12562. Since the current Hadoop trunk cannot be built with 
> JDK8 (HADOOP-11875), it is better to set the default JDK to JDK7 for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12697) IPC retry policies should recognise that SASL auth failures are unrecoverable

2016-01-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-12697:

Description: 
SLIDER-1050 shows that if you don't have the right Kerberos settings, the YARN 
client IPC channel blocks, repeatedly retrying to talk to the RM

{noformat}
2016-01-07 02:50:45,111 [main] WARN  ipc.Client - Exception encountered while 
connecting to the server :
 javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Failed to find 
any Kerberos tgt)]
{noformat}

SASL exceptions need to be recognised as irreconcilable authentication 
failures, rather than generic IOEs that might go away if you retry

  was:
SLIDER-1050 shows that if you don't have the right Kerberos settings, the YARN 
client IPC channel blocks, repeatedly retrying to talk to the RM

{noformat}
2016-01-07 02:50:45,111 [main] WARN  ipc.Client - Exception encountered while 
connecting to the server : javax.security.sasl.SaslException: GSS initiate 
failed [Caused by GSSException: No valid credentials provided (Mechanism level: 
Failed to find any Kerberos tgt)]
{noformat}

SASL exceptions need to be recognised as irreconcilable authentication 
failures, rather than generic IOEs that might go away if you retry


> IPC retry policies should recognise that SASL auth failures are unrecoverable
> -
>
> Key: HADOOP-12697
> URL: https://issues.apache.org/jira/browse/HADOOP-12697
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
> Environment: Cluster with kerberos on and client not calling with the 
> right credentials
>Reporter: Steve Loughran
>Priority: Minor
>
> SLIDER-1050 shows that if you don't have the right Kerberos settings, the 
> YARN client IPC channel blocks, repeatedly retrying to talk to the RM
> {noformat}
> 2016-01-07 02:50:45,111 [main] WARN  ipc.Client - Exception encountered while 
> connecting to the server :
>  javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: Failed to find 
> any Kerberos tgt)]
> {noformat}
> SASL exceptions need to be recognised as irreconcilable authentication 
> failures, rather than generic IOEs that might go away if you retry



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12234) Web UI Framable Page

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089380#comment-15089380
 ] 

Hadoop QA commented on HADOOP-12234:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 4s {color} 
| {color:red} HADOOP-12234 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12746655/HADOOP-12234-v3-master.patch
 |
| JIRA Issue | HADOOP-12234 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8366/console |


This message was automatically generated.



> Web UI Framable Page
> 
>
> Key: HADOOP-12234
> URL: https://issues.apache.org/jira/browse/HADOOP-12234
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HADOOP-12234-v2-master.patch, 
> HADOOP-12234-v3-master.patch, HADOOP-12234.patch
>
>
> The web UIs do not include the "X-Frame-Options" header to prevent the pages 
> from being framed from another site.  
> Reference:
> https://www.owasp.org/index.php/Clickjacking
> https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet
> https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2016-01-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089439#comment-15089439
 ] 

Marco Zühlke commented on HADOOP-11875:
---

http://docs.oracle.com/javase/specs/jls/se8/html/jls-15.html#jls-15.27.1
{quote}
It is a compile-time error if a lambda parameter has the name _ (that is, a 
single underscore character).

The use of the variable name _ in any context is discouraged. Future versions 
of the Java programming language may reserve this name as a keyword and/or give 
it special semantics. 
{quote}

So for the time being, it is only a warning.
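A minimal demonstration of that difference (this compiles on JDK 8; the error 
case is left commented out):
{code}
class UnderscoreDemo {
  void demo() {
    int _ = 1;              // javac 8: warning only, still a legal identifier
    System.out.println(_);
    // java.util.function.UnaryOperator<Integer> f = _ -> _ + 1;
    // ^ javac 8: compile-time error: '_' is forbidden as a lambda parameter
  }
}
{code}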

> [JDK8] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
>
> From JDK8, _ as a one-character identifier is disallowed. Currently Web UI 
> uses it. We should fix them to compile with JDK8. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2016-01-08 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089477#comment-15089477
 ] 

Kai Sasaki commented on HADOOP-11875:
-

As the attached build_error_dump.txt shows, it is reported as an ERROR due to 
a JDK8-incompatible change.

{code}
[ERROR] (use of '_' as an identifier might not be supported in releases after 
Java SE 8)
{code}

> [JDK8] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
> Attachments: build_error_dump.txt
>
>
> From JDK8, _ as a one-character identifier is disallowed. Currently Web UI 
> uses it. We should fix them to compile with JDK8. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12261) Surefire needs to make sure the JVMs it fires up fit within the memory available

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089340#comment-15089340
 ] 

Steve Loughran commented on HADOOP-12261:
-

Catching up on this constructive discussion showing open source communities at 
work, can I point out that this is something you can actually tune on the 
command line,

{code}
mvn clean test "-Dmaven-surefire-plugin.argLine=-Xmx6G"
{code}

It's not something we need to change just yet, unless it's critical everywhere; 
and if we do change it, anyone who wants to build & test on 32 bits can still 
change the option to a value they can handle.

On Java 8 you'd want to turn off the permgen value too, just to avoid being 
told off by the JVM for setting it.


> Surefire needs to make sure the JVMs it fires up fit within the memory 
> available
> 
>
> Key: HADOOP-12261
> URL: https://issues.apache.org/jira/browse/HADOOP-12261
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: HADOOP-12261.001.patch
>
>
> hadoop-project/pom.xml sets maven-surefire-plugin.argLine to include 
> -Xmx4096m. Allocating  that amount of memory requires a 64-bit JVM, but on 
> platforms with both 32 and 64-bit JVMs surefire runs the 32 bit version by 
> default and tests fail to start as a result. "-d64" should be added to the 
> command-line arguments to ensure a 64-bit JVM is always used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2016-01-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089518#comment-15089518
 ] 

Akira AJISAKA commented on HADOOP-11875:


The javadoc error was caused by YARN-4438 and I asked the assignee to fix this.

> [JDK8] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
> Attachments: build_error_dump.txt
>
>
> From JDK8, _ as a one-character identifier is disallowed. Currently Web UI 
> uses it. We should fix them to compile with JDK8. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2016-01-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089497#comment-15089497
 ] 

Akira AJISAKA commented on HADOOP-11875:


bq. Are you sure it's Java 8 which forbids this, or merely warns about it?
Java 8 merely warns about it.

Now trunk build is failing because of the following doc error.
{code}
[ERROR] 
/Users/aajisaka/git/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/src/main/java/org/apache/hadoop/yarn/server/resourcemanager/ResourceManager.java:1009:
 error: exception not thrown: java.lang.Exception
[ERROR] * @throws Exception
[ERROR] ^
{code}

> [JDK8] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
> Attachments: build_error_dump.txt
>
>
> From JDK8, _ as a one-character identifier is disallowed. Currently Web UI 
> uses it. We should fix them to compile with JDK8. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089552#comment-15089552
 ] 

Steve Loughran commented on HADOOP-9844:


Pushed out a new PR, commit #f04ae77015c

# checks for IOE.getMessage() being null, falling back to IOE.toString()
# a final check in {{setupResponse()}} for a null error cause or text, and 
insertion of warning notes. The previous fix should catch it, but this extra 
section ensures that the response fields are never null.
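The first check is essentially the following (illustrative; the helper class 
is an assumption):
{code}
import java.io.IOException;

final class ErrorText {
  private ErrorText() {
  }

  /** Never return a null error text: fall back to toString(). */
  static String of(IOException ioe) {
    return ioe.getMessage() != null ? ioe.getMessage() : ioe.toString();
  }
}
{code}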

> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9844-001.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12556) KafkaSink jar files are created but not copied to target dist

2016-01-08 Thread Babak Behzad (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089578#comment-15089578
 ] 

Babak Behzad commented on HADOOP-12556:
---

Sure, sounds good to me! Thanks [~steve_l]. What do you think [~raviprak] and 
[~aw]?

> KafkaSink jar files are created but not copied to target dist
> -
>
> Key: HADOOP-12556
> URL: https://issues.apache.org/jira/browse/HADOOP-12556
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Babak Behzad
>Assignee: Babak Behzad
> Attachments: HADOOP-12556.patch
>
>
> There is a hadoop-kafka artifact missing from hadoop-tools-dist's pom.xml 
> which was causing the compiled Kafka jar files not to be copied to the target 
> dist directory. The new patch adds this in order to complete this fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12234) Web UI Framable Page

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089358#comment-15089358
 ] 

Steve Loughran commented on HADOOP-12234:
-

reviewing this, I am pleased to see that we don't need to care about IE7 any 
more. Which is good, as nobody was going to test it anyway.

a filter in hadoop-common seems the best place for it. The main issue is: what 
turns it on and where? I'm with Haohui here: make it something projects 
explicitly turn on/off if they choose. HDFS's "part of a management console" 
needs are different from a YARN app's, where that's not a perceived use case.

On that topic, we'd probably recommend that YARN apps use it too, wouldn't we? 
Or at least have the RM proxy add it when filtering requests, which would give 
it to the apps automatically.
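For reference, a minimal sketch of such a filter (the class name and the 
hard-coded SAMEORIGIN policy are assumptions; an actual patch would likely 
make the policy configurable):
{code}
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class XFrameOptionsFilter implements Filter {
  @Override
  public void init(FilterConfig conf) {
  }

  @Override
  public void doFilter(ServletRequest req, ServletResponse res,
      FilterChain chain) throws IOException, ServletException {
    // SAMEORIGIN allows framing only by pages served from the same origin.
    ((HttpServletResponse) res).setHeader("X-Frame-Options", "SAMEORIGIN");
    chain.doFilter(req, res);
  }

  @Override
  public void destroy() {
  }
}
{code}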

> Web UI Framable Page
> 
>
> Key: HADOOP-12234
> URL: https://issues.apache.org/jira/browse/HADOOP-12234
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Appy
>Assignee: Appy
> Attachments: HADOOP-12234-v2-master.patch, 
> HADOOP-12234-v3-master.patch, HADOOP-12234.patch
>
>
> The web UIs do not include the "X-Frame-Options" header to prevent the pages 
> from being framed from another site.  
> Reference:
> https://www.owasp.org/index.php/Clickjacking
> https://www.owasp.org/index.php/Clickjacking_Defense_Cheat_Sheet
> https://developer.mozilla.org/en-US/docs/Web/HTTP/X-Frame-Options



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12698) Set default Docker build uses JDK7

2016-01-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089444#comment-15089444
 ] 

Allen Wittenauer commented on HADOOP-12698:
---

JAVA_HOME also needs to get set.

> Set default Docker build uses JDK7
> --
>
> Key: HADOOP-12698
> URL: https://issues.apache.org/jira/browse/HADOOP-12698
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>Priority: Minor
>  Labels: build, docker
> Attachments: HADOOP-12698.01.patch
>
>
> The default JDK of the build environment created by {{start-build-env.sh}} is 
> JDK8 as of HADOOP-12562. Since the current Hadoop trunk cannot be built with 
> JDK8 (HADOOP-11875), it is better to set the default JDK to JDK7 for now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2016-01-08 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-11875:

Attachment: build_error_dump.txt

> [JDK8] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
> Attachments: build_error_dump.txt
>
>
> From JDK8, _ as a one-character identifier is disallowed. Currently Web UI 
> uses it. We should fix them to compile with JDK8. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12253) ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089345#comment-15089345
 ] 

Steve Loughran commented on HADOOP-12253:
-

Looking at this code, {{ugi.getGroupNames().length < 1 ? null : 
ugi.getGroupNames()[0];}} recurs in 3 places. It should be factored out into a 
method such as "getFirstGroup(ugi)". Make it static and you can then add the 
test that this patch needs.
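A sketch of the suggested helper (the holding class is hypothetical):
{code}
import org.apache.hadoop.security.UserGroupInformation;

final class UgiGroups {
  private UgiGroups() {
  }

  /** First group of the UGI, or null if the user belongs to no groups. */
  static String getFirstGroup(UserGroupInformation ugi) {
    String[] groups = ugi.getGroupNames();
    return groups.length == 0 ? null : groups[0];
  }
}
{code}
Being static, it can be exercised directly by a unit test with a UGI that has 
no groups, which is exactly the failure mode in this report.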

> ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0
> 
>
> Key: HADOOP-12253
> URL: https://issues.apache.org/jira/browse/HADOOP-12253
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.6.0
> Environment: hadoop 2.6.0   hive 1.1.0 tez0.7  cenos6.4
>Reporter: tangjunjie
>Assignee: Ajith S
> Attachments: HADOOP-12253.patch
>
>
> When I enable HDFS federation and run a query on Hive on Tez, it throws an 
> exception:
> {noformat}
> 8.784 PM  WARNorg.apache.hadoop.security.UserGroupInformation No 
> groups available for user tangjijun
> 3:12:28.784 PMERROR   org.apache.hadoop.hive.ql.exec.Task Failed 
> to execute tez graph.
> java.lang.ArrayIndexOutOfBoundsException: 0
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem$InternalDirOfViewFs.getFileStatus(ViewFileSystem.java:771)
>   at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.getFileStatus(ViewFileSystem.java:359)
>   at 
> org.apache.tez.client.TezClientUtils.checkAncestorPermissionsForAllUsers(TezClientUtils.java:955)
>   at 
> org.apache.tez.client.TezClientUtils.setupTezJarsLocalResources(TezClientUtils.java:184)
>   at 
> org.apache.tez.client.TezClient.getTezJarResources(TezClient.java:787)
>   at org.apache.tez.client.TezClient.start(TezClient.java:337)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionState.open(TezSessionState.java:191)
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezTask.updateSession(TezTask.java:234)
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:136)
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
>   at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1640)
>   at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1399)
>   at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1183)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
>   at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1044)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:144)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:69)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:196)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
>   at 
> org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:208)
>   at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}
> Digging into the issue, I found the code snippet in ViewFileSystem.java as 
> follows:
> {noformat}
>  @Override
> public FileStatus getFileStatus(Path f) throws IOException {
>   checkPathIsSlash(f);
>   return new FileStatus(0, true, 0, 0, creationTime, creationTime,
>   PERMISSION_555, ugi.getUserName(), ugi.getGroupNames()[0],
>   new Path(theInternalDir.fullPath).makeQualified(
>   myUri, ROOT_PATH));
> }
> {noformat}
> If the node in the cluster does not have a user like tangjijun, 
> ugi.getGroupNames()[0] will throw ArrayIndexOutOfBoundsException, because no 
> user means no group.
> I created the user tangjijun on that node, and the job then executed normally.
> I think this code should check whether ugi.getGroupNames() is empty. When it 
> is empty, print some log message instead of throwing an exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11262) Enable YARN to use S3A

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089423#comment-15089423
 ] 

Steve Loughran commented on HADOOP-11262:
-

Actually, cut the copyright 2015 line entirely. It goes into NOTICE.TXT and is 
updated in one place only. Thanks.

> Enable YARN to use S3A 
> ---
>
> Key: HADOOP-11262
> URL: https://issues.apache.org/jira/browse/HADOOP-11262
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Thomas Demoor
>Assignee: Pieter Reuse
>  Labels: amazon, s3
> Attachments: HADOOP-11262-2.patch, HADOOP-11262-3.patch, 
> HADOOP-11262-4.patch, HADOOP-11262-5.patch, HADOOP-11262-6.patch, 
> HADOOP-11262-7.patch, HADOOP-11262-8.patch, HADOOP-11262.patch
>
>
> Uses DelegateToFileSystem to expose S3A as an AbstractFileSystem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12253) ViewFileSystem getFileStatus java.lang.ArrayIndexOutOfBoundsException: 0

2016-01-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089491#comment-15089491
 ] 

Hadoop QA commented on HADOOP-12253:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 35s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
17s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 32s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 16s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 17s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-common-project/hadoop-common (total was 64, now 67). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 41s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 50s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 73m 41s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
| JDK v1.7.0_91 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12749752/HADOOP-12253.patch |
| JIRA Issue | HADOOP-12253 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  

[jira] [Commented] (HADOOP-12689) S3 filesystem operations stopped working correctly

2016-01-08 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089524#comment-15089524
 ] 

Ravi Prakash commented on HADOOP-12689:
---

Steve! Your -1 here is completely unwarranted.
1. I don't remember a resolution overriding the ability of committers to commit
patches without tests. If all patches must contain tests, I'm happy to go about
-1ing all the JIRAs that have already been committed without tests.
2. I shouldn't have to point out the hypocrisy here. *YOU* yourself committed 
HADOOP-10542 without tests.

I'd ask you to reconsider your -1. If you insist, I'll roll the patch back.

As it happens, Matt *IS* working on adding tests. So in this case "later" did
NOT mean "never". Thanks.


> S3 filesystem operations stopped working correctly
> --
>
> Key: HADOOP-12689
> URL: https://issues.apache.org/jira/browse/HADOOP-12689
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Matthew Paduano
>Assignee: Matthew Paduano
>  Labels: S3
> Fix For: 2.8.0
>
> Attachments: HADOOP-12689.01.patch
>
>
> HADOOP-10542 was resolved by replacing "return null;" with throwing
> IOException. This causes several S3 filesystem operations to fail (possibly
> more code is expecting that null return value; these are just the calls I
> noticed):
> S3FileSystem.getFileStatus() (which no longer raises FileNotFoundException 
> but instead IOException)
> FileSystem.exists() (which no longer returns false but instead raises 
> IOException)
> S3FileSystem.create() (which no longer succeeds but instead raises 
> IOException)
> Run command:
> hadoop distcp hdfs://localhost:9000/test s3://xxx:y...@com.bar.foo/
> Resulting stack trace:
> 2015-12-11 10:04:34,030 FATAL [IPC Server handler 6 on 44861] 
> org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: 
> attempt_1449826461866_0005_m_06_0 - exited : java.io.IOException: /test 
> doesn't exist
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.get(Jets3tFileSystemStore.java:170)
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.retrieveINode(Jets3tFileSystemStore.java:221)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at com.sun.proxy.$Proxy17.retrieveINode(Unknown Source)
> at org.apache.hadoop.fs.s3.S3FileSystem.getFileStatus(S3FileSystem.java:340)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:230)
> at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:50)
> at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
> at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
> at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
> changing the "raise IOE..." to "return null" fixes all of the above code 
> sites and allows distcp to succeed.
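
To illustrate the failure mode in the description above:
{{FileSystem.exists()}} maps {{FileNotFoundException}} from
{{getFileStatus()}} to {{false}} but lets any other {{IOException}} propagate.
A minimal sketch of that interaction (simplified, not the actual Hadoop
source):

{code:java}
import java.io.FileNotFoundException;
import java.io.IOException;

// Simplified sketch, not the actual Hadoop source.
class ExistsSketch {

  // exists() treats FileNotFoundException as "file missing"; any other
  // IOException propagates to the caller.
  static boolean exists(String path) throws IOException {
    try {
      getFileStatus(path);
      return true;
    } catch (FileNotFoundException e) {
      return false;
    }
  }

  // After HADOOP-10542 the store throws a plain IOException for a missing
  // object instead of returning null, so exists() no longer returns false --
  // the exception escapes and distcp fails.
  static Object getFileStatus(String path) throws IOException {
    throw new IOException(path + " doesn't exist");
  }
}
{code}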



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9844) NPE when trying to create an error message response of RPC

2016-01-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089533#comment-15089533
 ] 

Steve Loughran commented on HADOOP-9844:


Reviewing the stack traces and causes: {{setupResponse()}} assumes that on an
RPC failure it has a non-null message, but {{doSaslReply(Message message)}}
doesn't always provide one; it should set one up. There's also another place
that calls IOE.getMessage() for the message text, which should fall back to
IOE.toString() when the message is null.

I'll do that and add a final sanity check in {{setupResponse()}} that sets the
error text to "error" if nothing else is available.

> NPE when trying to create an error message response of RPC
> --
>
> Key: HADOOP-9844
> URL: https://issues.apache.org/jira/browse/HADOOP-9844
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-9844-001.patch
>
>
> I'm seeing an NPE which is raised when the server is trying to create an 
> error response to send back to the caller and there is no error text.
> The root cause is probably somewhere in SASL, but sending something back to 
> the caller would seem preferable to NPE-ing server-side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11875) [JDK8] Renaming _ as a one-character identifier to another identifier

2016-01-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11875?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15089531#comment-15089531
 ] 

Akira AJISAKA commented on HADOOP-11875:


bq. Java8 merely warns about it.
FYI: Java9 forbids it. 
https://blogs.oracle.com/sundararajan/entry/underscore_is_a_keyword_in
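
For illustration, the kind of rename involved (a hypothetical example, not the
actual Web UI code):

{code:java}
// Hypothetical example; the real occurrences are in the Hadoop Web UI code.
// JDK 8 warns on '_' as a one-character identifier; JDK 9 rejects it.
class UnderscoreRename {
  void render() {
    // int _ = 0;      // warning on JDK 8, compile error on JDK 9
    int unused = 0;    // renamed to a legal identifier
    System.out.println(unused);
  }
}
{code}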

> [JDK8] Renaming _ as a one-character identifier to another identifier
> -
>
> Key: HADOOP-11875
> URL: https://issues.apache.org/jira/browse/HADOOP-11875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>  Labels: newbie
> Attachments: build_error_dump.txt
>
>
> From JDK8, _ as a one-character identifier is disallowed. Currently Web UI 
> uses it. We should fix them to compile with JDK8. 
> https://bugs.openjdk.java.net/browse/JDK-8061549



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)