[jira] [Commented] (HADOOP-12545) Hadoop Javadoc has broken link for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider and DistCp

2015-11-13 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003768#comment-15003768
 ] 

Akira AJISAKA commented on HADOOP-12545:


+1

> Hadoop Javadoc has broken link for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider and DistCp
> 
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch
>
>
> 1) Open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList"; the page shows "This page can’t be displayed"
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12569) ZKFC should stop namenode before itself quit in some circumstances

2015-11-13 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-12569:


 Summary: ZKFC should stop namenode before itself quit in some 
circumstances
 Key: HADOOP-12569
 URL: https://issues.apache.org/jira/browse/HADOOP-12569
 Project: Hadoop Common
  Issue Type: Bug
  Components: ha
Affects Versions: 2.6.0
Reporter: Tao Jie


We have encountered the following HA scenario:
NN1 (active) and zkfc1 on node1;
NN2 (standby) and zkfc2 on node2.
1. The network on node1 goes down, and NN2 becomes active. On node1, zkfc1 kills 
itself since it cannot connect to ZooKeeper, leaving NN1 still running.
2. Several minutes later, the network on node1 recovers. NN1 is running but out 
of the failover controller's control, so NN1 and NN2 both run as active NNs.
Perhaps ZKFC should stop the NameNode before quitting in such circumstances.
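The proposed ordering can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical and do not correspond to real Hadoop APIs:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed ZKFC shutdown ordering; these names
// are illustrative only, not real Hadoop classes.
public class ZkfcShutdownSketch {
    // Records shutdown actions in the order they would be taken.
    final List<String> actions = new ArrayList<>();

    // Called when the ZooKeeper session is irrecoverably lost.
    void onZkSessionLost(boolean localNameNodeIsActive) {
        if (localNameNodeIsActive) {
            // Stop (or fence) the local NameNode first, so it cannot keep
            // serving as active after its failover controller dies.
            actions.add("stop-local-namenode");
        }
        // Only then does the ZKFC process itself exit.
        actions.add("zkfc-exit");
    }
}
```

With this ordering, a partitioned node whose ZKFC dies cannot later rejoin the cluster as a second active NameNode.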






[jira] [Commented] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004227#comment-15004227
 ] 

Hudson commented on HADOOP-12545:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #665 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/665/])
HADOOP-12545. Hadoop javadoc has broken links for AccessControlList, (aajisaka: 
rev f94d89270464ea8e0d19e26e425835cd6a5dc5de)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/package-info.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch, HADOOP-12545.branch-2.7.patch
>
>
> 1) Open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList"; the page shows "This page can’t be displayed"
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.





[jira] [Commented] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004132#comment-15004132
 ] 

Hudson commented on HADOOP-12545:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #1401 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1401/])
HADOOP-12545. Hadoop javadoc has broken links for AccessControlList, (aajisaka: 
rev f94d89270464ea8e0d19e26e425835cd6a5dc5de)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/package-info.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/package-info.java


> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch, HADOOP-12545.branch-2.7.patch
>
>
> 1) Open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList"; the page shows "This page can’t be displayed"
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.





[jira] [Commented] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004103#comment-15004103
 ] 

Hudson commented on HADOOP-12545:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #604 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/604/])
HADOOP-12545. Hadoop javadoc has broken links for AccessControlList, (aajisaka: 
rev f94d89270464ea8e0d19e26e425835cd6a5dc5de)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch, HADOOP-12545.branch-2.7.patch
>
>
> 1) Open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList"; the page shows "This page can’t be displayed"
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.





[jira] [Commented] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004181#comment-15004181
 ] 

Hudson commented on HADOOP-12545:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk #2606 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2606/])
HADOOP-12545. Hadoop javadoc has broken links for AccessControlList, (aajisaka: 
rev f94d89270464ea8e0d19e26e425835cd6a5dc5de)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/package-info.java


> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch, HADOOP-12545.branch-2.7.patch
>
>
> 1) Open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList"; the page shows "This page can’t be displayed"
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.





[jira] [Updated] (HADOOP-10787) Rename/remove non-HADOOP_*, etc from the shell scripts

2015-11-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10787:
--
Fix Version/s: 3.0.0

> Rename/remove non-HADOOP_*, etc from the shell scripts
> --
>
> Key: HADOOP-10787
> URL: https://issues.apache.org/jira/browse/HADOOP-10787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>  Labels: scripts
> Fix For: 3.0.0
>
> Attachments: HADOOP-10787.00.patch, HADOOP-10787.01.patch, 
> HADOOP-10787.02.patch, HADOOP-10787.03.patch, HADOOP-10787.04.patch, 
> HADOOP-10787.05.patch
>
>
> We should make an effort to clean up the shell env var name space by removing 
> unsafe variables.  See comments for list.





[jira] [Commented] (HADOOP-9261) S3n filesystem can move a directory under itself -and so lose data

2015-11-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9261?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004498#comment-15004498
 ] 

Allen Wittenauer commented on HADOOP-9261:
--

Moved from the release note field:

fixed in HADOOP-9258

> S3n filesystem can move a directory under itself -and so lose data
> --
>
> Key: HADOOP-9261
> URL: https://issues.apache.org/jira/browse/HADOOP-9261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 1.1.1, 2.0.2-alpha
> Environment: Testing against S3 bucket stored on US West (Read after 
> Write consistency; eventual for read-after-delete or write-after-write)
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0
>
> Attachments: HADOOP-9261-2.patch, HADOOP-9261.patch
>
>
> The S3N filesystem {{rename()}} doesn't check that the destination 
> directory is not a child or other descendant of the source directory. The 
> files are copied to the new destination, then the source directory is 
> recursively deleted, losing data.
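The missing guard is a simple path-containment check. A hedged sketch follows, using plain '/'-separated strings; the real fix would live inside the S3N FileSystem implementation and operate on Hadoop Path objects:

```java
// Sketch of the guard rename() should apply: reject a destination that is
// the source itself or any descendant of it, instead of copying into the
// child and then recursively deleting the source.
public class RenameGuardSketch {
    static boolean isSelfOrDescendant(String src, String dst) {
        // Normalize with trailing slashes so "/somedir" does not match
        // a sibling like "/somedirextra".
        String s = src.endsWith("/") ? src : src + "/";
        String d = dst.endsWith("/") ? dst : dst + "/";
        return d.startsWith(s);
    }

    // rename() would refuse to proceed when the check fires.
    static boolean safeToRename(String src, String dst) {
        return !isSelfOrDescendant(src, dst);
    }
}
```

For the example in the report, `safeToRename("/somedir", "/somedir/childdir")` is false, so the destructive copy-then-delete never starts.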





[jira] [Updated] (HADOOP-9261) S3n filesystem can move a directory under itself -and so lose data

2015-11-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9261:
-
Release Note:   (was: fixed in HADOOP-9258)

> S3n filesystem can move a directory under itself -and so lose data
> --
>
> Key: HADOOP-9261
> URL: https://issues.apache.org/jira/browse/HADOOP-9261
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 1.1.1, 2.0.2-alpha
> Environment: Testing against S3 bucket stored on US West (Read after 
> Write consistency; eventual for read-after-delete or write-after-write)
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0
>
> Attachments: HADOOP-9261-2.patch, HADOOP-9261.patch
>
>
> The S3N filesystem {{rename()}} doesn't check that the destination 
> directory is not a child or other descendant of the source directory. The 
> files are copied to the new destination, then the source directory is 
> recursively deleted, losing data.





[jira] [Updated] (HADOOP-12192) update releasedocmaker commands

2015-11-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12192:
--
Summary: update releasedocmaker commands  (was: update releasedocmaker 
commands for yetus)

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use Yetus, then the 
> pom.xml that runs releasedocmaker will need to get updated as well.





[jira] [Commented] (HADOOP-12192) update releasedocmaker commands

2015-11-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004386#comment-15004386
 ] 

Allen Wittenauer commented on HADOOP-12192:
---

It was one of them that got merged from the Yetus branch.

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Updated] (HADOOP-10787) Rename/remove non-HADOOP_*, etc from the shell scripts

2015-11-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10787:
--
Release Note: 

The following shell environment variables have been deprecated:

| Old | New |
|: |: |
| DEFAULT_LIBEXEC_DIR | HADOOP_DEFAULT_LIBEXEC_DIR |
| SLAVE_NAMES | HADOOP_SLAVE_NAMES |
| TOOL_PATH | HADOOP_TOOLS_PATH |

In addition:
* DEFAULT_LIBEXEC_DIR will NOT be automatically transitioned to 
HADOOP_DEFAULT_LIBEXEC_DIR and will require changes to any scripts setting that 
value.  A warning will be printed to the screen if DEFAULT_LIBEXEC_DIR has been 
configured.
* HADOOP_TOOLS_PATH is now properly handled as a multi-valued, Java 
classpath-style variable.  Previously, multiple values assigned to TOOL_PATH 
did not work in a predictable manner.
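For scripts that set the old names, migration might look like the fragment below (a sketch of a hadoop-env.sh override; the example paths, and the assumption that multiple HADOOP_TOOLS_PATH entries are joined classpath-style with ':', are illustrative, not taken from the release note):

```shell
# Old (deprecated; DEFAULT_LIBEXEC_DIR now only triggers a warning and is
# NOT transitioned automatically):
# DEFAULT_LIBEXEC_DIR=/opt/hadoop/libexec
# TOOL_PATH=/opt/hadoop/share/hadoop/tools/lib/*

# New names:
HADOOP_DEFAULT_LIBEXEC_DIR=/opt/hadoop/libexec
# Multi-valued, Java classpath-style:
HADOOP_TOOLS_PATH="/opt/hadoop/share/hadoop/tools/lib/*:/opt/extra/jars/*"
```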


> Rename/remove non-HADOOP_*, etc from the shell scripts
> --
>
> Key: HADOOP-10787
> URL: https://issues.apache.org/jira/browse/HADOOP-10787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
>  Labels: scripts
> Fix For: 3.0.0
>
> Attachments: HADOOP-10787.00.patch, HADOOP-10787.01.patch, 
> HADOOP-10787.02.patch, HADOOP-10787.03.patch, HADOOP-10787.04.patch, 
> HADOOP-10787.05.patch
>
>
> We should make an effort to clean up the shell env var name space by removing 
> unsafe variables.  See comments for list.





[jira] [Commented] (HADOOP-12192) update releasedocmaker commands

2015-11-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004508#comment-15004508
 ] 

Allen Wittenauer commented on HADOOP-12192:
---

Not yet.  I'd rather do that when we have a release since I'll also nuke the 
other parts out of dev-support. But applying this patch and putting escapes in 
the one release note (HDFS-9278) that broke mvn site at least lets things build 
again.

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Updated] (HADOOP-12192) update releasedocmaker commands

2015-11-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12192:
--
Description: If HADOOP-12135 gets committed and Hadoop switches to use 
Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need to 
get updated as well.  (was: If HADOOP-12135 gets committed and Hadoop switches 
to use Yetus, then the pom.xml that runs releasedocmaker will need to get 
updated as well.)

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Critical
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Updated] (HADOOP-12192) update releasedocmaker commands

2015-11-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12192:
--
Priority: Blocker  (was: Critical)

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Commented] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004336#comment-15004336
 ] 

Hudson commented on HADOOP-12545:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2541 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2541/])
HADOOP-12545. Hadoop javadoc has broken links for AccessControlList, (aajisaka: 
rev f94d89270464ea8e0d19e26e425835cd6a5dc5de)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/package-info.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java


> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch, HADOOP-12545.branch-2.7.patch
>
>
> 1) Open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList"; the page shows "This page can’t be displayed"
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.





[jira] [Commented] (HADOOP-12192) update releasedocmaker commands

2015-11-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004381#comment-15004381
 ] 

Andrew Wang commented on HADOOP-12192:
--

Do you know which commit this was off-hand? Else I can dig.

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Commented] (HADOOP-12192) update releasedocmaker commands

2015-11-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004502#comment-15004502
 ] 

Andrew Wang commented on HADOOP-12192:
--

I'm cool with that. My attempt at "fixing" would be to apply all the changes 
from Yetus.

Do we have a Yetus release yet? If not, could also pin to a given git hash.

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Commented] (HADOOP-12192) update releasedocmaker commands

2015-11-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004367#comment-15004367
 ] 

Allen Wittenauer commented on HADOOP-12192:
---

ping [~andrew.wang], since it looks like he made the commit that broke 
-Preleasedocs.

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Commented] (HADOOP-12192) update releasedocmaker commands

2015-11-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004477#comment-15004477
 ] 

Allen Wittenauer commented on HADOOP-12192:
---

I'm tempted to replace this with a wrapper to download releasedocmaker from 
yetus, since the one in trunk has other bugs. :(

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Updated] (HADOOP-9265) S3 blockstore filesystem breaks part of the Filesystem contract

2015-11-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9265:
-
Release Note:   (was: fixed in HADOOP-9258)

> S3 blockstore filesystem breaks part of the Filesystem contract
> ---
>
> Key: HADOOP-9265
> URL: https://issues.apache.org/jira/browse/HADOOP-9265
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 1.1.1, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0
>
> Attachments: HADOOP-9265.patch, HADOOP-9265.patch
>
>
> The extended tests of HADOOP-9258 show that S3 fails at things we have 
> always expected an FS to do:
> # {{getFileStatus("/")}} should return a {{FileStatus}}; currently it throws 
> a {{FileNotFoundException}}.
> # {{rename("somedir","somedir/childdir")}} should fail; currently it returns 
> true after deleting all the data in {{somedir/}}.





[jira] [Commented] (HADOOP-9265) S3 blockstore filesystem breaks part of the Filesystem contract

2015-11-13 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004499#comment-15004499
 ] 

Allen Wittenauer commented on HADOOP-9265:
--

Removed from the release note field:
fixed in HADOOP-9258

> S3 blockstore filesystem breaks part of the Filesystem contract
> ---
>
> Key: HADOOP-9265
> URL: https://issues.apache.org/jira/browse/HADOOP-9265
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 1.1.1, 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 3.0.0
>
> Attachments: HADOOP-9265.patch, HADOOP-9265.patch
>
>
> The extended tests of HADOOP-9258 show that S3 fails at things we have 
> always expected an FS to do:
> # {{getFileStatus("/")}} should return a {{FileStatus}}; currently it throws 
> a {{FileNotFoundException}}.
> # {{rename("somedir","somedir/childdir")}} should fail; currently it returns 
> true after deleting all the data in {{somedir/}}.





[jira] [Updated] (HADOOP-11625) Minor fixes to command manual & SLA doc

2015-11-13 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11625:
--
Release Note:   (was: Just some minor printography fixes.)

> Minor fixes to command manual & SLA doc
> ---
>
> Key: HADOOP-11625
> URL: https://issues.apache.org/jira/browse/HADOOP-11625
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Fix For: 3.0.0
>
> Attachments: HADOOP-11625-00.patch
>
>






[jira] [Commented] (HADOOP-12568) Update core-default.xml to describe posixGroups support

2015-11-13 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004831#comment-15004831
 ] 

Daniel Templeton commented on HADOOP-12568:
---

Language edits:

{noformat}
If the LDAP server supports posixGroups, Hadoop can enable the feature by
setting the value of this property to "posixAccount" and the value of the
hadoop.security.group.mapping.ldap.search.filter.group property to "posixGroup".
{noformat}
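As a concrete illustration, the resulting core-site.xml overrides might look like the fragment below. This is a sketch: the property names are the LdapGroupsMapping filter settings, but the exact filter values (the common posixAccount/posixGroup object classes) should be verified against core-default.xml and the HADOOP-9477 patch:

```xml
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.user</name>
  <value>(&amp;(objectClass=posixAccount)(uid={0}))</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.search.filter.group</name>
  <value>(objectClass=posixGroup)</value>
</property>
```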

> Update core-default.xml to describe posixGroups support
> ---
>
> Key: HADOOP-12568
> URL: https://issues.apache.org/jira/browse/HADOOP-12568
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: group, mappings, supportability
> Attachments: HADOOP-12568.001.patch
>
>
> After HADOOP-9477, LdapGroupsMapping supports the posixGroups mapping 
> service. However, core-default.xml was not updated to explain how to 
> configure it to enable this feature. This JIRA is filed to describe how to 
> enable posixGroups for users.





[jira] [Commented] (HADOOP-10584) ActiveStandbyElector goes down if ZK quorum become unavailable

2015-11-13 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004828#comment-15004828
 ] 

Karthik Kambatla commented on HADOOP-10584:
---

Based on my recollection from a while ago and briefly looking at the attached 
prelim patch, there are a couple of issues here:
# When the RM loses its connection while executing an operation, the operation 
just fails without enough retries. The patch adds a retry loop to handle this.
# When the RM loses its connection to ZK, it doesn't give up being Active. This 
leads to the RM continuing to serve apps and nodes connected to it. The patch, 
in addition to rejoining the election, has the client (ZKFC/RM) enter neutral 
mode. Today, the RM doesn't do anything on {{enterNeutralMode}}, but of course 
this can be improved going forward. 

I won't be able to work on this for the next month or so. If anyone has cycles, 
please feel free to take it up. 

> ActiveStandbyElector goes down if ZK quorum become unavailable
> --
>
> Key: HADOOP-10584
> URL: https://issues.apache.org/jira/browse/HADOOP-10584
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ha
>Affects Versions: 2.4.0
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Critical
> Attachments: hadoop-10584-prelim.patch, rm.log
>
>
> ActiveStandbyElector retries operations for a few times. If the ZK quorum 
> itself is down, it goes down and the daemons will have to be brought up 
> again. 
> Instead, it should log the fact that it is unable to talk to ZK, call 
> becomeStandby on its client, and continue to attempt connecting to ZK.





[jira] [Updated] (HADOOP-12568) Update core-default.xml to describe posixGroups support

2015-11-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12568:
-
Attachment: HADOOP-12568.002.patch

Thanks [~templedf] for the language correction. Here's the rev2 based on your 
comments.

> Update core-default.xml to describe posixGroups support
> ---
>
> Key: HADOOP-12568
> URL: https://issues.apache.org/jira/browse/HADOOP-12568
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: group, mappings, supportability
> Attachments: HADOOP-12568.001.patch, HADOOP-12568.002.patch
>
>
> After HADOOP-9477, LdapGroupsMapping supports posixGroups mapping service. 
> However, core-default.xml was not updated to detail how to configure in order 
> to enable this feature. This JIRA is filed to describe how to enable 
> posixGroups for users.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12374) Description of hdfs expunge command is confusing

2015-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12374:
---
Fix Version/s: 2.8.0

> Description of hdfs expunge command is confusing
> 
>
> Key: HADOOP-12374
> URL: https://issues.apache.org/jira/browse/HADOOP-12374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, trash
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: docuentation, newbie, suggestions, trash
> Fix For: 2.8.0
>
> Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, 
> HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on 
> the Trash feature.
> This description is confusing. It gives the user the impression that this 
> command will empty the trash, but it actually only removes old checkpoints. If 
> the user sets a long value for fs.trash.interval, this command will not remove 
> anything until checkpoints have existed longer than that value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12415) hdfs and nfs builds broken on -missing compile-time dependency on netty

2015-11-13 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005048#comment-15005048
 ] 

Konstantin Boudnik commented on HADOOP-12415:
-

I think we're good here. Sorry, I dropped the ball - I should have committed 
this a while ago. Will do over the weekend.

> hdfs and nfs builds broken on -missing compile-time dependency on netty
> ---
>
> Key: HADOOP-12415
> URL: https://issues.apache.org/jira/browse/HADOOP-12415
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
> Environment: Bigtop, plain Linux distro of any kind
>Reporter: Konstantin Boudnik
>Assignee: Tom Zeng
> Attachments: HADOOP-12415.patch
>
>
> As discovered in BIGTOP-2049 {{hadoop-nfs}} module compilation is broken. 
> Looks like that HADOOP-11489 is the root-cause of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12570) HDFS Secure Mode Documentation updates

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005095#comment-15005095
 ] 

Hadoop QA commented on HADOOP-12570:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
46s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 55s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
35s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 19s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 44s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 78m 53s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 13s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 3s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 225m 29s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.metrics2.impl.TestMetricsSystemImpl 
|
|   | hadoop.ipc.TestRPCWaitForProxy |
|   | hadoop.hdfs.TestReadStripedFileWithMissingBlocks |
|   | 

[jira] [Commented] (HADOOP-12570) HDFS Secure Mode Documentation updates

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005123#comment-15005123
 ] 

Hadoop QA commented on HADOOP-12570:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 26s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 58s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 51m 47s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 40s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 52m 53s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 20s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 176m 39s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_60 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
|   | hadoop.hdfs.server.namenode.ha.TestDNFencing |
|   | hadoop.hdfs.server.blockmanagement.TestNodeCount |
|   | 

[jira] [Commented] (HADOOP-12374) Description of hdfs expunge command is confusing

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005149#comment-15005149
 ] 

Hudson commented on HADOOP-12374:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8803 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8803/])
Move HADOOP-12374 from 2.8.0 to 2.7.3 in CHANGES.txt. (aajisaka: rev 
47c79a2a4d265feff7bd997bf07473eeb74c1c4b)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Description of hdfs expunge command is confusing
> 
>
> Key: HADOOP-12374
> URL: https://issues.apache.org/jira/browse/HADOOP-12374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, trash
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: docuentation, newbie, suggestions, trash
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, 
> HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on 
> the Trash feature.
> This description is confusing. It gives the user the impression that this 
> command will empty the trash, but it actually only removes old checkpoints. If 
> the user sets a long value for fs.trash.interval, this command will not remove 
> anything until checkpoints have existed longer than that value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12570) HDFS Secure Mode Documentation updates

2015-11-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-12570:
---
Issue Type: Improvement  (was: Bug)

> HDFS Secure Mode Documentation updates
> --
>
> Key: HADOOP-12570
> URL: https://issues.apache.org/jira/browse/HADOOP-12570
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9254.01.patch, HDFS-9254.02.patch, 
> HDFS-9254.03.patch, HDFS-9254.04.patch
>
>
> Some Kerberos configuration parameters are not documented well enough. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12568) Update core-default.xml to describe posixGroups support

2015-11-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004945#comment-15004945
 ] 

Hadoop QA commented on HADOOP-12568:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
25s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 27s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 19s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 38s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 42s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 45m 9s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-11-13 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12772326/HADOOP-12568.002.patch
 |
| JIRA Issue | HADOOP-12568 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux aecfe2521db7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-fa12328/precommit/personality/hadoop.sh
 |
| git revision | trunk / f94d892 |
| unit | 

[jira] [Moved] (HADOOP-12570) HDFS Secure Mode Documentation updates

2015-11-13 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal moved HDFS-9254 to HADOOP-12570:
--

Affects Version/s: (was: 2.7.1)
   2.7.1
 Target Version/s: 2.8.0  (was: 2.8.0)
  Component/s: (was: documentation)
   documentation
  Key: HADOOP-12570  (was: HDFS-9254)
  Project: Hadoop Common  (was: Hadoop HDFS)

> HDFS Secure Mode Documentation updates
> --
>
> Key: HADOOP-12570
> URL: https://issues.apache.org/jira/browse/HADOOP-12570
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9254.01.patch, HDFS-9254.02.patch, 
> HDFS-9254.03.patch, HDFS-9254.04.patch
>
>
> Some Kerberos configuration parameters are not documented well enough. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12570) HDFS Secure Mode Documentation updates

2015-11-13 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005146#comment-15005146
 ] 

Arpit Agarwal commented on HADOOP-12570:


The failures look unrelated to the patch.

> HDFS Secure Mode Documentation updates
> --
>
> Key: HADOOP-12570
> URL: https://issues.apache.org/jira/browse/HADOOP-12570
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9254.01.patch, HDFS-9254.02.patch, 
> HDFS-9254.03.patch, HDFS-9254.04.patch
>
>
> Some Kerberos configuration parameters are not documented well enough. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12374) Description of hdfs expunge command is confusing

2015-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12374:
---
Fix Version/s: 2.7.3

Cherry-picked this to branch-2.7 because this issue only fixes documentation 
and is not likely to cause bugs.

> Description of hdfs expunge command is confusing
> 
>
> Key: HADOOP-12374
> URL: https://issues.apache.org/jira/browse/HADOOP-12374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, trash
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: docuentation, newbie, suggestions, trash
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, 
> HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on 
> the Trash feature.
> This description is confusing. It gives the user the impression that this 
> command will empty the trash, but it actually only removes old checkpoints. If 
> the user sets a long value for fs.trash.interval, this command will not remove 
> anything until checkpoints have existed longer than that value.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12348:

Fix Version/s: 2.7.3

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with the wrong time unit 
> parameter. MetricsSourceAdapter expects jmxCacheTTL in milliseconds, but 
> MetricsSystemImpl passes a value in seconds to the MetricsSourceAdapter 
> constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}
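The mismatch above is a classic seconds-versus-milliseconds bug. A minimal sketch of the conversion the caller needs (class and method names here are illustrative, not the actual fix):

```java
import java.util.concurrent.TimeUnit;

public class TtlUnits {
    // Convert a TTL configured in seconds into the milliseconds that
    // the adapter's jmxCacheTTL arithmetic (Time.now() + jmxCacheTTL) expects.
    static long ttlMillis(long ttlSeconds) {
        return TimeUnit.SECONDS.toMillis(ttlSeconds);
    }

    public static void main(String[] args) {
        // A 10-second TTL must arrive at the constructor as 10000 ms.
        System.out.println(ttlMillis(10));
    }
}
```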



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12482) Race condition in JMX cache update

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12482:

Fix Version/s: 2.7.3

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Fix For: 2.8.0, 3.0.0, 2.7.3
>
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch, HADOOP-12482.005.patch, 
> HADOOP-12482.006.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However the patch introduced a 
> race condition. In updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarilly advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # The metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However because lastRecs is already updated (!= null), getAllMetrics will 
> not be set to true. So updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such case getMBeanInfo() may never be able to 
> retrieve the new record.
> The desired behavior should be that updateJmxCache() will guarantee to call 
> updateInfoCache() once after jmxCacheTTL, if lastRecs has been set to null by 
> updateJmxCache() itself.
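The guarantee described above can be modeled with a small single-threaded sketch. The field names ({{clearedByUs}}) and counter are hypothetical, not the actual MetricsSourceAdapter fix; the point is that a flag recording "this method nulled lastRecs" prevents a concurrent getMetrics() repopulating lastRecs from suppressing the info-cache refresh.

```java
public class JmxCacheSketch {
    private Object lastRecs;
    private boolean clearedByUs;        // hypothetical flag, not the real field
    private long jmxCacheTS;
    private static final long TTL = 10_000; // ms
    int infoCacheUpdates;               // counts stand-in updateInfoCache() calls

    synchronized void updateJmxCache(long now) {
        if (now - jmxCacheTS < TTL) return;     // cache still fresh
        // Refresh if lastRecs is null OR we nulled it ourselves last time;
        // a concurrent getMetrics() can no longer suppress the refresh.
        if (lastRecs == null || clearedByUs) {
            infoCacheUpdates++;                 // stands in for updateInfoCache()
        }
        jmxCacheTS = now;
        lastRecs = null;    // in case regular interval update is not running
        clearedByUs = true; // remember that WE cleared it
    }

    synchronized void getMetrics() {            // publisher thread's effect
        lastRecs = new Object();
    }

    static int run() {
        JmxCacheSketch s = new JmxCacheSketch();
        s.updateJmxCache(TTL);      // first expiry: lastRecs == null, refresh #1
        s.getMetrics();             // interleaved publisher repopulates lastRecs
        s.updateJmxCache(2 * TTL);  // clearedByUs forces refresh #2 (the old
                                    // code would have skipped it here)
        return s.infoCacheUpdates;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```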



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005177#comment-15005177
 ] 

Tsuyoshi Ozawa commented on HADOOP-12348:
-

Cherry-picked this to branch-2.7.

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with the wrong time unit 
> parameter. MetricsSourceAdapter expects jmxCacheTTL in milliseconds, but 
> MetricsSystemImpl passes a value in seconds to the MetricsSourceAdapter 
> constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11361:

Fix Version/s: 2.7.3

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12374) Description of hdfs expunge command is confusing

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005204#comment-15005204
 ] 

Hudson commented on HADOOP-12374:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #1402 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1402/])
Move HADOOP-12374 from 2.8.0 to 2.7.3 in CHANGES.txt. (aajisaka: rev 
47c79a2a4d265feff7bd997bf07473eeb74c1c4b)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Description of hdfs expunge command is confusing
> 
>
> Key: HADOOP-12374
> URL: https://issues.apache.org/jira/browse/HADOOP-12374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, trash
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: docuentation, newbie, suggestions, trash
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, 
> HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on 
> the Trash feature.
> This description is confusing. It gives the user the impression that this 
> command will empty the trash, but actually it only removes old checkpoints. If 
> a user sets a long value for fs.trash.interval, this command will not remove 
> anything until the checkpoints are older than this value.





[jira] [Commented] (HADOOP-12564) Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005232#comment-15005232
 ] 

Tsuyoshi Ozawa commented on HADOOP-12564:
-

[~cote] Thank you for the update. We're almost there. In addition to Akira's 
comment, please check the following comments:

{code:title=TestSequenceFile.java}
   @Test
  public void testCreateWriterOnExistingFile() throws IOException {
{code}
Please fix the indentation (please remove the single whitespace before the @Test annotation).

{code:title=TestVLong.java}
  @Test
  public void testVLong6Bytes() throws IOException {
verifySixOrMoreBytes(6);
  }
  @Test
  public void testVLong7Bytes() throws IOException {
verifySixOrMoreBytes(7);
  }
  @Test
  public void testVLong8Bytes() throws IOException {
verifySixOrMoreBytes(8);
  }
  @Test
  public void testVLongRandom() throws IOException {
{code}
Please add a line break between the test cases to keep the coding style 
consistent.

{code:title=TestTFileSplit.java}
import org.junit.After;
import org.junit.Before;
{code}

{code:title=TestTFileSeqFileComparison.java}
import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;
import static org.junit.Assert.assertFalse;
{code}

Please remove unused imports.



> Upgrade JUnit3 TestCase to JUnit 4 for tests of org.apache.hadoop.io package
> 
>
> Key: HADOOP-12564
> URL: https://issues.apache.org/jira/browse/HADOOP-12564
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Reporter: Dustin Cote
>Assignee: Dustin Cote
>Priority: Trivial
> Attachments: MAPREDUCE-6505-1.patch, MAPREDUCE-6505-2.patch, 
> MAPREDUCE-6505-3.patch, MAPREDUCE-6505-4.patch, MAPREDUCE-6505-5.patch
>
>
> Migrating just the io test cases 





[jira] [Commented] (HADOOP-12374) Description of hdfs expunge command is confusing

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005172#comment-15005172
 ] 

Hudson commented on HADOOP-12374:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2607 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2607/])
Move HADOOP-12374 from 2.8.0 to 2.7.3 in CHANGES.txt. (aajisaka: rev 
47c79a2a4d265feff7bd997bf07473eeb74c1c4b)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Description of hdfs expunge command is confusing
> 
>
> Key: HADOOP-12374
> URL: https://issues.apache.org/jira/browse/HADOOP-12374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, trash
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: docuentation, newbie, suggestions, trash
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, 
> HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on 
> the Trash feature.
> This description is confusing. It gives the user the impression that this 
> command will empty the trash, but actually it only removes old checkpoints. If 
> a user sets a long value for fs.trash.interval, this command will not remove 
> anything until the checkpoints are older than this value.





[jira] [Created] (HADOOP-12571) [JDK8] Remove XX:MaxPermSize setting from pom.xml

2015-11-13 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-12571:
--

 Summary: [JDK8] Remove XX:MaxPermSize setting from pom.xml
 Key: HADOOP-12571
 URL: https://issues.apache.org/jira/browse/HADOOP-12571
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira AJISAKA
Priority: Minor


{code:title=hadoop-project/pom.xml}
-Xmx2048m -XX:MaxPermSize=768m 
-XX:+HeapDumpOnOutOfMemoryError
{code}
{{-XX:MaxPermSize}} is not supported in JDK8. It should be removed after 
dropping support of JDK7.





[jira] [Commented] (HADOOP-11858) [JDK8] Set minimum version of Hadoop 3 to JDK 8

2015-11-13 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11858?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005189#comment-15005189
 ] 

Akira AJISAKA commented on HADOOP-11858:


Can we remove {{-XX:MaxPermSize}} setting from pom.xml in this issue? The 
option is not supported in JDK8. Please see HADOOP-12571 for the detail.

> [JDK8] Set minimum version of Hadoop 3 to JDK 8
> ---
>
> Key: HADOOP-11858
> URL: https://issues.apache.org/jira/browse/HADOOP-11858
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Robert Kanter
>Assignee: Robert Kanter
> Attachments: HADOOP-11858.001.patch, HADOOP-11858.002.patch
>
>
> Set minimum version of trunk to JDK 8





[jira] [Updated] (HADOOP-11771) Configuration::getClassByNameOrNull synchronizes on a static object

2015-11-13 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11771:
-
Description: 
{code}
 IPC Client (1970436060) connection to 
cn106-10.l42scl.hortonworks.com/172.21.128.106:34530 from 
application_1442254312093_2976 [BLOCKED] [DAEMON]
org.apache.hadoop.conf.Configuration.getClassByNameOrNull(String) 
Configuration.java:2117
org.apache.hadoop.conf.Configuration.getClassByName(String) 
Configuration.java:2099
org.apache.hadoop.io.ObjectWritable.loadClass(Configuration, String) 
ObjectWritable.java:373
org.apache.hadoop.io.ObjectWritable.readObject(DataInput, ObjectWritable, 
Configuration) ObjectWritable.java:282
org.apache.hadoop.io.ObjectWritable.readFields(DataInput) ObjectWritable.java:77
org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse() Client.java:1098
org.apache.hadoop.ipc.Client$Connection.run() Client.java:977
{code}


{code}
  private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();

...
 synchronized (CACHE_CLASSES) {
  map = CACHE_CLASSES.get(classLoader);
  if (map == null) {
map = Collections.synchronizedMap(
  new WeakHashMap<String, WeakReference<Class<?>>>());
CACHE_CLASSES.put(classLoader, map);
  }
}
{code}

!configuration-sync-cache.png!

!configuration-cache-bt.png!

  was:
{code}
  private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();

...
 synchronized (CACHE_CLASSES) {
  map = CACHE_CLASSES.get(classLoader);
  if (map == null) {
map = Collections.synchronizedMap(
  new WeakHashMap<String, WeakReference<Class<?>>>());
CACHE_CLASSES.put(classLoader, map);
  }
}
{code}

!configuration-sync-cache.png!

!configuration-cache-bt.png!


> Configuration::getClassByNameOrNull synchronizes on a static object
> ---
>
> Key: HADOOP-11771
> URL: https://issues.apache.org/jira/browse/HADOOP-11771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf, io, ipc
>Reporter: Gopal V
>Priority: Critical
> Attachments: configuration-cache-bt.png, configuration-sync-cache.png
>
>
> {code}
>  IPC Client (1970436060) connection to 
> cn106-10.l42scl.hortonworks.com/172.21.128.106:34530 from 
> application_1442254312093_2976 [BLOCKED] [DAEMON]
> org.apache.hadoop.conf.Configuration.getClassByNameOrNull(String) 
> Configuration.java:2117
> org.apache.hadoop.conf.Configuration.getClassByName(String) 
> Configuration.java:2099
> org.apache.hadoop.io.ObjectWritable.loadClass(Configuration, String) 
> ObjectWritable.java:373
> org.apache.hadoop.io.ObjectWritable.readObject(DataInput, ObjectWritable, 
> Configuration) ObjectWritable.java:282
> org.apache.hadoop.io.ObjectWritable.readFields(DataInput) 
> ObjectWritable.java:77
> org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse() Client.java:1098
> org.apache.hadoop.ipc.Client$Connection.run() Client.java:977
> {code}
> {code}
>   private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
> CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();
> ...
>  synchronized (CACHE_CLASSES) {
>   map = CACHE_CLASSES.get(classLoader);
>   if (map == null) {
> map = Collections.synchronizedMap(
>   new WeakHashMap<String, WeakReference<Class<?>>>());
> CACHE_CLASSES.put(classLoader, map);
>   }
> }
> {code}
> !configuration-sync-cache.png!
> !configuration-cache-bt.png!
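The contention shown in the traces above comes from every class lookup synchronizing on the static CACHE_CLASSES map. As a hedged sketch (not the actual HADOOP-11771 fix), one way to avoid the global lock is a ConcurrentHashMap with computeIfAbsent; note this deliberately drops the WeakHashMap/WeakReference semantics of the real Configuration cache (which let class loaders and classes be unloaded), so it illustrates only the locking change:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: ConcurrentHashMap lets concurrent lookups proceed without the
// global synchronized(CACHE_CLASSES) block. Unlike the real cache, this holds
// strong references, so class loaders cached here can never be collected.
public class ClassCacheSketch {
  private static final Map<ClassLoader, Map<String, Class<?>>> CACHE_CLASSES =
      new ConcurrentHashMap<>();

  public static Class<?> getClassByNameOrNull(ClassLoader loader, String name) {
    // Per-loader map is created at most once, without a shared lock.
    Map<String, Class<?>> map =
        CACHE_CLASSES.computeIfAbsent(loader, l -> new ConcurrentHashMap<>());
    // If the mapping function returns null, no entry is recorded, so lookups
    // of missing classes are retried (the real code caches a negative entry).
    return map.computeIfAbsent(name, n -> {
      try {
        return Class.forName(n, true, loader);
      } catch (ClassNotFoundException e) {
        return null;
      }
    });
  }

  public static void main(String[] args) {
    ClassLoader cl = ClassCacheSketch.class.getClassLoader();
    System.out.println(getClassByNameOrNull(cl, "java.lang.String"));
    System.out.println(getClassByNameOrNull(cl, "no.such.Class"));
  }
}
```

Whether weak references can be preserved while still avoiding the shared lock is exactly the design question the sub-task leaves open.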





[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005179#comment-15005179
 ] 

Tsuyoshi Ozawa commented on HADOOP-12482:
-

Cherrypicked this to branch-2.7 with HADOOP-12348 and HADOOP-11361.

> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Fix For: 2.8.0, 3.0.0, 2.7.3
>
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch, HADOOP-12482.005.patch, 
> HADOOP-12482.006.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However the patch introduced a 
> race condition. In updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarilly advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # The metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However because lastRecs is already updated (!= null), getAllMetrics will 
> not be set to true. So updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such a case, getMBeanInfo() may never be able to 
> retrieve the new record.
> The desired behavior should be that updateJmxCache() will guarantee to call 
> updateInfoCache() once after jmxCacheTTL, if lastRecs has been set to null by 
> updateJmxCache() itself.
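The sequence described above can be reproduced deterministically in a simplified model. The names mirror MetricsSourceAdapter, but the TTL check, the attribute cache, and the actual threading are omitted, so this only illustrates why deriving getAllMetrics from lastRecs == null skips the info-cache refresh; it is not the real class:

```java
// Simplified model of the HADOOP-12482 sequence: because getAllMetrics is
// derived from lastRecs == null, a getMetrics() call from the timer thread
// between two cache reads suppresses the updateInfoCache() refresh.
public class JmxCacheRaceDemo {
  private Object lastRecs;   // metrics records from the last snapshot
  int infoCacheUpdates;      // counts updateInfoCache() invocations

  // Models updateJmxCache(); the jmxCacheTTL check is assumed to have passed.
  void updateJmxCache() {
    boolean getAllMetrics = (lastRecs == null);
    if (getAllMetrics) {
      infoCacheUpdates++;    // stands in for updateInfoCache()
    }
    lastRecs = null;         // "in case regular interval update is not running"
  }

  // Models getMetrics() called by onTimerEvent()/publishMetricsNow().
  void getMetrics() {
    lastRecs = new Object(); // lastRecs reassigned (!= null)
  }

  public static void main(String[] args) {
    JmxCacheRaceDemo d = new JmxCacheRaceDemo();
    d.updateJmxCache();      // first read: lastRecs == null, info refreshed
    d.getMetrics();          // new metric published from another code path
    d.updateJmxCache();      // TTL has passed, but lastRecs != null: no refresh
    System.out.println(d.infoCacheUpdates); // prints 1: second refresh skipped
  }
}
```

The second updateJmxCache() call returns stale MBeanInfo even though a new record exists, which matches the behavior observed on the cluster.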





[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005180#comment-15005180
 ] 

Tsuyoshi Ozawa commented on HADOOP-11361:
-

Cherrypicked this to branch-2.7 for HADOOP-12482. 

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}





[jira] [Commented] (HADOOP-12571) [JDK8] Remove XX:MaxPermSize setting from pom.xml

2015-11-13 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005183#comment-15005183
 ] 

Tsuyoshi Ozawa commented on HADOOP-12571:
-

Can we merge this change with HADOOP-11858?

> [JDK8] Remove XX:MaxPermSize setting from pom.xml
> -
>
> Key: HADOOP-12571
> URL: https://issues.apache.org/jira/browse/HADOOP-12571
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira AJISAKA
>Priority: Minor
>
> {code:title=hadoop-project/pom.xml}
> -Xmx2048m -XX:MaxPermSize=768m 
> -XX:+HeapDumpOnOutOfMemoryError
> {code}
> {{-XX:MaxPermSize}} is not supported in JDK8. It should be removed after 
> dropping support of JDK7.





[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005192#comment-15005192
 ] 

Hudson commented on HADOOP-12348:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8804 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8804/])
Move HADOOP-11361, HADOOP-12348 and HADOOP-12482 from 2.8.0 to 2.7.3 in (ozawa: 
rev c753617a48bffed491b9ca7a5ca6b3d2df5721bf)
* hadoop-common-project/hadoop-common/CHANGES.txt


> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with the wrong time unit 
> parameter. MetricsSourceAdapter expects jmxCacheTTL in milliseconds, but 
> MetricsSystemImpl passes a value in seconds to the MetricsSourceAdapter 
> constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}
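A minimal sketch of the unit mismatch (the helper names here are hypothetical, not from MetricsSourceAdapter): since Time.now() returns milliseconds, adding a TTL expressed in seconds shortens the intended cache window by a factor of 1000.

```java
import java.util.concurrent.TimeUnit;

// Illustration of the HADOOP-12348 mismatch: jmxCacheTS is a millisecond
// timestamp (System.currentTimeMillis()), so adding a TTL expressed in
// seconds extends the cache expiry by far less than intended.
public class JmxCacheTtlDemo {
  static long expiry(long nowMillis, long ttl) {
    return nowMillis + ttl;                                    // buggy: mixed units
  }

  static long expiryConverted(long nowMillis, long ttlSeconds) {
    return nowMillis + TimeUnit.SECONDS.toMillis(ttlSeconds);  // intended value
  }

  public static void main(String[] args) {
    long now = 1_000_000L;  // pretend System.currentTimeMillis()
    long ttlSeconds = 10;   // a 10-second jmxCacheTTL from configuration
    System.out.println(expiry(now, ttlSeconds));          // 1000010: only 10 ms
    System.out.println(expiryConverted(now, ttlSeconds)); // 1010000: 10 s
  }
}
```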





[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005193#comment-15005193
 ] 

Hudson commented on HADOOP-12482:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8804 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8804/])
Move HADOOP-11361, HADOOP-12348 and HADOOP-12482 from 2.8.0 to 2.7.3 in (ozawa: 
rev c753617a48bffed491b9ca7a5ca6b3d2df5721bf)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Fix For: 2.8.0, 3.0.0, 2.7.3
>
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch, HADOOP-12482.005.patch, 
> HADOOP-12482.006.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However the patch introduced a 
> race condition. In updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarilly advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # The metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However because lastRecs is already updated (!= null), getAllMetrics will 
> not be set to true. So updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such a case, getMBeanInfo() may never be able to 
> retrieve the new record.
> The desired behavior should be that updateJmxCache() will guarantee to call 
> updateInfoCache() once after jmxCacheTTL, if lastRecs has been set to null by 
> updateJmxCache() itself.





[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005194#comment-15005194
 ] 

Hudson commented on HADOOP-11361:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8804 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8804/])
Move HADOOP-11361, HADOOP-12348 and HADOOP-12482 from 2.8.0 to 2.7.3 in (ozawa: 
rev c753617a48bffed491b9ca7a5ca6b3d2df5721bf)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}





[jira] [Resolved] (HADOOP-12571) [JDK8] Remove XX:MaxPermSize setting from pom.xml

2015-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-12571.

Resolution: Duplicate

Closing this.

> [JDK8] Remove XX:MaxPermSize setting from pom.xml
> -
>
> Key: HADOOP-12571
> URL: https://issues.apache.org/jira/browse/HADOOP-12571
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira AJISAKA
>Priority: Minor
>
> {code:title=hadoop-project/pom.xml}
> -Xmx2048m -XX:MaxPermSize=768m 
> -XX:+HeapDumpOnOutOfMemoryError
> {code}
> {{-XX:MaxPermSize}} is not supported in JDK8. It should be removed after 
> dropping support of JDK7.





[jira] [Commented] (HADOOP-12374) Description of hdfs expunge command is confusing

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005198#comment-15005198
 ] 

Hudson commented on HADOOP-12374:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #678 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/678/])
Move HADOOP-12374 from 2.8.0 to 2.7.3 in CHANGES.txt. (aajisaka: rev 
47c79a2a4d265feff7bd997bf07473eeb74c1c4b)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Description of hdfs expunge command is confusing
> 
>
> Key: HADOOP-12374
> URL: https://issues.apache.org/jira/browse/HADOOP-12374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, trash
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: docuentation, newbie, suggestions, trash
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, 
> HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on 
> the Trash feature.
> This description is confusing. It gives the user the impression that this 
> command will empty the trash, but actually it only removes old checkpoints. If 
> a user sets a long value for fs.trash.interval, this command will not remove 
> anything until the checkpoints are older than this value.





[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005229#comment-15005229
 ] 

Hudson commented on HADOOP-11361:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2608 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2608/])
Move HADOOP-11361, HADOOP-12348 and HADOOP-12482 from 2.8.0 to 2.7.3 in (ozawa: 
rev c753617a48bffed491b9ca7a5ca6b3d2df5721bf)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}





[jira] [Commented] (HADOOP-12482) Race condition in JMX cache update

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005228#comment-15005228
 ] 

Hudson commented on HADOOP-12482:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2608 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2608/])
Move HADOOP-11361, HADOOP-12348 and HADOOP-12482 from 2.8.0 to 2.7.3 in (ozawa: 
rev c753617a48bffed491b9ca7a5ca6b3d2df5721bf)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Race condition in JMX cache update
> --
>
> Key: HADOOP-12482
> URL: https://issues.apache.org/jira/browse/HADOOP-12482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Tony Wu
>Assignee: Tony Wu
> Fix For: 2.8.0, 3.0.0, 2.7.3
>
> Attachments: HADOOP-12482.001.patch, HADOOP-12482.002.patch, 
> HADOOP-12482.003.patch, HADOOP-12482.004.patch, HADOOP-12482.005.patch, 
> HADOOP-12482.006.patch
>
>
> updateJmxCache() was updated in HADOOP-11301. However the patch introduced a 
> race condition. In updateJmxCache() function in MetricsSourceAdapter.java:
> {code:java}
>   private void updateJmxCache() {
> boolean getAllMetrics = false;
> synchronized (this) {
>   if (Time.now() - jmxCacheTS >= jmxCacheTTL) {
> // temporarilly advance the expiry while updating the cache
> jmxCacheTS = Time.now() + jmxCacheTTL;
> if (lastRecs == null) {
>   getAllMetrics = true;
> }
>   } else {
> return;
>   }
>   if (getAllMetrics) {
> MetricsCollectorImpl builder = new MetricsCollectorImpl();
> getMetrics(builder, true);
>   }
>   updateAttrCache();
>   if (getAllMetrics) {
> updateInfoCache();
>   }
>   jmxCacheTS = Time.now();
>   lastRecs = null; // in case regular interval update is not running
> }
>   }
> {code}
> Notice that getAllMetrics is set to true when:
> # jmxCacheTTL has passed
> # lastRecs == null
> lastRecs is set to null in the same function, but gets reassigned by 
> getMetrics().
> However getMetrics() can be called from a different thread:
> # MetricsSystemImpl.onTimerEvent()
> # MetricsSystemImpl.publishMetricsNow()
> Consider the following sequence:
> # updateJmxCache() is called by getMBeanInfo() from a thread getting cached 
> info. 
> ** lastRecs is set to null.
> # The metrics source is updated with a new value/field.
> # getMetrics() is called by publishMetricsNow() or onTimerEvent() from a 
> different thread getting the latest metrics. 
> ** lastRecs is updated (!= null).
> # jmxCacheTTL passed.
> # updateJmxCache() is called again via getMBeanInfo().
> ** However because lastRecs is already updated (!= null), getAllMetrics will 
> not be set to true. So updateInfoCache() is not called and getMBeanInfo() 
> returns the old cached info.
> We ran into this issue on a cluster where a new metric did not get published 
> until much later.
> The case can be made worse by a periodic call to getMetrics() (driven by an 
> external program or script). In such a case, getMBeanInfo() may never be able to 
> retrieve the new record.
> The desired behavior should be that updateJmxCache() will guarantee to call 
> updateInfoCache() once after jmxCacheTTL, if lastRecs has been set to null by 
> updateJmxCache() itself.





[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005227#comment-15005227
 ] 

Hudson commented on HADOOP-12348:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2608 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2608/])
Move HADOOP-11361, HADOOP-12348 and HADOOP-12482 from 2.8.0 to 2.7.3 in (ozawa: 
rev c753617a48bffed491b9ca7a5ca6b3d2df5721bf)
* hadoop-common-project/hadoop-common/CHANGES.txt


> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with the wrong time unit 
> parameter. MetricsSourceAdapter expects jmxCacheTTL in milliseconds, but 
> MetricsSystemImpl passes a value in seconds to the MetricsSourceAdapter 
> constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}





[jira] [Commented] (HADOOP-12374) Description of hdfs expunge command is confusing

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005231#comment-15005231
 ] 

Hudson commented on HADOOP-12374:
-

SUCCESS: Integrated in Hadoop-Mapreduce-trunk-Java8 #666 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/666/])
Move HADOOP-12374 from 2.8.0 to 2.7.3 in CHANGES.txt. (aajisaka: rev 
47c79a2a4d265feff7bd997bf07473eeb74c1c4b)
* hadoop-common-project/hadoop-common/CHANGES.txt


> Description of hdfs expunge command is confusing
> 
>
> Key: HADOOP-12374
> URL: https://issues.apache.org/jira/browse/HADOOP-12374
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, trash
>Affects Versions: 2.7.0, 2.7.1
>Reporter: Weiwei Yang
>Assignee: Weiwei Yang
>  Labels: docuentation, newbie, suggestions, trash
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12374.001.patch, HADOOP-12374.002.patch, 
> HADOOP-12374.003.patch, HADOOP-12374.004.patch
>
>
> Usage: hadoop fs -expunge
> Empty the Trash. Refer to the HDFS Architecture Guide for more information on 
> the Trash feature.
> This description is confusing. It gives the user the impression that this 
> command will empty the trash, but actually it only removes old checkpoints. If 
> a user sets a long value for fs.trash.interval, this command will not remove 
> anything until the checkpoints are older than this value.





[jira] [Commented] (HADOOP-12571) [JDK8] Remove XX:MaxPermSize setting from pom.xml

2015-11-13 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15005186#comment-15005186
 ] 

Akira AJISAKA commented on HADOOP-12571:


Sounds good. Closing this issue and adding a comment to HADOOP-11858.

> [JDK8] Remove XX:MaxPermSize setting from pom.xml
> -
>
> Key: HADOOP-12571
> URL: https://issues.apache.org/jira/browse/HADOOP-12571
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Akira AJISAKA
>Priority: Minor
>
> {code:title=hadoop-project/pom.xml}
> -Xmx2048m -XX:MaxPermSize=768m 
> -XX:+HeapDumpOnOutOfMemoryError
> {code}
> {{-XX:MaxPermSize}} is not supported in JDK8. It should be removed after 
> dropping support for JDK7.
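For reference, a JDK8-only sketch of the same test JVM arguments with the permanent-generation flag dropped. The enclosing property element in hadoop-project/pom.xml is omitted here, and -XX:MaxMetaspaceSize is only a hypothetical replacement if a cap on class metadata is still wanted:

```xml
<!-- Sketch, not the literal pom.xml entry: -XX:MaxPermSize removed for JDK8. -->
<argLine>-Xmx2048m -XX:+HeapDumpOnOutOfMemoryError</argLine>
<!-- Hypothetical JDK8 analogue, only if class-metadata growth must be capped:
<argLine>-Xmx2048m -XX:MaxMetaspaceSize=768m -XX:+HeapDumpOnOutOfMemoryError</argLine>
-->
```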



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11771) Configuration::getClassByNameOrNull synchronizes on a static object

2015-11-13 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11771:
-
Priority: Critical  (was: Major)

> Configuration::getClassByNameOrNull synchronizes on a static object
> ---
>
> Key: HADOOP-11771
> URL: https://issues.apache.org/jira/browse/HADOOP-11771
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: conf, io, ipc
>Reporter: Gopal V
>Priority: Critical
> Attachments: configuration-cache-bt.png, configuration-sync-cache.png
>
>
> {code}
>   private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
>     CACHE_CLASSES = new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();
> ...
>   synchronized (CACHE_CLASSES) {
>     map = CACHE_CLASSES.get(classLoader);
>     if (map == null) {
>       map = Collections.synchronizedMap(
>         new WeakHashMap<String, WeakReference<Class<?>>>());
>       CACHE_CLASSES.put(classLoader, map);
>     }
>   }
> {code}
> !configuration-sync-cache.png!
> !configuration-cache-bt.png!
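The contention shown above comes from every lookup funnelling through a single lock on the static map. A minimal sketch of a lower-contention alternative using ConcurrentHashMap.computeIfAbsent; the names (ClassCache, getClassByNameOrNull) mirror the discussion but this is not Hadoop's actual implementation, and the WeakHashMap/WeakReference semantics of the original are intentionally dropped for brevity:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class ClassCache {
    // One nested map per classloader; no global synchronized block on lookup.
    private static final Map<ClassLoader, Map<String, Class<?>>> CACHE =
            new ConcurrentHashMap<>();

    static Class<?> getClassByNameOrNull(ClassLoader loader, String name) {
        Map<String, Class<?>> perLoader =
                CACHE.computeIfAbsent(loader, l -> new ConcurrentHashMap<>());
        // computeIfAbsent does not store null, so failed lookups are simply
        // not cached (a real implementation would use a negative sentinel).
        return perLoader.computeIfAbsent(name, n -> {
            try {
                return Class.forName(n, true, loader);
            } catch (ClassNotFoundException e) {
                return null;
            }
        });
    }
}
```

A real replacement would still need weak references to avoid pinning classloaders, which is why the JIRA discussion is not a one-line fix.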



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider and DistCp

2015-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12545:
---
Hadoop Flags: Reviewed
 Summary: Hadoop javadoc has broken links for AccessControlList, 
ImpersonationProvider, DefaultImpersonationProvider and DistCp  (was: Hadoop 
Javadoc has broken link for AccessControlList, ImpersonationProvider, 
DefaultImpersonationProvider and DistCp)

> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider and DistCp
> -
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch
>
>
> 1) open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList". The page shows "This page can’t be displayed".
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15004005#comment-15004005
 ] 

Hudson commented on HADOOP-12545:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #677 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/677/])
HADOOP-12545. Hadoop javadoc has broken links for AccessControlList, (aajisaka: 
rev f94d89270464ea8e0d19e26e425835cd6a5dc5de)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/package-info.java
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch, HADOOP-12545.branch-2.7.patch
>
>
> 1) open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList". The page shows "This page can’t be displayed".
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12545:
---
   Resolution: Fixed
Fix Version/s: 2.7.3
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2, and branch-2.7. Thanks [~arshad.mohammad] 
for the contribution.

> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch
>
>
> 1) open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList". The page shows "This page can’t be displayed".
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12545:
---
Summary: Hadoop javadoc has broken links for AccessControlList, 
ImpersonationProvider, DefaultImpersonationProvider, and DistCp  (was: Hadoop 
javadoc has broken links for AccessControlList, ImpersonationProvider, 
DefaultImpersonationProvider and DistCp)

> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch
>
>
> 1) open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList". The page shows "This page can’t be displayed".
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12545:
---
Attachment: HADOOP-12545.branch-2.7.patch

Attaching the patch used for branch-2.7.

> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch, HADOOP-12545.branch-2.7.patch
>
>
> 1) open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList". The page shows "This page can’t be displayed".
> The same error occurs for DistCp, ImpersonationProvider, and 
> DefaultImpersonationProvider.
> Javadoc generated from trunk has the same problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12545) Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, DefaultImpersonationProvider, and DistCp

2015-11-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15003988#comment-15003988
 ] 

Hudson commented on HADOOP-12545:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8802 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8802/])
HADOOP-12545. Hadoop javadoc has broken links for AccessControlList, (aajisaka: 
rev f94d89270464ea8e0d19e26e425835cd6a5dc5de)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/tools/package-info.java


> Hadoop javadoc has broken links for AccessControlList, ImpersonationProvider, 
> DefaultImpersonationProvider, and DistCp
> --
>
> Key: HADOOP-12545
> URL: https://issues.apache.org/jira/browse/HADOOP-12545
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arshad Mohammad
>Assignee: Arshad Mohammad
> Fix For: 2.8.0, 2.7.3
>
> Attachments: HADOOP-12545-01.patch, HADOOP-12545-02.patch, 
> HADOOP-12545-03.patch, HADOOP-12545.branch-2.7.patch
>
>
> 1) open hadoop-2.7.1\share\doc\hadoop\api\index.html
> 2) Click on "All Classes"
> 3) Click on "AccessControlList", The page shows "This page can’t be displayed"
> Same error for DistCp, ImpersonationProvider and DefaultImpersonationProvider 
> also.
> Javadoc generated from Trunk has the same problem



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)