[jira] [Updated] (HADOOP-14904) Fix javadocs issues in Hadoop HDFS

2017-09-22 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HADOOP-14904:
---
Status: Patch Available  (was: Open)

> Fix javadocs issues in Hadoop HDFS 
> ---
>
> Key: HADOOP-14904
> URL: https://issues.apache.org/jira/browse/HADOOP-14904
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HADOOP-14904.001.patch
>
>
> Fix the following javadoc issues in Hadoop HDFS:
> 1445 [INFO] 
> 
>  1446 [INFO] Building Apache Hadoop HDFS 3.1.0-SNAPSHOT
>  1447 [INFO] 
> 
> {code}
>  1537 ExcludePrivateAnnotationsStandardDoclet
>  1538 9 warnings
>  1539 [WARNING] Javadoc Warnings
>  1540 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see   cannot be used in inline documentation.  It can 
> only be used in the following types of documentation: overview, package, 
> class/interface, constructor, field, method.
>  1541 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see   cannot be used in inline documentation.  It can 
> only be used in the following types of documentation: overview, package, 
> class/interface, constructor, field, method.
>  1542 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see  : reference not found: FSNamesystem
>  1543 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see  : reference not found: EditLog
>  1544 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see   cannot be used in inline documentation.  It can 
> only be used in the following types of documentation: overview, package, 
> class/interface, constructor, field, method.
>  1545 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see   cannot be used in inline documentation.  It can 
> only be used in the following types of documentation: overview, package, 
> class/interface, constructor, field, method.
>  1546 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ShortCircuitRegistry.java:82:
>  warning - Tag @link  : reference not found: DfsClientShmManager
>  1548 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java:1274:
>  warning - Tag @  link: reference not found: CallerRunsPolicy
> {code}
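
For context, the BlockIdManager warnings come from @see tags used inline (i.e. {@see ...} inside a sentence), which javadoc does not allow, plus references javadoc cannot resolve. A minimal sketch of the kind of fix, assumed here for illustration rather than copied from the attached patch:

{code}
// Before: inline {@see} is invalid javadoc, and bare names such as
// "FSNamesystem" or "EditLog" cannot be resolved from this package.
/**
 * Manages block IDs; see {@see FSNamesystem} and {@see EditLog} for callers.
 */

// After: use {@link} with a resolvable (imported or fully-qualified) target,
// or move the reference to a block-level @see tag.
/**
 * Manages block IDs.
 *
 * @see org.apache.hadoop.hdfs.server.namenode.FSNamesystem
 */
{code}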






[jira] [Updated] (HADOOP-14904) Fix javadocs issues in Hadoop HDFS

2017-09-22 Thread Mukul Kumar Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mukul Kumar Singh updated HADOOP-14904:
---
Attachment: HADOOP-14904.001.patch

> Fix javadocs issues in Hadoop HDFS 
> ---
>
> Key: HADOOP-14904
> URL: https://issues.apache.org/jira/browse/HADOOP-14904
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Mukul Kumar Singh
>Assignee: Mukul Kumar Singh
>Priority: Minor
> Attachments: HADOOP-14904.001.patch
>
>
> Fix the following javadoc issues in Hadoop HDFS:
> 1445 [INFO] 
> 
>  1446 [INFO] Building Apache Hadoop HDFS 3.1.0-SNAPSHOT
>  1447 [INFO] 
> 
> {code}
>  1537 ExcludePrivateAnnotationsStandardDoclet
>  1538 9 warnings
>  1539 [WARNING] Javadoc Warnings
>  1540 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see   cannot be used in inline documentation.  It can 
> only be used in the following types of documentation: overview, package, 
> class/interface, constructor, field, method.
>  1541 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see   cannot be used in inline documentation.  It can 
> only be used in the following types of documentation: overview, package, 
> class/interface, constructor, field, method.
>  1542 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see  : reference not found: FSNamesystem
>  1543 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see  : reference not found: EditLog
>  1544 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see   cannot be used in inline documentation.  It can 
> only be used in the following types of documentation: overview, package, 
> class/interface, constructor, field, method.
>  1545 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
>  warning - Tag @see   cannot be used in inline documentation.  It can 
> only be used in the following types of documentation: overview, package, 
> class/interface, constructor, field, method.
>  1546 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ShortCircuitRegistry.java:82:
>  warning - Tag @link  : reference not found: DfsClientShmManager
>  1548 [WARNING] 
> /Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java:1274:
>  warning - Tag @  link: reference not found: CallerRunsPolicy
> {code}






[jira] [Created] (HADOOP-14905) Fix javadocs issues in Hadoop HDFS-NFS

2017-09-22 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HADOOP-14905:
--

 Summary: Fix javadocs issues in Hadoop HDFS-NFS
 Key: HADOOP-14905
 URL: https://issues.apache.org/jira/browse/HADOOP-14905
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh


Fix the following javadoc issues in Apache Hadoop HDFS-NFS:

{code}
 2266 5 warnings
 2267 [WARNING] Javadoc Warnings
 2268 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:92:
 warning: no @param for childNum
 2269 [WARNING] public static long getDirSize(int childNum) {
 2270 [WARNING] ^
 2271 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:92:
 warning: no @return
 2272 [WARNING] public static long getDirSize(int childNum) {
 2273 [WARNING] ^
 2274 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:126:
 warning: no @param for channel
 2275 [WARNING] public static void writeChannel(Channel channel, XDR out, int 
xid) {
 2276 [WARNING] ^
 2277 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:126:
 warning: no @param for out
 2278 [WARNING] public static void writeChannel(Channel channel, XDR out, int 
xid) {
 2279 [WARNING] ^
 2280 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs-nfs/src/main/java/org/apache/hadoop/hdfs/nfs/nfs3/Nfs3Utils.java:126:
 warning: no @param for xid
 2281 [WARNING] public static void writeChannel(Channel channel, XDR out, int 
xid) {
 2282 [WARNING] ^
{code}
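
These warnings only ask for parameter and return-value documentation on the two Nfs3Utils methods. A minimal sketch of the shape of the fix; the descriptions below are invented for illustration, and the actual wording belongs to the eventual patch:

{code}
/**
 * Size of the directory entry as reported to the NFS client
 * (description invented here for illustration).
 *
 * @param childNum number of entries in the directory
 * @return the directory size in bytes
 */
public static long getDirSize(int childNum) { ... }

/**
 * Write an XDR-encoded NFS3 response to the client channel.
 *
 * @param channel channel connected to the client
 * @param out     serialized response to send
 * @param xid     transaction id of the request being answered
 */
public static void writeChannel(Channel channel, XDR out, int xid) { ... }
{code}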






[jira] [Created] (HADOOP-14904) Fix javadocs issues in Hadoop HDFS

2017-09-22 Thread Mukul Kumar Singh (JIRA)
Mukul Kumar Singh created HADOOP-14904:
--

 Summary: Fix javadocs issues in Hadoop HDFS 
 Key: HADOOP-14904
 URL: https://issues.apache.org/jira/browse/HADOOP-14904
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mukul Kumar Singh
Assignee: Mukul Kumar Singh
Priority: Minor


Fix the following javadoc issues in Hadoop HDFS:

1445 [INFO] 

 1446 [INFO] Building Apache Hadoop HDFS 3.1.0-SNAPSHOT
 1447 [INFO] 



{code}
 1537 ExcludePrivateAnnotationsStandardDoclet
 1538 9 warnings
 1539 [WARNING] Javadoc Warnings
 1540 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
 warning - Tag @see   cannot be used in inline documentation.  It can only 
be used in the following types of documentation: overview, package, 
class/interface, constructor, field, method.
 1541 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
 warning - Tag @see   cannot be used in inline documentation.  It can only 
be used in the following types of documentation: overview, package, 
class/interface, constructor, field, method.
 1542 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
 warning - Tag @see  : reference not found: FSNamesystem
 1543 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
 warning - Tag @see  : reference not found: EditLog
 1544 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
 warning - Tag @see   cannot be used in inline documentation.  It can only 
be used in the following types of documentation: overview, package, 
class/interface, constructor, field, method.
 1545 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockIdManager.java:37:
 warning - Tag @see   cannot be used in inline documentation.  It can only 
be used in the following types of documentation: overview, package, 
class/interface, constructor, field, method.
 1546 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/ShortCircuitRegistry.java:82:
 warning - Tag @link  : reference not found: DfsClientShmManager
 1548 [WARNING] 
/Users/msingh/code/work/apache/trunk/trunk2/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/checker/AbstractFuture.java:1274:
 warning - Tag @  link: reference not found: CallerRunsPolicy
{code}






[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177412#comment-16177412
 ] 

Hadoop QA commented on HADOOP-13055:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
57s{color} | {color:green} root: The patch generated 0 new + 193 unchanged - 13 
fixed = 193 total (was 206) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 40s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 90m 30s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}183m 30s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.ha.TestZKFailoverController |
|   | hadoop.fs.shell.TestCopyFromLocal |
|   | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
|   | hadoop.hdfs.web.TestWebHdfsTimeouts |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-13055 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888584/HADOOP-13055.08.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cbbb33638504 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 164a063 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 

[jira] [Commented] (HADOOP-14220) Enhance S3GuardTool with bucket-info and set-capacity commands, tests

2017-09-22 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177400#comment-16177400
 ] 

Aaron Fabbri commented on HADOOP-14220:
---

Getting around to this again, [~ste...@apache.org]. Did you intend to add back 
the findbugs-exclude.xml change that I removed (see my v13 patch above)? We 
did get a clean Yetus run on that.

Anyway, I will try testing your v14 now.



> Enhance S3GuardTool with bucket-info and set-capacity commands, tests
> -
>
> Key: HADOOP-14220
> URL: https://issues.apache.org/jira/browse/HADOOP-14220
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-14220-006.patch, HADOOP-14220-008.patch, 
> HADOOP-14220-009.patch, HADOOP-14220-010.patch, HADOOP-14220-012.patch, 
> HADOOP-14220-013.patch, HADOOP-14220-013.patch, HADOOP-14220-014.patch, 
> HADOOP-14220-HADOOP-13345-001.patch, HADOOP-14220-HADOOP-13345-002.patch, 
> HADOOP-14220-HADOOP-13345-003.patch, HADOOP-14220-HADOOP-13345-004.patch, 
> HADOOP-14220-HADOOP-13345-005.patch
>
>
> Add a diagnostics command to s3guard which does whatever we need to diagnose 
> problems for a specific (named) S3A URL. This is something that can be 
> attached to bug reports as well as used by developers.
> * Properties to log (with a provenance attribute, which can track bucket 
> overrides): s3guard metastore setup, autocreate, capacity
> * table present/absent
> * # of keys in the DDB table for that bucket?
> * any other stats?






[jira] [Commented] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177370#comment-16177370
 ] 

Hadoop QA commented on HADOOP-14901:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
12s{color} | {color:red} root in branch-2 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  3m 
37s{color} | {color:red} root in branch-2 failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
23s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
37s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 37s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 52 unchanged - 1 fixed = 52 total (was 53) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
25s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
22s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 23s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:eaf5c66 |
| JIRA Issue | HADOOP-14901 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888607/HADOOP-14901-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 12b4b4815f4a 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / de1d747 |
| Default Java | 1.7.0_151 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13366/artifact/patchprocess/branch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13366/artifact/patchprocess/branch-compile-root.txt
 |
| findbugs | v3.0.0 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13366/artifact/patchprocess/patch-mvninstall-hadoop-common-project_hadoop-common.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13366/artifact/patchprocess/patch-compile-root.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13366/artifact/patchprocess/patch-compile-root.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13366/artifact/patchprocess/patch-mvnsite-hadoop-common-project_hadoop-common.txt
 |

[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14901:

Attachment: HADOOP-14901-branch-2.001.patch

Thanks [~anu]. Fixed the typo.

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch, HADOOP-14901-branch-2.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.
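
As an illustration of the pattern the description refers to (a minimal sketch with invented class and method names, not the actual HADOOP-14901 patch): building an ObjectMapper per call is expensive, while a shared mapper, or the ObjectReader/ObjectWriter derived from it, is thread-safe and can be reused.

{code}
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;

import java.io.IOException;
import java.util.Map;

public class JsonCodec {
  // A single ObjectMapper is expensive to build but thread-safe to share
  // once configured, so it is created once per class rather than per call.
  private static final ObjectMapper MAPPER = new ObjectMapper();

  // ObjectReader and ObjectWriter are immutable and fully thread-safe,
  // so they can also be derived once and reused on hot paths.
  private static final ObjectReader MAP_READER = MAPPER.readerFor(Map.class);
  private static final ObjectWriter WRITER = MAPPER.writer();

  static Map<?, ?> parse(String json) throws IOException {
    return MAP_READER.readValue(json);
  }

  static String toJson(Object value) throws IOException {
    return WRITER.writeValueAsString(value);
  }
}
{code}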






[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14901:

Attachment: (was: HADOOP-14901-brnach-2.001.patch)

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.






[jira] [Commented] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177346#comment-16177346
 ] 

Anu Engineer commented on HADOOP-14901:
---

Branch typo in the patch file name, so it got applied against trunk.

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.






[jira] [Commented] (HADOOP-14888) Use apidoc for REST API documentation

2017-09-22 Thread Eric Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177344#comment-16177344
 ] 

Eric Yang commented on HADOOP-14888:


The unit test failure is not introduced by this patch.

> Use apidoc for REST API documentation
> -
>
> Key: HADOOP-14888
> URL: https://issues.apache.org/jira/browse/HADOOP-14888
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: HADOOP-14888.001.patch
>
>
> There are more REST APIs being developed in Hadoop, and it would be great to 
> standardize on the method of generating REST API documentation.
> Several methods are in use today:
> Swagger YAML
> Javadoc
> Wiki pages
> JIRA comments
> The most frequently used methods are JIRA comments and Wiki pages.  Both 
> are prone to data loss with the passage of time.  We need a lower-effort 
> approach to maintaining REST API documentation.  Swagger YAML can also fall 
> out of sync with reality if new methods are added to the Java code 
> directly.  Javadoc-style annotations seem like a good approach to maintaining 
> REST API documentation.  Both the Jersey and Atlassian communities have Maven 
> plugins to help generate REST API documentation, but those plugins have 
> ceased to function.  After searching online for REST API documentation tools, 
> [apidoc|http://apidocjs.com/] is one library that stands out.  This could be 
> the ideal approach to managing Hadoop REST API documentation.  It supports 
> javadoc-like annotations and generates clear documentation of schema changes.
> If this is accepted, I will add the apidoc installation to the dev-support 
> Dockerfile, plus pom.xml changes for the javadoc plugin to ignore the custom tags.
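
For a sense of what apidoc annotations look like in practice, here is a minimal sketch of an annotated endpoint comment; the WebHDFS operation and response fields are used only as an illustration and are not part of the HADOOP-14888 patch:

{code}
/**
 * @api {get} /webhdfs/v1/:path?op=GETFILESTATUS Get the status of a file or directory
 * @apiName GetFileStatus
 * @apiGroup WebHDFS
 *
 * @apiParam {String} path Absolute path of the file or directory.
 *
 * @apiSuccess {String} FileStatus.type   FILE, DIRECTORY, or SYMLINK.
 * @apiSuccess {Number} FileStatus.length Length of the file in bytes.
 */
{code}

apidoc parses these tags out of ordinary comments, which is why the description proposes teaching the javadoc plugin to ignore the custom tags.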






[jira] [Commented] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177342#comment-16177342
 ] 

Hadoop QA commented on HADOOP-14901:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-14901 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14901 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888604/HADOOP-14901-brnach-2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13365/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.






[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177340#comment-16177340
 ] 

Miklos Szegedi commented on HADOOP-14897:
-

Thank you, [~templedf], I do not have any more comments. +1 (non-binding)

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-14897.001.patch, HADOOP-14897.002.patch
>
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.






[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14901:

Status: Patch Available  (was: Reopened)

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.






[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14901:

Attachment: HADOOP-14901-brnach-2.001.patch

Thanks [~anu] for reviewing and committing the patch. 
I have uploaded the patch for branch-2.

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.






[jira] [Reopened] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reopened HADOOP-14901:
-

Patch for branch-2

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch, HADOOP-14901-brnach-2.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.






[jira] [Commented] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177313#comment-16177313
 ] 

Hudson commented on HADOOP-14901:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12953 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12953/])
HADOOP-14901. ReuseObjectMapper in Hadoop Common. Contributed by Hanisha 
(aengineer: rev e1b32e0959dea5f5a40055157476f9320519a618)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/log/Log4Json.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/HttpExceptionUtils.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/MetricsJsonBuilder.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/delegation/web/DelegationTokenAuthenticator.java


> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.






[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177306#comment-16177306
 ] 

Hadoop QA commented on HADOOP-14897:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14897 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888591/HADOOP-14897.002.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux e22203c2b594 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 164a063 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13364/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-14897.001.patch, HADOOP-14897.002.patch
>
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.






[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Anu Engineer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anu Engineer updated HADOOP-14901:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~hanishakoneru] I have committed this to trunk. Can you please provide a 
branch-2 patch? One variable seems to have been renamed from map to obj, so 
the cherry-pick does not apply cleanly; I would rather have the patch attached 
to this JIRA. Please re-open when you have the branch-2 patch.

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.






[jira] [Commented] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177286#comment-16177286
 ] 

Anu Engineer commented on HADOOP-14901:
---

[~hanishakoneru] thanks for identifying and fixing this issue. This is the kind 
of issue that worries me when I commit: a change in Hadoop Common might impact 
downstream projects, and unfortunately we don't get that test coverage until we 
commit. So I am going to commit to trunk and wait for 3.0 beta1 to be released.

+1, I will commit this shortly to trunk.

cc: [~andrew.wang] I will get this change into 3.0 after beta1 is cut; this is 
a performance change that 3.0 beta1 can live without.



> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.






[jira] [Updated] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-14897:
--
Attachment: HADOOP-14897.002.patch

Fixed the link.  [~miklos.szeg...@cloudera.com], any comments?

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-14897.001.patch, HADOOP-14897.002.patch
>
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.






[jira] [Updated] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-14897:
--
Status: Patch Available  (was: Open)

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-14897.001.patch, HADOOP-14897.002.patch
>
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.






[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177249#comment-16177249
 ] 

Chris Douglas commented on HADOOP-14897:


Thanks, Daniel. Skimming RFC 2119, that looks like a standard we can enforce. 
Other than fixing the Evolving -> Stable link, lgtm. +1

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-14897.001.patch
>
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.






[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177241#comment-16177241
 ] 

Anu Engineer commented on HADOOP-14655:
---

I just merged this change to the Ozone (HDFS-7240) branch and I can confirm 
that HDFS-12527 is now resolved. [~andrew.wang] / [~elek], I appreciate your 
help unblocking compilation on the Ozone branch.

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).






[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177240#comment-16177240
 ] 

Hudson commented on HADOOP-14655:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12951 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12951/])
Revert "HADOOP-14655. Update httpcore version to 4.4.6. (rchiang)" (wang: rev 
8d29bf57ca97a94e6f6ee663bcaa5b7bc390f850)
* (edit) hadoop-project/pom.xml


> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).






[jira] [Commented] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177232#comment-16177232
 ] 

Hadoop QA commented on HADOOP-14901:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
16s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m  
8s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 24m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 24m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
13s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 52 unchanged - 1 fixed = 52 total (was 53) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 18s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 2s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 32s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestShellBasedUnixGroupsMapping |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14901 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888556/HADOOP-14901.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux cef53a602718 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 08fca50 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13360/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13360/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13360/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901

[jira] [Updated] (HADOOP-13714) Tighten up our compatibility guidelines for Hadoop 3

2017-09-22 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13714?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-13714:
--
Fix Version/s: 3.1.0

> Tighten up our compatibility guidelines for Hadoop 3
> 
>
> Key: HADOOP-13714
> URL: https://issues.apache.org/jira/browse/HADOOP-13714
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.3
>Reporter: Karthik Kambatla
>Assignee: Daniel Templeton
>Priority: Blocker
> Fix For: 3.0.0-beta1, 3.1.0
>
> Attachments: Compatibility.pdf, HADOOP-13714.001.patch, 
> HADOOP-13714.002.patch, HADOOP-13714.003.patch, HADOOP-13714.004.patch, 
> HADOOP-13714.005.patch, HADOOP-13714.006.patch, HADOOP-13714.007.patch, 
> HADOOP-13714.008.patch, HADOOP-13714.WIP-001.patch, 
> InterfaceClassification.pdf
>
>
> Our current compatibility guidelines are incomplete and loose. For many 
> categories, we do not have a policy. It would be nice to actually define 
> those policies so our users know what to expect and developers know which 
> releases to target with their changes.






[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFileSystem

2017-09-22 Thread Manoj Govindassamy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manoj Govindassamy updated HADOOP-13055:

Attachment: HADOOP-13055.08.patch

Thanks for the review comments, [~eddyxu]. Attached the v08 patch addressing 
them. Please take a look.

bq. Can you add more comments here to clarify what is the concept of internal 
dir, ..
Done.

bq. SINGLE_FALLBACK, MERGE_SLASH, Can you add more comments on both of them? 
What is the difference?
Done.

bq. How to guarantee that there is at most one LinkType.SINGLE_FALLBACK 
instance.
{{INodeTree#INodeDir#setRootFallbackLink}}
{noformat}
void setRootFallbackLink(INodeLink fallbackLink) {
  Preconditions.checkState(isRoot());
  Preconditions.checkState(getRootFallbackLink() == null);
  this.rootFallbackLink = fallbackLink;
}
{noformat}
The above Preconditions check ensures that at most one fallback link can be set 
for the mount table entries.
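
For illustration, a standalone sketch of the same set-once guard pattern; the {{SetOnce}} class below is hypothetical and not part of the patch:
{code}
import com.google.common.base.Preconditions;

// Hypothetical standalone example of the set-once guard; not from the patch.
class SetOnce<T> {
  private T value;

  void set(T v) {
    // Same idea as setRootFallbackLink: a second call fails fast.
    Preconditions.checkState(value == null, "fallback link already set");
    value = v;
  }

  public static void main(String[] args) {
    SetOnce<String> fallback = new SetOnce<>();
    fallback.set("hdfs://nn1/");   // accepted
    fallback.set("hdfs://nn2/");   // throws IllegalStateException
  }
}
{code}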




> Implement linkMergeSlash for ViewFileSystem
> ---
>
> Key: HADOOP-13055
> URL: https://issues.apache.org/jira/browse/HADOOP-13055
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, viewfs
>Affects Versions: 2.7.5
>Reporter: Zhe Zhang
>Assignee: Manoj Govindassamy
> Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, 
> HADOOP-13055.02.patch, HADOOP-13055.03.patch, HADOOP-13055.04.patch, 
> HADOOP-13055.05.patch, HADOOP-13055.06.patch, HADOOP-13055.07.patch, 
> HADOOP-13055.08.patch
>
>
> In a multi-cluster environment it is sometimes useful to operate on the root 
> / slash directory of an HDFS cluster. E.g., list all top level directories. 
> Quoting the comment in {{ViewFs}}:
> {code}
>  *   A special case of the merge mount is where mount table's root is merged
>  *   with the root (slash) of another file system:
>  *   
>  *   fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/
>  *   
>  *   In this cases the root of the mount table is merged with the root of
>  *hdfs://nn99/  
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Daniel Templeton (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Templeton updated HADOOP-14897:
--
Attachment: HADOOP-14897.001.patch

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
> Attachments: HADOOP-14897.001.patch
>
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14903) Add json-smart explicitly to pom.xml

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177222#comment-16177222
 ] 

Hadoop QA commented on HADOOP-14903:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
6s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14903 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888575/HADOOP-14903.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 80f77b8a073c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 
18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8d29bf5 |
| Default Java | 1.8.0_144 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13362/testReport/ |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13362/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14903.001.patch, HADOOP-14903.002.patch, 
> HADOOP-14903.003.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- 

[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177208#comment-16177208
 ] 

Daniel Templeton commented on HADOOP-14897:
---

How about this:

{quote}The minimum required versions of the native components on which Hadoop 
depends
at compile time and/or runtime SHALL be considered
\[Evolving\](./InterfaceClassification.html#Stable). Changes to the minimum
required versions SHOULD NOT increase between minor releases within a major
version, though updates because of security issues, license issues, or other
reasons may occur. When the native components on which Hadoop depends must
be updated between minor releases within a major release, where possible the
changes SHOULD only change the minor versions of the components without
changing the major versions.{quote}

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177202#comment-16177202
 ] 

Elek, Marton commented on HADOOP-14655:
---

It was committed on 2017-09-11.

According to the output of
 
{code}
git lg -Sorg.apache.http apache/branch-3.0
git lg -Sorg.apache.http apache/trunk
{code}

the only commit whose patch includes the org.apache.http string is 
"HADOOP-14738 Remove S3N and obsolete bits of S3A; rework docs.  Contributed by 
Steve Loughran."

My guess is that it doesn't use any new features from httpcore 4.4.6, but 
[~steve_l] could confirm it.
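
For reference, {{git lg}} above is presumably a local alias; the equivalent plain git pickaxe search would be something like:
{code}
# assuming 'git lg' is an alias for 'git log'; -S finds commits that add or
# remove the given string
git log -S'org.apache.http' --oneline apache/branch-3.0
git log -S'org.apache.http' --oneline apache/trunk
{code}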

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177199#comment-16177199
 ] 

Anu Engineer commented on HADOOP-14655:
---

Thanks, [~andrew.wang]. Appreciate the quick response and the revert. 

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Reopened] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang reopened HADOOP-14655:
--

Reverted this JIRA from trunk and branch-3.0 per Marton's instructions.

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14655:
-
Fix Version/s: (was: 3.0.0-beta1)

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14654) Update httpclient version to 4.5.3

2017-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14654:
-
Fix Version/s: (was: 3.0.0-beta1)

> Update httpclient version to 4.5.3
> --
>
> Key: HADOOP-14654
> URL: https://issues.apache.org/jira/browse/HADOOP-14654
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14654.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpclient:4.5.2
> to the latest (4.5.3).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177186#comment-16177186
 ] 

Andrew Wang commented on HADOOP-14655:
--

I'll go ahead and revert and reopen, thanks for the heads up.

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Anu Engineer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177182#comment-16177182
 ] 

Anu Engineer commented on HADOOP-14655:
---

[~andrew.wang] / [~rchiang] Is it OK to revert from the 3.0/3.1 branches? Is anyone 
depending on these updates?

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177177#comment-16177177
 ] 

Elek, Marton edited comment on HADOOP-14655 at 9/22/17 9:40 PM:


It also causes HDFS-12527. I wrote the details there, but I am almost sure that 
it should be reverted on 3.0/trunk/HDFS-7240

The problem is that we need to change the org.apache.httpcomponents:httpclient 
and org.apache.httpcomponents:httpcore versions together.

httpclient 4.5.3 depends on httpcore 4.4.6
httpclient 4.5.2 depends on httpcore 4.4.4

We bumped both of them but reverted only httpclient back to 4.5.2, so now we 
have httpclient 4.5.2 and httpcore 4.4.6, which can cause funny things 
(e.g. HDFS-12527).

For example, httpcore 4.4.4 contains org/apache/http/annotation/ThreadSafe.java 
but httpcore 4.4.6 doesn't, and httpclient 4.5.2 uses ThreadSafe everywhere 
while 4.5.3 does not.

I recommend reverting both of them, or, in the current situation, reverting 
this JIRA too.


was (Author: elek):
It also causes HDFS-12527. I wrote the details there, but I am almost sure that 
it should be reverted on 3.0/trunk/HDFS-7240

The problem is that we need to change the org.apache.httpcomponents:httpclient 
and org.apache.httpcomponents:httpcore versions together.

httpclient 4.5.3 depends on httpcore 4.4.6
httpclient 4.5.2 depends on httpcore 4.4.4

We bumped both of them but reverted httpclient back to 4.5.2, so now we have 
httpclient 4.5.2 and httpcore 4.4.6, which can cause funny things 
(e.g. HDFS-12527).

For example, httpcore 4.4.4 contains org/apache/http/annotation/ThreadSafe.java 
but httpcore 4.4.6 doesn't, and httpclient 4.5.2 uses ThreadSafe everywhere 
while 4.5.3 does not.

I recommend reverting both of them.

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14459) SerializationFactory shouldn't throw a NullPointerException if the serializations list is not defined

2017-09-22 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177179#comment-16177179
 ] 

Daniel Templeton commented on HADOOP-14459:
---

Works for me.  Last thing is that the spacing is off on all the comment lines 
but the first, and you may as well make it a javadoc comment.

> SerializationFactory shouldn't throw a NullPointerException if the 
> serializations list is not defined
> -
>
> Key: HADOOP-14459
> URL: https://issues.apache.org/jira/browse/HADOOP-14459
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Nandor Kollar
>Assignee: Nandor Kollar
>Priority: Minor
> Attachments: HADOOP-14459_2.patch, HADOOP-14459_3.patch, 
> HADOOP-14459_4.patch, HADOOP-14459.patch
>
>
> The SerializationFactory throws an NPE if 
> CommonConfigurationKeys.IO_SERIALIZATIONS_KEY is not defined in the config.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14655) Update httpcore version to 4.4.6

2017-09-22 Thread Elek, Marton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177177#comment-16177177
 ] 

Elek, Marton commented on HADOOP-14655:
---

It also causes HDFS-12527. I wrote the details there, but I am almost sure that 
it should be reverted on 3.0/trunk/HDFS-7240

The problem is that we need to change the org.apache.httpcomponents:httpclient 
and org.apache.httpcomponents:httpcore versions together.

httpclient 4.5.3 depends on httpcore 4.4.6
httpclient 4.5.2 depends on httpcore 4.4.4

We bumped both of them but reverted httpclient back to 4.5.2, so now we have 
httpclient 4.5.2 and httpcore 4.4.6, which can cause funny things 
(e.g. HDFS-12527).

For example, httpcore 4.4.4 contains org/apache/http/annotation/ThreadSafe.java 
but httpcore 4.4.6 doesn't, and httpclient 4.5.2 uses ThreadSafe everywhere 
while 4.5.3 does not.

I recommend reverting both of them.
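
To keep the pair in sync, a hypothetical pom.xml sketch (the exact property names and location in hadoop-project/pom.xml may differ):
{code}
<!-- Hypothetical sketch: keep httpclient and httpcore at versions that were
     released together (4.5.2 pairs with 4.4.4, 4.5.3 pairs with 4.4.6). -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.5.2</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpcore</artifactId>
      <version>4.4.4</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}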

> Update httpcore version to 4.4.6
> 
>
> Key: HADOOP-14655
> URL: https://issues.apache.org/jira/browse/HADOOP-14655
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14655.001.patch
>
>
> Update the dependency
> org.apache.httpcomponents:httpcore:4.4.4
> to the latest (4.4.6).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177077#comment-16177077
 ] 

Bharat Viswanadham edited comment on HADOOP-14881 at 9/22/17 9:37 PM:
--

Hi Jason,
Thank you for the information.
I updated the patch to revert the close change and also addressed your review 
comments.
Attached the v03 patch.



was (Author: bharatviswa):
Hi Jason,
Thank you for the information.
I updated the patch to revert the close change.
Attached the v03 patch.


> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch, 
> HADOOP-14881.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177153#comment-16177153
 ] 

Hadoop QA commented on HADOOP-14881:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
43s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 51 unchanged - 4 fixed = 51 total (was 55) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
54s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14881 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888560/HADOOP-14881.03.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 55dbb58c80b2 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / b133dc5 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13361/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13361/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch, 
> HADOOP-14881.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HADOOP-14903) Add json-smart explicitly to pom.xml

2017-09-22 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14903:

Attachment: HADOOP-14903.003.patch

* Forgot to add JIRA reference

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14903.001.patch, HADOOP-14903.002.patch, 
> HADOOP-14903.003.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14903) Add json-smart explicitly to pom.xml

2017-09-22 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14903:

Affects Version/s: 3.0.0-beta1

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14903.001.patch, HADOOP-14903.002.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14903) Add json-smart explicitly to pom.xml

2017-09-22 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14903:

Attachment: HADOOP-14903.002.patch

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14903.001.patch, HADOOP-14903.002.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14903) Add json-smart explicitly to pom.xml

2017-09-22 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14903:

Status: Patch Available  (was: Open)

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14903.001.patch, HADOOP-14903.002.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated

2017-09-22 Thread Hanisha Koneru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177115#comment-16177115
 ] 

Hanisha Koneru commented on HADOOP-14902:
-

Thanks for reporting this bug, [~jlowe].
As per your 
[comment|https://issues.apache.org/jira/browse/HADOOP-14881?focusedCommentId=16177074=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16177074]
 in HADOOP-14881, I agree that we should not double close as a norm. 
I also think we should not update the metric if the close fails. The metric 
tracks the execution time of the close operation; adding the time for a close 
that threw an exception would pollute the metric.
What do you think?
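
A minimal sketch of that approach (update the metric only when the close succeeds), assuming {{out}} is the stream being closed and using the monotonic clock from HADOOP-14881:
{code}
// Sketch only: measure the close with the monotonic clock and skip the
// metric update if close() throws (the exception propagates as before).
startTime = Time.monotonicNow();
out.close();
executionTime[WRITE_CLOSE] += (Time.monotonicNow() - startTime);
{code}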

> LoadGenerator#genFile write close timing is incorrectly calculated
> --
>
> Key: HADOOP-14902
> URL: https://issues.apache.org/jira/browse/HADOOP-14902
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Hanisha Koneru
>
> LoadGenerator#genFile's write close timing code looks like the following:
> {code}
> startTime = Time.now();
> executionTime[WRITE_CLOSE] += (Time.now() - startTime);
> {code}
> That code will generate a zero (or near zero) write close timing since it 
> isn't actually closing the file in-between timestamp lookups.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14903) Add json-smart explicitly to pom.xml

2017-09-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177085#comment-16177085
 ] 

Andrew Wang commented on HADOOP-14903:
--

Little nit: could you move this next to the nimbus dependency, and in the comment 
mention that json-smart is a transitive dependency, reference this JIRA, and note 
that this hack should be rechecked when upgrading nimbus? I think the warning in 
minikdc is not the important part, since this gets pulled in by hadoop-auth, which 
gets used in a lot of places.

Otherwise looks good +1, based on the discussion in HADOOP-14799.
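
A hypothetical sketch of what that could look like in hadoop-project/pom.xml (version 2.3 taken from the dependency tree; exact placement and comment wording are up to the patch):
{code}
<!-- HADOOP-14903: json-smart is normally a transitive dependency of
     nimbus-jose-jwt; declare it explicitly so packaging resolves a real
     version. Re-check this workaround when upgrading nimbus-jose-jwt. -->
<dependency>
  <groupId>net.minidev</groupId>
  <artifactId>json-smart</artifactId>
  <version>2.3</version>
</dependency>
{code}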

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14903.001.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14903) Add json-smart explicitly to pom.xml

2017-09-22 Thread Ray Chiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ray Chiang updated HADOOP-14903:

Attachment: HADOOP-14903.001.patch

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Attachments: HADOOP-14903.001.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177077#comment-16177077
 ] 

Bharat Viswanadham commented on HADOOP-14881:
-

Hi Jason,
Thank you for the information.
I updated the patch to revert the close change.
Attached the v03 patch.


> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch, 
> HADOOP-14881.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-14881:

Attachment: HADOOP-14881.03.patch

> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch, 
> HADOOP-14881.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-14881:

Attachment: (was: HADOOP-14881.03.patch)

> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch, 
> HADOOP-14881.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177074#comment-16177074
 ] 

Jason Lowe commented on HADOOP-14881:
-

Thanks for updating the patch!

It's not quite that simple to fix the close issue.  If {{out.close()}} throws 
then we won't update the metrics and we'll also end up double-closing due to 
the processing in the {{finally}} block.  Double-close _should_ be OK, but I'd 
rather not do it as the norm.  The previous change in HADOOP-10328 implied they 
want to suppress the exceptions during close, which we should discuss along 
with the questions of whether metrics should or should not be updated when a 
close fails.  Therefore I think we should not complicate this fix with the 
close issue and leave that for HADOOP-14902.


> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch, 
> HADOOP-14881.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14903) Add json-smart explicitly to pom.xml

2017-09-22 Thread Ray Chiang (JIRA)
Ray Chiang created HADOOP-14903:
---

 Summary: Add json-smart explicitly to pom.xml
 Key: HADOOP-14903
 URL: https://issues.apache.org/jira/browse/HADOOP-14903
 Project: Hadoop Common
  Issue Type: Bug
  Components: common
Reporter: Ray Chiang
Assignee: Ray Chiang


With the library update in HADOOP-14799, maven knows how to pull in 
net.minidev:json-smart for tests, but not for packaging.  This needs to be 
added to the main project pom in order to avoid this warning:

{noformat}
[WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
dependency information available
{noformat}

This is pulled in from a few places:

{noformat}
[INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
[INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
[INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
[INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile

[INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
[INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
[INFO] |  |+- 
com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
[INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
{noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Bharat Viswanadham (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bharat Viswanadham updated HADOOP-14881:

Attachment: HADOOP-14881.03.patch

> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch, 
> HADOOP-14881.03.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177059#comment-16177059
 ] 

Bharat Viswanadham edited comment on HADOOP-14881 at 9/22/17 8:21 PM:
--

Hi Jason,
Thank you for the review.
Good catch that the file is not actually being closed.
As it is a minor change, I updated the code to fix that issue as well.
Attached patch v03.



was (Author: bharatviswa):
Hi Jason,
Thank you for the review.
Good catch that the file is not actually being closed.
As it is a minor change, I updated the code to fix that issue as well.
Attached patch v02.


> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177059#comment-16177059
 ] 

Bharat Viswanadham commented on HADOOP-14881:
-

Hi Jason,
Thank you for the review.
Good catch that the file is not actually being closed.
As it is a minor change, I updated the code to fix that issue as well.
Attached patch v02.


> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14886) gridmix/SleepReducer should use Time.monotonicNow for measuring durations

2017-09-22 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14886?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-14886:

Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-14713)

> gridmix/SleepReducer should use Time.monotonicNow for measuring durations
> -
>
> Key: HADOOP-14886
> URL: https://issues.apache.org/jira/browse/HADOOP-14886
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Chetna Chaudhari
>Assignee: Chetna Chaudhari
>Priority: Minor
> Attachments: HADOOP-14886.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru reassigned HADOOP-14902:
---

Assignee: Hanisha Koneru

> LoadGenerator#genFile write close timing is incorrectly calculated
> --
>
> Key: HADOOP-14902
> URL: https://issues.apache.org/jira/browse/HADOOP-14902
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.4.0
>Reporter: Jason Lowe
>Assignee: Hanisha Koneru
>
> LoadGenerator#genFile's write close timing code looks like the following:
> {code}
> startTime = Time.now();
> executionTime[WRITE_CLOSE] += (Time.now() - startTime);
> {code}
> That code will generate a zero (or near zero) write close timing since it 
> isn't actually closing the file in-between timestamp lookups.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177049#comment-16177049
 ] 

Jason Lowe commented on HADOOP-14881:
-

Filed HADOOP-14902 to track the incorrect WRITE_CLOSE timing issue.


> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14902) LoadGenerator#genFile write close timing is incorrectly calculated

2017-09-22 Thread Jason Lowe (JIRA)
Jason Lowe created HADOOP-14902:
---

 Summary: LoadGenerator#genFile write close timing is incorrectly 
calculated
 Key: HADOOP-14902
 URL: https://issues.apache.org/jira/browse/HADOOP-14902
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.4.0
Reporter: Jason Lowe


LoadGenerator#genFile's write close timing code looks like the following:
{code}
startTime = Time.now();
executionTime[WRITE_CLOSE] += (Time.now() - startTime);
{code}

That code will generate a zero (or near zero) write close timing since it isn't 
actually closing the file in-between timestamp lookups.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14901:

Attachment: HADOOP-14901.001.patch

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.
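
A minimal illustrative sketch of that pattern: a single shared ObjectMapper plus derived, thread-safe ObjectReader/ObjectWriter instances. The {{JsonUtil}} class name and the Map payload type are hypothetical, not from the patch:
{code}
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;
import java.io.IOException;
import java.util.Map;

public class JsonUtil {
  // Create the mapper once and reuse it; readers/writers derived from it
  // are immutable and safe to share across threads.
  private static final ObjectMapper MAPPER = new ObjectMapper();
  private static final ObjectWriter WRITER = MAPPER.writerFor(Map.class);
  private static final ObjectReader READER = MAPPER.readerFor(Map.class);

  static String toJson(Map<String, Object> m) throws IOException {
    return WRITER.writeValueAsString(m);
  }

  static Map<String, Object> fromJson(String json) throws IOException {
    return READER.readValue(json);
  }
}
{code}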



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14901:

Status: Patch Available  (was: Open)

> ReuseObjectMapper in Hadoop Common
> --
>
> Key: HADOOP-14901
> URL: https://issues.apache.org/jira/browse/HADOOP-14901
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
>Priority: Minor
> Attachments: HADOOP-14901.001.patch
>
>
> It is recommended to reuse ObjectMapper, if possible, for better performance. 
> We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
> some places: they are straightforward and thread safe.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14881) LoadGenerator should use Time.monotonicNow() to measure durations

2017-09-22 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177035#comment-16177035
 ] 

Jason Lowe commented on HADOOP-14881:
-

Thanks for updating the patch!

bq. also updated the variable startTime inside functions to startTime1 to 
resolve checkstyle issue

Nit: could we use something like {{startTimestamp}} or just {{timestamp}} 
rather than {{startTime1}}?  The latter implies there's a {{startTime2}} 
counterpart.

Looking closer there also appears to be a bug here:
{code}
-startTime = Time.now();
-executionTime[WRITE_CLOSE] += (Time.now() - startTime);
+startTime1 = Time.monotonicNow();
+executionTime[WRITE_CLOSE] += (Time.monotonicNow() - startTime1);
{code}

Both before and after the patch, the code grabs the current time into a local and 
then immediately subtracts that timestamp from the current time as the execution 
time of the file close, but it never actually closes the file in between.  I'll file 
a separate JIRA since this bug existed even in the previous code and isn't 
related to converting to monotonicNow().

> LoadGenerator should use Time.monotonicNow() to measure durations
> -
>
> Key: HADOOP-14881
> URL: https://issues.apache.org/jira/browse/HADOOP-14881
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Chetna Chaudhari
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14881.01.patch, HADOOP-14881.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14888) Use apidoc for REST API documentation

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177034#comment-16177034
 ] 

Hadoop QA commented on HADOOP-14888:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  4m 
55s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
36s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
22s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m 
10s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  5m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 39s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}114m 55s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14888 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888403/HADOOP-14888.001.patch
 |
| Optional Tests |  asflicense  shellcheck  shelldocs  compile  javac  javadoc  
mvninstall  mvnsite  unit  xml  |
| uname | Linux c6b1023a6215 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 08fca50 |
| Default Java | 1.8.0_144 |
| shellcheck | v0.4.6 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13359/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13359/testReport/ |
| modules | C: hadoop-project . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13359/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Use apidoc for REST API documentation
> -
>
> Key: HADOOP-14888
> URL: https://issues.apache.org/jira/browse/HADOOP-14888

[jira] [Commented] (HADOOP-14872) CryptoInputStream should implement unbuffer

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16177016#comment-16177016
 ] 

Hadoop QA commented on HADOOP-14872:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
47s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 41s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 58s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}197m 44s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
|   | hadoop.hdfs.server.namenode.TestReencryption |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14872 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888523/HADOOP-14872.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 09236aa5d204 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 908d8e9 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13358/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13358/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13358/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 

[jira] [Resolved] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2017-09-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang resolved HADOOP-14799.
--
Resolution: Fixed

Let's re-resolve and track the follow-on work in another JIRA. Thanks Ray, 
Steve.

> Update nimbus-jose-jwt to 4.41.1
> 
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, 
> HADOOP-14799.003.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14901) ReuseObjectMapper in Hadoop Common

2017-09-22 Thread Hanisha Koneru (JIRA)
Hanisha Koneru created HADOOP-14901:
---

 Summary: ReuseObjectMapper in Hadoop Common
 Key: HADOOP-14901
 URL: https://issues.apache.org/jira/browse/HADOOP-14901
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hanisha Koneru
Assignee: Hanisha Koneru
Priority: Minor


It is recommended to reuse ObjectMapper, if possible, for better performance. 
We can also use ObjectReader or ObjectWriter to replace the ObjectMapper in 
some places: they are straightforward and thread-safe.
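
As a hedged illustration of the suggestion above (the class and method names 
here are hypothetical, not taken from the attached patch), a single shared 
ObjectMapper can hand out ObjectReader/ObjectWriter instances, which Jackson 
documents as immutable and thread-safe:
{code:java}
import java.io.IOException;
import java.util.Map;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.ObjectReader;
import com.fasterxml.jackson.databind.ObjectWriter;

// Hypothetical example, not taken from the patch: build the ObjectMapper once
// and reuse derived readers/writers instead of creating a mapper per call.
public final class JsonSerDeser {
  private static final ObjectMapper MAPPER = new ObjectMapper();
  private static final ObjectReader MAP_READER = MAPPER.readerFor(Map.class);
  private static final ObjectWriter WRITER = MAPPER.writer();

  private JsonSerDeser() {
  }

  static Map<String, Object> parse(String json) throws IOException {
    return MAP_READER.readValue(json);
  }

  static String toJson(Object value) throws IOException {
    return WRITER.writeValueAsString(value);
  }
}
{code}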




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14900) Errors in trunk with early versions of Java 8

2017-09-22 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176920#comment-16176920
 ] 

Ray Chiang commented on HADOOP-14900:
-

Another data point: 1.8u40 works.

> Errors in trunk with early versions of Java 8
> -
>
> Key: HADOOP-14900
> URL: https://issues.apache.org/jira/browse/HADOOP-14900
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>
> Just to document the issue in case other developers run into it: compiling 
> trunk with jdk 1.8u05 gives the following errors:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-aws: Compilation failure: Compilation 
> failure:
> [ERROR] 
> /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[45,5]
>  reference to intercept is ambiguous
> [ERROR]   both method 
> intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
>  in org.apache.hadoop.test.LambdaTestUtils and method 
> intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
>  in org.apache.hadoop.test.LambdaTestUtils match
> [ERROR] 
> /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[69,5]
>  reference to intercept is ambiguous
> [ERROR]   both method 
> intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
>  in org.apache.hadoop.test.LambdaTestUtils and method 
> intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
>  in org.apache.hadoop.test.LambdaTestUtils match
> [ERROR] 
> /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[94,5]
>  reference to intercept is ambiguous
> [ERROR]   both method 
> intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
>  in org.apache.hadoop.test.LambdaTestUtils and method 
> intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
>  in org.apache.hadoop.test.LambdaTestUtils match
> [ERROR] 
> /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[120,5]
>  reference to intercept is ambiguous
> [ERROR]   both method 
> intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
>  in org.apache.hadoop.test.LambdaTestUtils and method 
> intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
>  in org.apache.hadoop.test.LambdaTestUtils match
> {noformat}
> Based on my testing, jdk 1.8u92 doesn't produce this error.
> I don't think this issue needs to be fixed in the code, but it is worth 
> documenting in JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-09-22 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176913#comment-16176913
 ] 

Thomas Marquardt commented on HADOOP-14768:
---

I had two goals: 1) reduce risk and 2) allow you to proceed quickly.  I believe 
the change is still risky.  I understand the desire to reduce duplication, but 
the new code path can replace the old one once you have feedback on it.

Let me provide another solution which I think would still meet the goals:

{code:java}
public boolean delete(...) throws IOException {
  if (azureAuthorization && isStickyBitCheckViolated(...)) {
throw new WasbAuthorizationException("Sticky bit violation.");
  }
  // current delete code goes here
}
{code}



> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
> Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, 
> HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, 
> HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any user 
> can delete another user's file because the parent has WRITE permission for all 
> users.
> The purpose of this jira is to implement a sticky bit equivalent for the 
> 'delete' call when authorization is enabled.
> Note: Sticky bit implementation for the 'Rename' operation is not done as part 
> of this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14900) Errors in trunk with early versions of Java 8

2017-09-22 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14900?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176889#comment-16176889
 ] 

Ray Chiang commented on HADOOP-14900:
-

I'll leave this JIRA open a few days for comments, but will otherwise close it 
and perhaps add a Release Note for 3.0.0 GA.

> Errors in trunk with early versions of Java 8
> -
>
> Key: HADOOP-14900
> URL: https://issues.apache.org/jira/browse/HADOOP-14900
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>
> Just to document the issue in case other developers run into it: compiling 
> trunk with jdk 1.8u05 gives the following errors:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
> (default-testCompile) on project hadoop-aws: Compilation failure: Compilation 
> failure:
> [ERROR] 
> /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[45,5]
>  reference to intercept is ambiguous
> [ERROR]   both method 
> intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
>  in org.apache.hadoop.test.LambdaTestUtils and method 
> intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
>  in org.apache.hadoop.test.LambdaTestUtils match
> [ERROR] 
> /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[69,5]
>  reference to intercept is ambiguous
> [ERROR]   both method 
> intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
>  in org.apache.hadoop.test.LambdaTestUtils and method 
> intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
>  in org.apache.hadoop.test.LambdaTestUtils match
> [ERROR] 
> /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[94,5]
>  reference to intercept is ambiguous
> [ERROR]   both method 
> intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
>  in org.apache.hadoop.test.LambdaTestUtils and method 
> intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
>  in org.apache.hadoop.test.LambdaTestUtils match
> [ERROR] 
> /root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[120,5]
>  reference to intercept is ambiguous
> [ERROR]   both method 
> intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
>  in org.apache.hadoop.test.LambdaTestUtils and method 
> intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
>  in org.apache.hadoop.test.LambdaTestUtils match
> {noformat}
> Based on my testing, jdk 1.8u92 doesn't produce this error.
> I don't think this issue needs to be fixed in the code, but it is worth 
> documenting in JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14900) Errors in trunk with early versions of Java 8

2017-09-22 Thread Ray Chiang (JIRA)
Ray Chiang created HADOOP-14900:
---

 Summary: Errors in trunk with early versions of Java 8
 Key: HADOOP-14900
 URL: https://issues.apache.org/jira/browse/HADOOP-14900
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 3.0.0-beta1
Reporter: Ray Chiang
Assignee: Ray Chiang


Just to document the issue in case other developers run into it: compiling 
trunk with jdk 1.8u05 gives the following errors:

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:testCompile 
(default-testCompile) on project hadoop-aws: Compilation failure: Compilation 
failure:
[ERROR] 
/root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[45,5]
 reference to intercept is ambiguous
[ERROR]   both method 
intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
 in org.apache.hadoop.test.LambdaTestUtils and method 
intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
 in org.apache.hadoop.test.LambdaTestUtils match
[ERROR] 
/root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[69,5]
 reference to intercept is ambiguous
[ERROR]   both method 
intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
 in org.apache.hadoop.test.LambdaTestUtils and method 
intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
 in org.apache.hadoop.test.LambdaTestUtils match
[ERROR] 
/root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[94,5]
 reference to intercept is ambiguous
[ERROR]   both method 
intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
 in org.apache.hadoop.test.LambdaTestUtils and method 
intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
 in org.apache.hadoop.test.LambdaTestUtils match
[ERROR] 
/root/hadoop/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmValidation.java:[120,5]
 reference to intercept is ambiguous
[ERROR]   both method 
intercept(java.lang.Class,java.lang.String,org.apache.hadoop.test.LambdaTestUtils.VoidCallable)
 in org.apache.hadoop.test.LambdaTestUtils and method 
intercept(java.lang.Class,java.lang.String,java.util.concurrent.Callable)
 in org.apache.hadoop.test.LambdaTestUtils match
{noformat}

Based on my testing, jdk 1.8u92 doesn't produce this error.

I don't think this issue needs to be fixed in the code, but it is worth 
documenting in JIRA.
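
For anyone stuck on an older JDK, here is a hedged illustration (the class name 
and exception text below are made up for this sketch, not Hadoop source) of how 
the ambiguity arises and how an explicit cast to one of the two functional 
interfaces would presumably work around it:
{code:java}
import org.apache.hadoop.test.LambdaTestUtils;

// Hypothetical workaround sketch: early JDK 8 compilers cannot decide whether a
// void-returning lambda matches the Callable or the VoidCallable overload of
// LambdaTestUtils.intercept; casting the lambda picks one overload explicitly.
public class InterceptAmbiguityExample {
  public void example() throws Exception {
    LambdaTestUtils.intercept(IllegalArgumentException.class, "expected text",
        (LambdaTestUtils.VoidCallable) () -> {
          throw new IllegalArgumentException("expected text");
        });
  }
}
{code}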



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176875#comment-16176875
 ] 

Miklos Szegedi commented on HADOOP-14897:
-

bq. To use this example, I would not require that Hadoop compile and run 
against a particular build of openjdk. That's too restrictive.
Thank you [~chris.douglas] for the feedback. Here is the use case I am thinking 
about. Let's say I run my cluster with Hadoop 5.1 compiled with Java 7. 
Auditing Java 8 takes a lot of effort and carries potential issues, so in order 
to install 5.2 with minor bug fixes, I think I should not be required to install 
Java 8.
However, to follow your logic, if there is no public support for Java 7 anymore, 
a minor release should update to Java 8. That is indeed something to consider.

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-09-22 Thread Varada Hemeswari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176862#comment-16176862
 ] 

Varada Hemeswari commented on HADOOP-14768:
---

[~tmarquardt] In patch 5, the code path when auth is not enabled is exactly the 
same as it was previously, except that the code now handles the case where 
delete is issued for '/' (the root path); previously it used to throw a null 
pointer exception. Apart from that, the only change you see is a clear branch 
where we get the contents of the file to delete.

Maintaining separate paths from the beginning would be risky, since changes made 
to one may not be made to the other, and it would also mean a lot of duplicate 
code. I have also tested the majority of the delete scenarios in both the auth 
enabled and disabled cases. 

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
> Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, 
> HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, 
> HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any user 
> can delete another user's file because the parent has WRITE permission for all 
> users.
> The purpose of this jira is to implement a sticky bit equivalent for the 
> 'delete' call when authorization is enabled.
> Note: Sticky bit implementation for the 'Rename' operation is not done as part 
> of this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14888) Use apidoc for REST API documentation

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176842#comment-16176842
 ] 

Hadoop QA commented on HADOOP-14888:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13359/console in case of 
problems.


> Use apidoc for REST API documentation
> -
>
> Key: HADOOP-14888
> URL: https://issues.apache.org/jira/browse/HADOOP-14888
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: HADOOP-14888.001.patch
>
>
> More REST APIs are being developed in Hadoop, and it would be great to 
> standardize on the method of generating REST API documentation.
> Several methods are in use today:
> Swagger YAML
> Javadoc
> Wiki pages
> JIRA comments
> The most frequently used methods are JIRA comments and Wiki pages.  Both 
> methods are prone to data loss with the passage of time.  We need a more 
> effortless approach to maintaining REST API documentation.  Swagger YAML can 
> also get out of sync with reality if new methods are added to the Java code 
> directly.  Javadoc annotations seem like a good approach to maintaining REST 
> API documentation.  Both the Jersey and Atlassian communities have maven 
> plugins to help generate REST API documentation, but those maven plugins have 
> ceased to function.  After searching online for REST API documentation tools 
> for a bit, [apidoc|http://apidocjs.com/] is one library that stands out.  This 
> could be the ideal approach to managing Hadoop REST API documentation.  It 
> supports javadoc-like annotations and generates beautiful schema-change 
> documentation.
> If this is accepted, I will add apidoc installation to the dev-support 
> Dockerfile, and pom.xml changes for the javadoc plugin to ignore the custom 
> tags.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14696) parallel tests don't work for Windows

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176840#comment-16176840
 ] 

Hadoop QA commented on HADOOP-14696:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
51s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 
47s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  7s{color} | {color:orange} root: The patch generated 1 new + 15 unchanged - 
0 fixed = 16 total (was 15) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 57s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}112m 20s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
57s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
36s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}249m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
|   | hadoop.util.TestBasicDiskValidator |
|   | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestReencryptionWithKMS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14696 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12887479/HADOOP-14696.07.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux bd4c8e7b8eff 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 

[jira] [Updated] (HADOOP-14888) Use apidoc for REST API documentation

2017-09-22 Thread Eric Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Yang updated HADOOP-14888:
---
Status: Patch Available  (was: Open)

This patch includes:

- Update to a newer version of nodejs for apidoc
- Add apidoc to Dockerfile
- Configure javadoc to ignore the apidoc custom tags
- Activate apidoc if there is Java code in the project.

> Use apidoc for REST API documentation
> -
>
> Key: HADOOP-14888
> URL: https://issues.apache.org/jira/browse/HADOOP-14888
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, documentation
>Reporter: Eric Yang
>Assignee: Eric Yang
> Attachments: HADOOP-14888.001.patch
>
>
> More REST APIs are being developed in Hadoop, and it would be great to 
> standardize on the method of generating REST API documentation.
> Several methods are in use today:
> Swagger YAML
> Javadoc
> Wiki pages
> JIRA comments
> The most frequently used methods are JIRA comments and Wiki pages.  Both 
> methods are prone to data loss with the passage of time.  We need a more 
> effortless approach to maintaining REST API documentation.  Swagger YAML can 
> also get out of sync with reality if new methods are added to the Java code 
> directly.  Javadoc annotations seem like a good approach to maintaining REST 
> API documentation.  Both the Jersey and Atlassian communities have maven 
> plugins to help generate REST API documentation, but those maven plugins have 
> ceased to function.  After searching online for REST API documentation tools 
> for a bit, [apidoc|http://apidocjs.com/] is one library that stands out.  This 
> could be the ideal approach to managing Hadoop REST API documentation.  It 
> supports javadoc-like annotations and generates beautiful schema-change 
> documentation.
> If this is accepted, I will add apidoc installation to the dev-support 
> Dockerfile, and pom.xml changes for the javadoc plugin to ignore the custom 
> tags.
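
For context, a small illustration of the javadoc-like annotations apidoc 
consumes (the resource, endpoint, and fields below are examples made up for 
this sketch, not an existing Hadoop REST API):
{code:java}
// Illustrative only: apidoc reads these custom tags out of comment blocks.
public class ClusterInfoResource {
  /**
   * @api {get} /ws/v1/cluster/info Cluster information
   * @apiName GetClusterInfo
   * @apiGroup Cluster
   * @apiSuccess {String} id    Cluster id.
   * @apiSuccess {String} state Current cluster state.
   */
  public String getClusterInfo() {
    return "{}"; // placeholder body; apidoc only parses the comment above
  }
}
{code}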



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-09-22 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176798#comment-16176798
 ] 

Thomas Marquardt commented on HADOOP-14768:
---

I opened HADOOP-14768.005.patch and still see major changes to 
{{NativeAzureFileSystem.delete}}.  

The point of my earlier feedback is that we should minimize risk for medium- to 
high-risk changes that are going to be ported to branch-2.  We can do this by 
using config to enable the new functionality without a significant impact on 
existing functionality.

What I had in mind is that you would leave the current delete code as-is and 
add a new delete method for the case when authorization is enabled:

{code:java}

public boolean delete(...) throws IOException {
  if (azureAuthorization) {
return deleteWithAuthEnabled(...);
  }
  // current delete code goes here
}

private boolean deleteWithAuthEnabled(...) throws IOException {
  // new delete code for the case when authorization is enabled goes here
}
{code}




> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
> Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, 
> HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, 
> HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any user 
> can delete another user's file because the parent has WRITE permission for all 
> users.
> The purpose of this jira is to implement a sticky bit equivalent for the 
> 'delete' call when authorization is enabled.
> Note: Sticky bit implementation for the 'Rename' operation is not done as part 
> of this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176793#comment-16176793
 ] 

Chris Douglas commented on HADOOP-14897:


bq. I would assume that a minor release requires only minor releases in 
dependencies
That's a reasonable proposal. As I read the current draft, upgrading minor 
versions of dependencies is prohibited.

bq. C++ standards before C++14, this requirement would assume that 5.1 will not 
add a requirement of C++14 as a minimum
Also reasonable, but also different from the current text.

bq. would you allow Hadoop 2.9 to ship with a requirement of Java 8, if 2.8 
depended on Java 7 before?
To use this example, I would not require that Hadoop compile and run against a 
particular build of openjdk. That's too restrictive.

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Miklos Szegedi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176747#comment-16176747
 ] 

Miklos Szegedi commented on HADOOP-14897:
-

[~chris.douglas] Given that we are talking about external dependencies, we 
cannot assume that they use the same versioning standard. However, talking 
about the Hadoop versioning standard, I would assume that a minor release 
requires only minor releases in dependencies. Let's be more specific with an 
example. If we ship Hadoop 5.0 with support for C++ standards before C++14, 
this requirement would assume that 5.1 will not add a requirement of C++14 as 
a minimum.
In the case of Java I would rephrase this question: would you allow Hadoop 2.9 
to ship with a requirement of Java 8, if 2.8 depended on Java 7 before?

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-09-22 Thread Varada Hemeswari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176725#comment-16176725
 ] 

Varada Hemeswari commented on HADOOP-14768:
---

[~ste...@apache.org] Interestingly, it doesn't fail when I use this command to 
run the tests:
{code}
mvn -Dtest=ITestWasbUriAndConfiguration#testCredsFromCredentialProvider test
{code}

However, it fails if I use:
{code}
mvn -T 1C -Dparallel-tests -DtestsThreadCount=8 clean verify
{code}

The parallelization seems to be causing some issue. I am not quite familiar 
with debugging it. Can you take it from here?

Also, can you please take a look at the patch? I am blocked on this for the 
sticky bit work on rename.
Thanks.

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
> Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, 
> HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, 
> HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any user 
> can delete another user's file because the parent has WRITE permission for all 
> users.
> The purpose of this jira is to implement a sticky bit equivalent for the 
> 'delete' call when authorization is enabled.
> Note: Sticky bit implementation for the 'Rename' operation is not done as part 
> of this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176718#comment-16176718
 ] 

Chris Douglas commented on HADOOP-14897:


Thanks [~templedf]. This was filed neutrally, in case there are really 
compelling reasons for the requirement.

[~miklos.szeg...@cloudera.com], you raised this in HADOOP-13714. I agree that 
patch versions should, except when addressing security vulnerabilities or other 
"must fix" issues, remain stable between patch releases. Given that our major 
releases tend to be long-lived, what would be a reasonable policy for native 
deps in minor releases? Is there a set of standards used by packagers we could 
adopt?

If our compatibility guidelines are setting users' expectations, I hesitate to 
promise more than best-effort, here.

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-09-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176689#comment-16176689
 ] 

Steve Loughran commented on HADOOP-14768:
-

bq.  the test failure 
ITestWasbUriAndConfiguration.testCredsFromCredentialProvider is not related to 
my patch since it was failing even without my changes.

It's interesting that it is failing for you though, because it isn't for me. 
And the stack trace implies that 
{{AzureNativeFileSystemStore.getAccountKeyFromConfiguration(account, conf)}} is 
returning null for you. Could you take a look at the test in the debugger and 
see what's going on? Either it is a test that is brittle to some people's 
setups (yours), or there's an actual problem in the production code that we are 
lucky to see consistently on one machine...

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
> Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, 
> HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, 
> HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any user 
> can delete another user's file because the parent has WRITE permission for all 
> users.
> The purpose of this jira is to implement a sticky bit equivalent for the 
> 'delete' call when authorization is enabled.
> Note: Sticky bit implementation for the 'Rename' operation is not done as part 
> of this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14897) Loosen compatibility guidelines for native dependencies

2017-09-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176684#comment-16176684
 ] 

Allen Wittenauer commented on HADOOP-14897:
---

Probably worth pointing out that as currently written, the JRE (read: runtime) 
requirements are less strict than the C compiler (read: build time) 
requirements.

> Loosen compatibility guidelines for native dependencies
> ---
>
> Key: HADOOP-14897
> URL: https://issues.apache.org/jira/browse/HADOOP-14897
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, native
>Reporter: Chris Douglas
>Assignee: Daniel Templeton
>Priority: Blocker
>
> Within a major version, the compatibility guidelines forbid raising the 
> minimum required version of any native dependency or tool required to build 
> native components.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14872) CryptoInputStream should implement unbuffer

2017-09-22 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14872:

Attachment: HADOOP-14872.005.patch

Patch 005
* Rename some capabilities. Add the prefix "in:" to UNBUFFER and READAHEAD.

> CryptoInputStream should implement unbuffer
> ---
>
> Key: HADOOP-14872
> URL: https://issues.apache.org/jira/browse/HADOOP-14872
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, 
> HADOOP-14872.003.patch, HADOOP-14872.004.patch, HADOOP-14872.005.patch
>
>
> Discovered in IMPALA-5909.
> Opening an encrypted HDFS file returns a chain of wrapped input streams:
> {noformat}
> HdfsDataInputStream
>   CryptoInputStream
> DFSInputStream
> {noformat}
> If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, 
> FSDataInputStream#unbuffer will be called:
> {code:java}
> try {
>   ((CanUnbuffer)in).unbuffer();
> } catch (ClassCastException e) {
>   throw new UnsupportedOperationException("this stream does not " +
>   "support unbuffering.");
> }
> {code}
> If the {{in}} class does not implement CanUnbuffer, UOE will be thrown. If 
> the application is not careful, tons of UOEs will show up in logs.
> In comparison, opening a non-encrypted HDFS file returns this chain:
> {noformat}
> HdfsDataInputStream
>   DFSInputStream
> {noformat}
> DFSInputStream implements CanUnbuffer.
> It is good for CryptoInputStream to implement CanUnbuffer for three reasons:
> * It releases buffers, caches, or any other resources when instructed
> * It can call unbuffer on its wrapped DFSInputStream
> * It avoids the UOE described above. Applications may not handle the UOE very 
> well.
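
A minimal, hedged sketch of the delegation described above (simplified; the 
real CryptoInputStream has its own buffers, fields, and capability handling):
{code:java}
import java.io.FilterInputStream;
import java.io.InputStream;

import org.apache.hadoop.fs.CanUnbuffer;

// Hypothetical, simplified sketch; not the actual CryptoInputStream change.
class UnbufferingWrapperStream extends FilterInputStream implements CanUnbuffer {

  UnbufferingWrapperStream(InputStream wrapped) {
    super(wrapped);
  }

  @Override
  public void unbuffer() {
    // 1. Release any locally held buffers or cached decryption state here.
    // 2. Forward the call only if the wrapped stream can actually unbuffer,
    //    which avoids the UnsupportedOperationException described above.
    if (in instanceof CanUnbuffer) {
      ((CanUnbuffer) in).unbuffer();
    }
  }
}
{code}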



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB

2017-09-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14899:

Issue Type: Sub-task  (was: New Feature)
Parent: HADOOP-14552

> Restrict Access to setPermission operation when authorization is enabled in 
> WASB
> 
>
> Key: HADOOP-14899
> URL: https://issues.apache.org/jira/browse/HADOOP-14899
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Kannapiran Srinivasan
>Assignee: Kannapiran Srinivasan
>  Labels: fs, secure, wasb
>
> In the case of authorization-enabled WASB clusters, we need to restrict 
> setting permissions on files or folders to the owner or a list of privileged 
> users.
> Currently, in the WASB implementation, no check is performed on the 
> setPermission call even when authorization is enabled. In this JIRA we would 
> like to add a check on the setPermission call in the NativeAzureFileSystem 
> implementation so that only the owner or the privileged list of users can 
> change the permissions of files/folders.
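
For illustration, a rough, hedged sketch of the kind of guard described (the 
helper names {{azureAuthorization}} and {{privilegedUsers}} are assumptions for 
this sketch, not from an actual patch):
{code:java}
// Hypothetical sketch only; the real NativeAzureFileSystem fields and helpers
// will differ. The idea: when authorization is enabled, allow setPermission
// only for the owner or a configured list of privileged users.
@Override
public void setPermission(Path p, FsPermission permission) throws IOException {
  if (azureAuthorization) {
    FileStatus status = getFileStatus(p);
    String user = UserGroupInformation.getCurrentUser().getShortUserName();
    if (!user.equals(status.getOwner()) && !privilegedUsers.contains(user)) {
      throw new WasbAuthorizationException(
          "User " + user + " is not permitted to change permissions of " + p);
    }
  }
  // existing setPermission logic goes here
}
{code}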



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB

2017-09-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-14899:
---

Assignee: Kannapiran Srinivasan

> Restrict Access to setPermission operation when authorization is enabled in 
> WASB
> 
>
> Key: HADOOP-14899
> URL: https://issues.apache.org/jira/browse/HADOOP-14899
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Reporter: Kannapiran Srinivasan
>Assignee: Kannapiran Srinivasan
>  Labels: fs, secure, wasb
>
> In the case of authorization-enabled WASB clusters, we need to restrict 
> setting permissions on files or folders to the owner or a list of privileged 
> users.
> Currently, in the WASB implementation, no check is performed on the 
> setPermission call even when authorization is enabled. In this JIRA we would 
> like to add a check on the setPermission call in the NativeAzureFileSystem 
> implementation so that only the owner or the privileged list of users can 
> change the permissions of files/folders.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-09-22 Thread Varada Hemeswari (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176491#comment-16176491
 ] 

Varada Hemeswari commented on HADOOP-14768:
---

[~tmarquardt], I have submitted [^HADOOP-14768.005.patch] with changes such 
that the new code path takes effect only when authorization is enabled. Please 
do review.

Also I have confirmed that the test failure 
*ITestWasbUriAndConfiguration.testCredsFromCredentialProvider* is not related 
to my patch since it was failing even without my changes.

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
> Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, 
> HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, 
> HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required since any user 
> can delete another user's file because the parent has WRITE permission for all 
> users.
> The purpose of this jira is to implement a sticky bit equivalent for the 
> 'delete' call when authorization is enabled.
> Note: Sticky bit implementation for the 'Rename' operation is not done as part 
> of this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11123) Uber-JIRA: Hadoop on Java 9

2017-09-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176479#comment-16176479
 ] 

Allen Wittenauer commented on HADOOP-11123:
---

Adding a dependency on HADOOP-14816, which adds JDK9 to the Dockerfile as part 
of an upgrade to Ubuntu Xenial.

> Uber-JIRA: Hadoop on Java 9
> ---
>
> Key: HADOOP-11123
> URL: https://issues.apache.org/jira/browse/HADOOP-11123
> Project: Hadoop Common
>  Issue Type: Task
>Affects Versions: 3.0.0-alpha1
> Environment: Java 9
>Reporter: Steve Loughran
>
> JIRA to cover/track issues related to Hadoop on Java 9.
> Java 9 will bring some significant changes, one of which is the removal of 
> various {{com.sun}} classes. These removals need to be handled or Hadoop will 
> not be able to run on a Java 9 JVM.
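
To illustrate the kind of handling this typically requires (a well-known 
example chosen for illustration, not a list taken from this JIRA), code that 
referenced an internal class directly can look it up reflectively and fall 
back when the class has moved or been removed:

{code:java}
// Hedged illustration: cope with a removed/relocated internal class on Java 9
// by probing candidates reflectively. The class names are a common example
// (sun.misc.Cleaner moved in Java 9), not drawn from this JIRA.
public final class CleanerProbe {

  private CleanerProbe() {
  }

  /** Returns the first Cleaner class available on this JVM, or null if none is. */
  public static Class<?> findCleanerClass() {
    String[] candidates = {
        "sun.misc.Cleaner",           // present on Java 8
        "jdk.internal.ref.Cleaner"    // its Java 9 location
    };
    for (String name : candidates) {
      try {
        return Class.forName(name);
      } catch (ClassNotFoundException e) {
        // not on this JVM; try the next candidate
      }
    }
    return null;
  }
}
{code}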



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14816) Update Dockerfile to use Xenial

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176462#comment-16176462
 ] 

Hadoop QA commented on HADOOP-14816:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 25m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  2m  
7s{color} | {color:blue} Maven dependency ordering for branch {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 1s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} The patch generated 0 new + 74 unchanged - 30 fixed = 
74 total (was 104) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 28m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14816 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888401/HADOOP-14816.04.patch 
|
| Optional Tests |  asflicense  shellcheck  shelldocs  |
| uname | Linux 2030825f874d 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c71d137 |
| shellcheck | v0.4.6 |
| modules | C:  U:  |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13356/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Update Dockerfile to use Xenial
> ---
>
> Key: HADOOP-14816
> URL: https://issues.apache.org/jira/browse/HADOOP-14816
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14816.00.patch, HADOOP-14816.01.patch, 
> HADOOP-14816.02.patch, HADOOP-14816.03.patch, HADOOP-14816.04.patch
>
>
> It's probably time to update the 3.0 Dockerfile to use Xenial given that 
> Trusty is on life support from Ubuntu.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14816) Update Dockerfile to use Xenial

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176418#comment-16176418
 ] 

Hadoop QA commented on HADOOP-14816:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13356/console in case of 
problems.


> Update Dockerfile to use Xenial
> ---
>
> Key: HADOOP-14816
> URL: https://issues.apache.org/jira/browse/HADOOP-14816
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14816.00.patch, HADOOP-14816.01.patch, 
> HADOOP-14816.02.patch, HADOOP-14816.03.patch, HADOOP-14816.04.patch
>
>
> It's probably time to update the 3.0 Dockerfile to use Xenial given that 
> Trusty is on life support from Ubuntu.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14816) Update Dockerfile to use Xenial

2017-09-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176414#comment-16176414
 ] 

Allen Wittenauer commented on HADOOP-14816:
---

We hit an issue with the Docker build on H10. We will use this JIRA to test with.

> Update Dockerfile to use Xenial
> ---
>
> Key: HADOOP-14816
> URL: https://issues.apache.org/jira/browse/HADOOP-14816
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14816.00.patch, HADOOP-14816.01.patch, 
> HADOOP-14816.02.patch, HADOOP-14816.03.patch, HADOOP-14816.04.patch
>
>
> It's probably time to update the 3.0 Dockerfile to use Xenial given that 
> Trusty is on life support from Ubuntu.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB

2017-09-22 Thread Kannapiran Srinivasan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kannapiran Srinivasan updated HADOOP-14899:
---
Description: 
In authorization-enabled WASB clusters, we need to restrict setting 
permissions on files or folders to the owner or a list of privileged users.

Currently, the WASB implementation performs no check on the setPermission call 
even when authorization is enabled. In this JIRA we would like to add such a 
check to the setPermission call in the NativeAzureFileSystem implementation so 
that only the owner or the privileged list of users can change the permissions 
of files/folders.

  was:In case of authorization enabled Wasb clusters, we need to restrict 
setting permissions on files or folders to owner or list of privileged users.


> Restrict Access to setPermission operation when authorization is enabled in 
> WASB
> 
>
> Key: HADOOP-14899
> URL: https://issues.apache.org/jira/browse/HADOOP-14899
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Reporter: Kannapiran Srinivasan
>  Labels: fs, secure, wasb
>
> In authorization-enabled WASB clusters, we need to restrict setting 
> permissions on files or folders to the owner or a list of privileged users.
> Currently, the WASB implementation performs no check on the setPermission 
> call even when authorization is enabled. In this JIRA we would like to add 
> such a check to the setPermission call in the NativeAzureFileSystem 
> implementation so that only the owner or the privileged list of users can 
> change the permissions of files/folders.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14899) Restrict Access to setPermission operation when authorization is enabled in WASB

2017-09-22 Thread Kannapiran Srinivasan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14899?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kannapiran Srinivasan updated HADOOP-14899:
---
Summary: Restrict Access to setPermission operation when authorization is 
enabled in WASB  (was: Restrict Access to set stickybit operation when 
authorization is enabled in WASB)

> Restrict Access to setPermission operation when authorization is enabled in 
> WASB
> 
>
> Key: HADOOP-14899
> URL: https://issues.apache.org/jira/browse/HADOOP-14899
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/azure
>Reporter: Kannapiran Srinivasan
>  Labels: fs, secure, wasb
>
> In authorization-enabled WASB clusters, we need to restrict setting 
> permissions on files or folders to the owner or a list of privileged users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14899) Restrict Access to set stickybit operation when authorization is enabled in WASB

2017-09-22 Thread Kannapiran Srinivasan (JIRA)
Kannapiran Srinivasan created HADOOP-14899:
--

 Summary: Restrict Access to set stickybit operation when 
authorization is enabled in WASB
 Key: HADOOP-14899
 URL: https://issues.apache.org/jira/browse/HADOOP-14899
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs/azure
Reporter: Kannapiran Srinivasan


In authorization-enabled WASB clusters, we need to restrict setting 
permissions on files or folders to the owner or a list of privileged users.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-09-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176260#comment-16176260
 ] 

Hadoop QA commented on HADOOP-14768:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 7 
new + 24 unchanged - 1 fixed = 31 total (was 25) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
1s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HADOOP-14768 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12888478/HADOOP-14768.005.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d8a7a8e5b786 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / c71d137 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13355/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13355/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/13355/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  

[jira] [Updated] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-09-22 Thread Varada Hemeswari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varada Hemeswari updated HADOOP-14768:
--
Status: Patch Available  (was: Open)

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
> Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, 
> HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, 
> HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required because any 
> user can delete another user's file when the parent has WRITE permission for 
> all users.
> The purpose of this JIRA is to implement a sticky bit equivalent for the 
> 'delete' call when authorization is enabled.
> Note: Sticky bit implementation for the 'Rename' operation is not done as 
> part of this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14768) Honoring sticky bit during Deletion when authorization is enabled in WASB

2017-09-22 Thread Varada Hemeswari (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varada Hemeswari updated HADOOP-14768:
--
Status: Open  (was: Patch Available)

> Honoring sticky bit during Deletion when authorization is enabled in WASB
> -
>
> Key: HADOOP-14768
> URL: https://issues.apache.org/jira/browse/HADOOP-14768
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Varada Hemeswari
>Assignee: Varada Hemeswari
>  Labels: fs, secure, wasb
> Attachments: HADOOP-14768.001.patch, HADOOP-14768.002.patch, 
> HADOOP-14768.003.patch, HADOOP-14768.003.patch, HADOOP-14768.004.patch, 
> HADOOP-14768.004.patch, HADOOP-14768.005.patch
>
>
> When authorization is enabled in the WASB filesystem, there is a need for a 
> sticky bit in cases where multiple users can create files under a shared 
> directory. This additional check for the sticky bit is required because any 
> user can delete another user's file when the parent has WRITE permission for 
> all users.
> The purpose of this JIRA is to implement a sticky bit equivalent for the 
> 'delete' call when authorization is enabled.
> Note: Sticky bit implementation for the 'Rename' operation is not done as 
> part of this JIRA.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14872) CryptoInputStream should implement unbuffer

2017-09-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16176240#comment-16176240
 ] 

Steve Loughran commented on HADOOP-14872:
-

This is *exactly* what I'm thinking of!

# we should add a comment to CanUnbuffer saying "if you implement this then do 
stream capabilities"
# And that crypto test of yours should probe the stream capabilities in an 
assert to make sure it works

Now that we are adding read-stream capabilities, maybe we should define the 
naming scheme here (see the sketch after this list). I'm thinking:
* Any filesystem must prefix with its scheme, e.g. "s3a", "adl", for a 
scheme-specific option
* And we could use "in:" for input options
* No prefix: output, FS independent
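
A hedged sketch of the capability probe suggested in point 2, assuming a 
recent Hadoop where FSDataInputStream exposes StreamCapabilities#hasCapability 
and supports unbuffer(); the capability key string follows the "in:" naming 
idea above and is an assumption, not a confirmed constant:

{code:java}
// Hedged sketch of probing stream capabilities in a test before unbuffering.
// The key "in:unbuffer" is an assumed name following the scheme proposed above.
import static org.junit.Assert.assertTrue;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UnbufferCapabilityProbe {

  /** Asserts that an opened stream advertises unbuffer support, then unbuffers it. */
  public static void assertUnbufferSupported(FileSystem fs, Path file) throws Exception {
    try (FSDataInputStream in = fs.open(file)) {
      assertTrue("stream should advertise unbuffer support",
          in.hasCapability("in:unbuffer"));
      in.unbuffer(); // should release buffers without throwing
    }
  }
}
{code}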

> CryptoInputStream should implement unbuffer
> ---
>
> Key: HADOOP-14872
> URL: https://issues.apache.org/jira/browse/HADOOP-14872
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.6.4
>Reporter: John Zhuge
>Assignee: John Zhuge
> Attachments: HADOOP-14872.001.patch, HADOOP-14872.002.patch, 
> HADOOP-14872.003.patch, HADOOP-14872.004.patch
>
>
> Discovered in IMPALA-5909.
> Opening an encrypted HDFS file returns a chain of wrapped input streams:
> {noformat}
> HdfsDataInputStream
>   CryptoInputStream
>     DFSInputStream
> {noformat}
> If an application such as Impala or HBase calls HdfsDataInputStream#unbuffer, 
> FSDataInputStream#unbuffer will be called:
> {code:java}
> try {
>   ((CanUnbuffer)in).unbuffer();
> } catch (ClassCastException e) {
>   throw new UnsupportedOperationException("this stream does not " +
>   "support unbuffering.");
> }
> {code}
> If the {{in}} class does not implement CanUnbuffer, UOE will be thrown. If 
> the application is not careful, tons of UOEs will show up in logs.
> In comparison, opening a non-encrypted HDFS file returns this chain:
> {noformat}
> HdfsDataInputStream
>   DFSInputStream
> {noformat}
> DFSInputStream implements CanUnbuffer.
> It is good for CryptoInputStream to implement CanUnbuffer for three reasons 
> (a sketch of the delegation idea follows this list):
> * Release buffer, cache, or any other resource when instructed
> * Able to call its wrapped DFSInputStream unbuffer
> * Avoid the UOE described above. Applications may not handle the UOE very 
> well.
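
A hedged, simplified stand-in for the delegation idea in the second bullet: a 
wrapping stream that implements CanUnbuffer by forwarding to the stream it 
wraps (not the actual CryptoInputStream patch, whose internal buffer handling 
is omitted here):

{code:java}
// Simplified stand-in: forward unbuffer() to the wrapped stream when it can
// unbuffer, and stay silent otherwise to avoid the UOE noise described above.
import java.io.FilterInputStream;
import java.io.InputStream;

import org.apache.hadoop.fs.CanUnbuffer;

public class ForwardingUnbufferStream extends FilterInputStream implements CanUnbuffer {

  public ForwardingUnbufferStream(InputStream wrapped) {
    super(wrapped);
  }

  @Override
  public void unbuffer() {
    // A real crypto stream would also release its own decryption buffers here.
    if (in instanceof CanUnbuffer) {
      ((CanUnbuffer) in).unbuffer();
    }
  }
}
{code}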



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


