[jira] [Updated] (HADOOP-12294) Invalid fs.permissions.umask-mode setting should throw an error

2015-08-03 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe updated HADOOP-12294:

 Summary: Invalid fs.permissions.umask-mode setting should throw an 
error  (was: propagate parsing configuration error to client)
Target Version/s: 3.0.0
 Component/s: conf

To me this looks like a bug introduced by HADOOP-6521.  It has the confusing 
behavior that if one sets the new property to an invalid value while the old 
property has a valid setting, it silently falls back to the old property's 
setting rather than complaining about the bad value.  So users can think they 
set the property when in reality that setting was ignored, unless they watched 
the logs very closely and noticed the warning message.

However, it's been this way for a very long time, which makes it hard to know 
whether anyone has come to rely on this unintuitive behavior, intentionally or 
otherwise.  So in that sense I agree with zhihai and Allen that we should avoid 
any potential breakage in 2.x, and I propose we fix this for 3.x.
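
For concreteness, a hedged sketch of how the silent fallback could be observed 
from the command line. The old key name (dfs.umaskmode, deprecated by 
HADOOP-6521) and the expected outcome are assumptions based on this report, not 
verified against a particular release:

{code}
# Hypothetical reproduction: set a valid value for the deprecated key and an
# unparseable value for the new key (dfs.umaskmode is assumed to be the old key).
hadoop fs -D dfs.umaskmode=022 \
          -D fs.permissions.umask-mode=not-a-umask \
          -mkdir /tmp/umask-test
# Per this report, the command is expected to succeed using umask 022, with only
# a log warning noting that "not-a-umask" could not be parsed.
{code}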

 Invalid fs.permissions.umask-mode setting should throw an error
 ---

 Key: HADOOP-12294
 URL: https://issues.apache.org/jira/browse/HADOOP-12294
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HADOOP-12294.2.patch, HADOOP-12294.patch


 Provide better visibility of configuration parsing failures by logging the 
 full error message and propagating it back to the client.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12286) test-patch pylint plugin should support indent-string option

2015-08-03 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12286:

Attachment: HADOOP-12286.HADOOP-12111.00.patch

Attaching a patch. To make this modification more generic, I implemented 
PYLINT_OPTIONS instead of PYLINT_INDENT_STRING.

--indent-string no longer appears in pylint.sh, but the test patch below still 
passes the pylint check because the indent string is supplied by the hadoop 
personality via PYLINT_OPTIONS:

{code}
[sekikn@mobile hadoop]$ grep indent-string dev-support/test-patch.d/pylint.sh 
[sekikn@mobile hadoop]$ cat /tmp/test.patch 
diff --git a/dev-support/shelldocs.py b/dev-support/shelldocs.py
index fc7601a..96363a0 100755
--- a/dev-support/shelldocs.py
+++ b/dev-support/shelldocs.py
@@ -268,4 +268,4 @@ def main():
 
 if __name__ == "__main__":
   main()
-
+  print
[sekikn@mobile hadoop]$ dev-support/test-patch.sh 
--basedir=/Users/sekikn/dev/hadoop --project=hadoop /tmp/test.patch 

(snip)

| Vote |  Subsystem |  Runtime   | Comment

|  +1  |   @author  |  0m 00s| The patch does not contain any @author 
|  ||| tags.
|  +1  |asflicense  |  0m 22s| Patch does not generate ASF License 
|  ||| warnings.
|  +1  |pylint  |  0m 03s| There were no new pylint issues. 
|  +1  |whitespace  |  0m 00s| Patch has no whitespace issues. 
|  ||  0m 26s| 
{code}

Without PYLINT_OPTIONS, the same patch fails because pylint's default indent is 
four spaces:

{code}
[sekikn@mobile hadoop]$ grep PYLINT_OPTIONS dev-support/personality/hadoop.sh 
PYLINT_OPTIONS=--indent-string='  '
[sekikn@mobile hadoop]$ sed -i -e 's/\(PYLINT_OPTIONS\)/#\1/' 
dev-support/personality/hadoop.sh 
[sekikn@mobile hadoop]$ grep PYLINT_OPTIONS dev-support/personality/hadoop.sh 
#PYLINT_OPTIONS=--indent-string='  '
[sekikn@mobile hadoop]$ dev-support/test-patch.sh 
--basedir=/Users/sekikn/dev/hadoop --project=hadoop /tmp/test.patch 

(snip)

| Vote |  Subsystem |  Runtime   | Comment

|  +1  |   @author  |  0m 00s| The patch does not contain any @author 
|  ||| tags.
|  +1  |asflicense  |  0m 22s| Patch does not generate ASF License 
|  ||| warnings.
|  -1  |pylint  |  0m 02s| The applied patch generated 1 new pylint 
|  ||| issues (total was 318, now 319).
|  +1  |whitespace  |  0m 00s| Patch has no whitespace issues. 
|  ||  0m 25s| 


|| Subsystem || Report/Notes ||

| git revision | HADOOP-12111 / edaf238 |
| Optional Tests | asflicense pylint |
| uname | Darwin mobile.local 14.4.0 Darwin Kernel Version 14.4.0: Thu May 28 
11:35:04 PDT 2015; root:xnu-2782.30.5~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | /Users/sekikn/hadoop/dev-support/personality/hadoop.sh |
| Default Java | 1.7.0_80 |
| pylint | v1.4.4 |
| pylint | /private/tmp/test-patch-hadoop/25180/diff-patch-pylint.txt |
| Max memory used | 47MB |

(snip)

[sekikn@mobile hadoop]$ cat 
/private/tmp/test-patch-hadoop/25180/diff-patch-pylint.txt
dev-support/shelldocs.py:271: [W0311(bad-indentation), ] Bad indentation. Found 
2 spaces, expected 4
{code}

This patch also removes the meaningless awk '-F:' options.
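
For context, a minimal sketch of how a pylint plugin might thread an options 
variable like PYLINT_OPTIONS through to the pylint invocation. The function 
name and surrounding handling are assumptions for illustration, not the actual 
pylint.sh:

{code}
#!/usr/bin/env bash
# Hypothetical plugin fragment: user-supplied options are passed straight through
# to pylint; a personality could set e.g. PYLINT_OPTIONS="--disable=bad-indentation".
PYLINT=${PYLINT:-pylint}
PYLINT_OPTIONS=${PYLINT_OPTIONS:-}

pylint_check_file() {
  local file=$1
  # shellcheck disable=SC2086  # word-splitting of the options string is intended
  "${PYLINT}" ${PYLINT_OPTIONS} --reports=n "${file}"
}

pylint_check_file dev-support/shelldocs.py
{code}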

 test-patch pylint plugin should support indent-string option
 

 Key: HADOOP-12286
 URL: https://issues.apache.org/jira/browse/HADOOP-12286
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Kengo Seki
 Attachments: HADOOP-12286.HADOOP-12111.00.patch


 By default, pylint uses 4-space indentation, but each project has its own 
 indentation policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12286) test-patch pylint plugin should support indent-string option

2015-08-03 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12286:

Assignee: Kengo Seki
  Status: Patch Available  (was: Open)

 test-patch pylint plugin should support indent-string option
 

 Key: HADOOP-12286
 URL: https://issues.apache.org/jira/browse/HADOOP-12286
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Kengo Seki
Assignee: Kengo Seki
 Attachments: HADOOP-12286.HADOOP-12111.00.patch


 By default, pylint uses 4-space indentation, but each project has its own 
 indentation policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12286) test-patch pylint plugin should support indent-string option

2015-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14651926#comment-14651926
 ] 

Hadoop QA commented on HADOOP-12286:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7401/console in case of 
problems.

 test-patch pylint plugin should support indent-string option
 

 Key: HADOOP-12286
 URL: https://issues.apache.org/jira/browse/HADOOP-12286
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Kengo Seki
Assignee: Kengo Seki
 Attachments: HADOOP-12286.HADOOP-12111.00.patch


 By default, pylint uses 4-space indentation, but each project has its own 
 indentation policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12286) test-patch pylint plugin should support indent-string option

2015-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14651927#comment-14651927
 ] 

Hadoop QA commented on HADOOP-12286:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
6s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 25s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12748462/HADOOP-12286.HADOOP-12111.00.patch
 |
| git revision | HADOOP-12111 / edaf238 |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| Max memory used | 48MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7401/console |


This message was automatically generated.

 test-patch pylint plugin should support indent-string option
 

 Key: HADOOP-12286
 URL: https://issues.apache.org/jira/browse/HADOOP-12286
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Kengo Seki
Assignee: Kengo Seki
 Attachments: HADOOP-12286.HADOOP-12111.00.patch


 By default, pylint uses 4-space indentation, but each project has its own 
 indentation policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12247) Convert 'unit' to 'junit'

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12247:
--
Attachment: HADOOP-12247.HADOOP-12111.02.patch

-02:
* fix the shellcheck nit.  Apparently both forms are supported, but consistency 
would be good. :)

Yeah, we should probably clean all of those up in one go.

 Convert 'unit' to 'junit'
 -

 Key: HADOOP-12247
 URL: https://issues.apache.org/jira/browse/HADOOP-12247
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12247.HADOOP-12111.00.patch, 
 HADOOP-12247.HADOOP-12111.01.patch, HADOOP-12247.HADOOP-12111.02.patch


 In order to support other unit test systems, we should convert 'unit' to be 
 specifically 'junit'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12296) when setnetgrent returns 0 in linux, exception should be thrown

2015-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652646#comment-14652646
 ] 

Allen Wittenauer commented on HADOOP-12296:
---

Is throwing an exception really the correct thing to do when a netgroup doesn't 
exist?  That seems particularly drastic.

 when setnetgrent returns 0 in linux, exception should be thrown
 ---

 Key: HADOOP-12296
 URL: https://issues.apache.org/jira/browse/HADOOP-12296
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HADOOP-12296.2.patch, HADOOP-12296.patch


 In Linux, setnetgrent returns 0 when something goes wrong, such as out of 
 memory, an unknown group, or an unavailable service. So errorMessage should be 
 set and an exception should be thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12247) Convert 'unit' to 'junit'

2015-08-03 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652664#comment-14652664
 ] 

Sean Busbey commented on HADOOP-12247:
--

+1

nits:

{code}
+  # shellcheck disable=2016
{code}

should be {{# shellcheck disable=SC2016}} ?

{code}
+JUNIT_FAILED_TESTS="${JUNIT_FAILED_TESTS} ${module_failed_tests}"
...SNIP...
+# shellcheck disable=SC2086
+populate_test_table "${jdk}Failed junit tests" ${JUNIT_FAILED_TESTS}
{code}

We could be using arrays to avoid needing a shellcheck disable here, I think? 
We do this in plenty of places, so I don't think it needs to get fixed in this 
patch.
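
For what it's worth, a runnable sketch of the array-based approach (the 
variable names are reused from the snippet above for illustration; the 
surrounding plugin code is assumed):

{code}
#!/usr/bin/env bash
declare -a JUNIT_FAILED_TESTS=()

# pretend these names came from parsing one module's test output
read -r -a module_failed_tests <<< "org.example.TestFoo org.example.TestBar"
JUNIT_FAILED_TESTS+=("${module_failed_tests[@]}")

# quoted array expansion is never word-split, so no shellcheck disable is needed
printf 'Failed junit test: %s\n' "${JUNIT_FAILED_TESTS[@]}"
{code}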

 Convert 'unit' to 'junit'
 -

 Key: HADOOP-12247
 URL: https://issues.apache.org/jira/browse/HADOOP-12247
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12247.HADOOP-12111.00.patch, 
 HADOOP-12247.HADOOP-12111.01.patch


 In order to support other unit test systems, we should convert 'unit' to be 
 specifically 'junit'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12297) test-patch docker mode fails if patch-dir is not specified or specified as an absolute path

2015-08-03 Thread Kengo Seki (JIRA)
Kengo Seki created HADOOP-12297:
---

 Summary: test-patch docker mode fails if patch-dir is not 
specified or specified as an absolute path
 Key: HADOOP-12297
 URL: https://issues.apache.org/jira/browse/HADOOP-12297
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki


Docker mode without a patch-dir option or with an absolute path seems not to 
work:

{code}
[sekikn@mobile hadoop]$ dev-support/test-patch.sh 
--basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker /tmp/test.patch

(snip)

Successfully built 37438de64e81
JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home does 
not exist. Dockermode: attempting to switch to another.
/testptch/launch-test-patch.sh: line 42: cd: /testptch/patchprocess/precommit/: 
No such file or directory
/testptch/launch-test-patch.sh: line 45: 
/testptch/patchprocess/precommit/test-patch.sh: No such file or directory
{code}

It succeeds if a relative directory is specified:

{code}
[sekikn@mobile hadoop]$ dev-support/test-patch.sh 
--basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker --patch-dir=foo 
/tmp/test.patch

(snip)

Successfully built 6ea5001987a7
JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home does 
not exist. Dockermode: attempting to switch to another.




Bootstrapping test harness



(snip)

+1 overall

(snip)



  Finished build.


{code}

If my setup or usage is wrong, please close this JIRA as invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12296) when setnetgrent returns 0 in linux, exception should be thrown

2015-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652719#comment-14652719
 ] 

Hadoop QA commented on HADOOP-12296:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |   5m 22s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 39s | There were no new javac warning 
messages. |
| {color:green}+1{color} | release audit |   0m 18s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 21s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 30s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |  22m 12s | Tests passed in 
hadoop-common. |
| | |  37m 25s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12748541/HADOOP-12296.2.patch |
| Optional Tests | javac unit |
| git revision | trunk / 469cfcd |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7405/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7405/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7405/console |


This message was automatically generated.

 when setnetgrent returns 0 in linux, exception should be thrown
 ---

 Key: HADOOP-12296
 URL: https://issues.apache.org/jira/browse/HADOOP-12296
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HADOOP-12296.2.patch, HADOOP-12296.patch


 In Linux, setnetgrent returns 0 when something goes wrong, such as out of 
 memory, an unknown group, or an unavailable service. So errorMessage should be 
 set and an exception should be thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12282) Connection thread's name should be updated after address changing is detected

2015-08-03 Thread zhouyingchao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652938#comment-14652938
 ] 

zhouyingchao commented on HADOOP-12282:
---

Any comments?

 Connection thread's name should be updated after address changing is detected
 -

 Key: HADOOP-12282
 URL: https://issues.apache.org/jira/browse/HADOOP-12282
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: zhouyingchao
Assignee: zhouyingchao
 Attachments: HADOOP-12282-001.patch


 In a Hadoop HDFS cluster, I changed the standby NameNode's IP address (the 
 hostname was not changed and the routing tables were updated). After the 
 change, the cluster ran as normal.
 However, I found that the DataNode's IPC debug messages still print the 
 original IP address. Looking into the implementation, it turns out that the 
 original address is used as the connection thread's name. I think the thread's 
 name should be updated when an address change is detected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12297) test-patch docker mode fails if patch-dir is not specified or specified as an absolute path

2015-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652766#comment-14652766
 ] 

Allen Wittenauer commented on HADOOP-12297:
---

There's a very good chance I broke it at some point. :(

 test-patch docker mode fails if patch-dir is not specified or specified as an 
 absolute path
 ---

 Key: HADOOP-12297
 URL: https://issues.apache.org/jira/browse/HADOOP-12297
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki

 Docker mode without a patch-dir option or with an absolute path seems not to 
 work:
 {code}
 [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
 --basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker /tmp/test.patch
 (snip)
 Successfully built 37438de64e81
 JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home 
 does not exist. Dockermode: attempting to switch to another.
 /testptch/launch-test-patch.sh: line 42: cd: 
 /testptch/patchprocess/precommit/: No such file or directory
 /testptch/launch-test-patch.sh: line 45: 
 /testptch/patchprocess/precommit/test-patch.sh: No such file or directory
 {code}
 It succeeds if a relative directory is specified:
 {code}
 [sekikn@mobile hadoop]$ dev-support/test-patch.sh 
 --basedir=/Users/sekikn/dev/hadoop --project=hadoop --docker --patch-dir=foo 
 /tmp/test.patch
 (snip)
 Successfully built 6ea5001987a7
 JAVA_HOME: /Library/Java/JavaVirtualMachines/jdk1.7.0_80.jdk/Contents/Home 
 does not exist. Dockermode: attempting to switch to another.
 
 
 Bootstrapping test harness
 
 
 (snip)
 +1 overall
 (snip)
 
 
   Finished build.
 
 
 {code}
 If my setup or usage is wrong, please close this JIRA as invalid.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12270) builtin personality is too hadoop specific

2015-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652986#comment-14652986
 ] 

Allen Wittenauer commented on HADOOP-12270:
---

I'm going to try and fix some of this under HADOOP-12248 since I'm already in 
the neighborhood, but I doubt it will be complete.  So leaving this open for 
now.

 builtin personality is too hadoop specific
 --

 Key: HADOOP-12270
 URL: https://issues.apache.org/jira/browse/HADOOP-12270
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Allen Wittenauer

 As I work on TAP support and getting Hadoop to use it for shell unit tests, 
 I'm finding that the builtin personality is way too Hadoop (and maybe Apache) 
 specific.
 For example, if test-patch sees a .c file touched, why is it adding a javac 
 test?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-3182) JobClient creates submitJobDir with SYSTEM_DIR_PERMISSION ( rwx-wx-wx)

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-3182:
-
Release Note: Changed "job-dir" from 733 to 777, so that a shared 
JobTracker can be started by a non-superuser account.  (was: Changed 
\"job-dir\" from 733 to 777, so that a shared JobTracker can be started by a 
non-superuser account.)

 JobClient creates submitJobDir with SYSTEM_DIR_PERMISSION ( rwx-wx-wx)
 --

 Key: HADOOP-3182
 URL: https://issues.apache.org/jira/browse/HADOOP-3182
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.16.2
Reporter: Lohit Vijayarenu
Assignee: Tsz Wo Nicholas Sze
Priority: Blocker
 Fix For: 0.16.3

 Attachments: 3182_20080408.patch, 3182_20080408.patch, 
 3182_20080408_0.16.patch, HADOOP-3182_2_20080410.patch, 
 HADOOP-3182_2_20080410_0.16.patch, patch-3182.txt


 JobClient creates submitJobDir with SYSTEM_DIR_PERMISSION ( rwx-wx-wx ), which 
 causes problems when sharing a cluster.
 Consider the case where userA starts the jobtracker/tasktrackers and userB 
 submits a job to this cluster. When userB creates submitJobDir, it is created 
 with rwx-wx-wx, which cannot be read by the tasktracker started by userA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-1985) Abstract node to switch mapping into a topology service class used by namenode and jobtracker

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-1985:
-
Release Note: 
This issue introduces rack awareness for map tasks. It also moves the rack 
resolution logic to the central servers - NameNode & JobTracker. The 
administrator can specify a loadable class given by 
topology.node.switch.mapping.impl to specify the class implementing the logic 
for rack resolution. The class must implement a method - resolve(List\<String\> 
names), where names is the list of DNS-names/IP-addresses that we want 
resolved. The return value is a list of resolved network paths of the form 
/foo/rack, where rack is the rackID where the node belongs to and foo is the 
switch where multiple racks are connected, and so on. The default 
implementation of this class is packaged along with hadoop and points to 
org.apache.hadoop.net.ScriptBasedMapping and this class loads a script that can 
be used for rack resolution. The script location is configurable. It is 
specified by topology.script.file.name and defaults to an empty script. In the 
case where the script name is empty, /default-rack is returned for all 
dns-names/IP-addresses. The loadable topology.node.switch.mapping.impl provides 
administrators flexibility to define how their site's node resolution should 
happen.
For mapred, one can also specify the level of the cache w.r.t the number of 
levels in the resolved network path - defaults to two. This means that the 
JobTracker will cache tasks at the host level and at the rack level. 
Known issue: the task caching will not work with levels greater than 2 (beyond 
racks). This bug is tracked in HADOOP-3296.

  was:
This issue introduces rack awareness for map tasks. It also moves the rack 
resolution logic to the central servers - NameNode & JobTracker. The 
administrator can specify a loadable class given by 
topology.node.switch.mapping.impl to specify the class implementing the logic 
for rack resolution. The class must implement a method - resolve(List<String> 
names), where names is the list of DNS-names/IP-addresses that we want 
resolved. The return value is a list of resolved network paths of the form 
/foo/rack, where rack is the rackID where the node belongs to and foo is the 
switch where multiple racks are connected, and so on. The default 
implementation of this class is packaged along with hadoop and points to 
org.apache.hadoop.net.ScriptBasedMapping and this class loads a script that can 
be used for rack resolution. The script location is configurable. It is 
specified by topology.script.file.name and defaults to an empty script. In the 
case where the script name is empty, /default-rack is returned for all 
dns-names/IP-addresses. The loadable topology.node.switch.mapping.impl provides 
administrators flexibility to define how their site's node resolution should 
happen.
For mapred, one can also specify the level of the cache w.r.t the number of 
levels in the resolved network path - defaults to two. This means that the 
JobTracker will cache tasks at the host level and at the rack level. 
Known issue: the task caching will not work with levels greater than 2 (beyond 
racks). This bug is tracked in HADOOP-3296.
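
For illustration of the script-based mapping described in the release note 
above, a minimal sketch of a topology script (the mapping data is invented, and 
the exact argument/output contract of ScriptBasedMapping is assumed here: host 
names or IPs passed as arguments, one rack path printed per host):

{code}
#!/usr/bin/env bash
# Hypothetical script for topology.script.file.name: maps each host/IP argument
# to a rack path, falling back to /default-rack for unknown hosts.
declare -A RACK_MAP=(
  [10.0.1.11]=/dc1/rack1
  [10.0.1.12]=/dc1/rack1
  [10.0.2.21]=/dc1/rack2
)
for host in "$@"; do
  echo "${RACK_MAP[${host}]:-/default-rack}"
done
{code}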


 Abstract node to switch mapping into a topology service class used by 
 namenode and jobtracker
 -

 Key: HADOOP-1985
 URL: https://issues.apache.org/jira/browse/HADOOP-1985
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: eric baldeschwieler
Assignee: Devaraj Das
 Fix For: 0.17.0

 Attachments: 1985.new.patch, 1985.v1.patch, 1985.v10.patch, 
 1985.v11.patch, 1985.v19.patch, 1985.v2.patch, 1985.v20.patch, 
 1985.v23.patch, 1985.v24.patch, 1985.v25.patch, 1985.v3.patch, 1985.v4.patch, 
 1985.v5.patch, 1985.v6.patch, 1985.v9.patch, jobinprogress.patch


 In order to implement switch locality in MapReduce, we need to have switch 
 location in both the namenode and job tracker.  Currently the namenode asks 
 the data nodes for this info and they run a local script to answer this 
 question.  In our environment and others that I know of there is no reason to 
 push this to each node.  It is easier to maintain a centralized script that 
 maps node DNS names to switch strings.
 I propose that we build a new class that caches known DNS name to switch 
 mappings and invokes a loadable class or a configurable system call to 
 resolve unknown DNS to switch mappings.  We can then add this to the namenode 
 to support the current block to switch mapping needs and simplify the data 
 nodes.  We can also add this same callout to the job tracker and then 
 implement rack locality logic there without needing to change the filesystem 
 API or the split planning API.
 Not 

[jira] [Updated] (HADOOP-1985) Abstract node to switch mapping into a topology service class used by namenode and jobtracker

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-1985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-1985:
-
Release Note: 
This issue introduces rack awareness for map tasks. It also moves the rack 
resolution logic to the central servers - NameNode & JobTracker. The 
administrator can specify a loadable class given by 
topology.node.switch.mapping.impl to specify the class implementing the logic 
for rack resolution. The class must implement a method - resolve(List<String> 
names), where names is the list of DNS-names/IP-addresses that we want 
resolved. The return value is a list of resolved network paths of the form 
/foo/rack, where rack is the rackID where the node belongs to and foo is the 
switch where multiple racks are connected, and so on. The default 
implementation of this class is packaged along with hadoop and points to 
org.apache.hadoop.net.ScriptBasedMapping and this class loads a script that can 
be used for rack resolution. The script location is configurable. It is 
specified by topology.script.file.name and defaults to an empty script. In the 
case where the script name is empty, /default-rack is returned for all 
dns-names/IP-addresses. The loadable topology.node.switch.mapping.impl provides 
administrators flexibility to define how their site's node resolution should 
happen.
For mapred, one can also specify the level of the cache w.r.t the number of 
levels in the resolved network path - defaults to two. This means that the 
JobTracker will cache tasks at the host level and at the rack level. 
Known issue: the task caching will not work with levels greater than 2 (beyond 
racks). This bug is tracked in HADOOP-3296.

  was:
This issue introduces rack awareness for map tasks. It also moves the rack 
resolution logic to the central servers - NameNode & JobTracker. The 
administrator can specify a loadable class given by 
topology.node.switch.mapping.impl to specify the class implementing the logic 
for rack resolution. The class must implement a method - resolve(List\<String\> 
names), where names is the list of DNS-names/IP-addresses that we want 
resolved. The return value is a list of resolved network paths of the form 
/foo/rack, where rack is the rackID where the node belongs to and foo is the 
switch where multiple racks are connected, and so on. The default 
implementation of this class is packaged along with hadoop and points to 
org.apache.hadoop.net.ScriptBasedMapping and this class loads a script that can 
be used for rack resolution. The script location is configurable. It is 
specified by topology.script.file.name and defaults to an empty script. In the 
case where the script name is empty, /default-rack is returned for all 
dns-names/IP-addresses. The loadable topology.node.switch.mapping.impl provides 
administrators flexibility to define how their site's node resolution should 
happen.
For mapred, one can also specify the level of the cache w.r.t the number of 
levels in the resolved network path - defaults to two. This means that the 
JobTracker will cache tasks at the host level and at the rack level. 
Known issue: the task caching will not work with levels greater than 2 (beyond 
racks). This bug is tracked in HADOOP-3296.


 Abstract node to switch mapping into a topology service class used by 
 namenode and jobtracker
 -

 Key: HADOOP-1985
 URL: https://issues.apache.org/jira/browse/HADOOP-1985
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: eric baldeschwieler
Assignee: Devaraj Das
 Fix For: 0.17.0

 Attachments: 1985.new.patch, 1985.v1.patch, 1985.v10.patch, 
 1985.v11.patch, 1985.v19.patch, 1985.v2.patch, 1985.v20.patch, 
 1985.v23.patch, 1985.v24.patch, 1985.v25.patch, 1985.v3.patch, 1985.v4.patch, 
 1985.v5.patch, 1985.v6.patch, 1985.v9.patch, jobinprogress.patch


 In order to implement switch locality in MapReduce, we need to have switch 
 location in both the namenode and job tracker.  Currently the namenode asks 
 the data nodes for this info and they run a local script to answer this 
 question.  In our environment and others that I know of there is no reason to 
 push this to each node.  It is easier to maintain a centralized script that 
 maps node DNS names to switch strings.
 I propose that we build a new class that caches known DNS name to switch 
 mappings and invokes a loadable class or a configurable system call to 
 resolve unknown DNS to switch mappings.  We can then add this to the namenode 
 to support the current block to switch mapping needs and simplify the data 
 nodes.  We can also add this same callout to the job tracker and then 
 implement rack locality logic there without needing to change the filesystem 
 API or the split planning API.
 Not 

[jira] [Updated] (HADOOP-2818) Remove deprecated Counters.getDisplayName(), getCounterNames(), getCounter(String counterName)

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-2818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-2818:
-
Release Note: 
The deprecated methods public String 
org.apache.hadoop.mapred.Counters.getDisplayName(String counter) and 
public synchronized Collection<String> 
org.apache.hadoop.mapred.Counters.getCounterNames() are removed.
The deprecated method public synchronized long 
org.apache.hadoop.mapred.Counters.getCounter(String counterName) is 
undeprecated.


  was:
The deprecated methods public String 
org.apache.hadoop.mapred.Counters.getDisplayName(String counter) and 
public synchronized Collection\<String\> 
org.apache.hadoop.mapred.Counters.getCounterNames() are removed.
The deprecated method public synchronized long 
org.apache.hadoop.mapred.Counters.getCounter(String counterName) is 
undeprecated.



 Remove deprecated Counters.getDisplayName(),  getCounterNames(),   
 getCounter(String counterName) 
 --

 Key: HADOOP-2818
 URL: https://issues.apache.org/jira/browse/HADOOP-2818
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.16.0
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Fix For: 0.17.0

 Attachments: patch-2818.txt


 Counters.getDisplayName(), getCounterNames(), and getCounter(String 
 counterName) need to be removed, as they were deprecated in 0.16.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-2410) Make EC2 cluster nodes more independent of each other

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-2410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-2410:
-
Release Note: The command hadoop-ec2 run has been replaced by hadoop-ec2 
launch-cluster <group> <number of instances>, and hadoop-ec2 start-hadoop 
has been removed since Hadoop is started on instance start up. See 
http://wiki.apache.org/hadoop/AmazonEC2 for details.  (was: The command 
hadoop-ec2 run has been replaced by hadoop-ec2 launch-cluster \<group\> 
\<number of instances\>, and hadoop-ec2 start-hadoop has been removed since 
Hadoop is started on instance start up. See 
http://wiki.apache.org/hadoop/AmazonEC2 for details.)

 Make EC2 cluster nodes more independent of each other
 -

 Key: HADOOP-2410
 URL: https://issues.apache.org/jira/browse/HADOOP-2410
 Project: Hadoop Common
  Issue Type: Improvement
  Components: contrib/cloud
Affects Versions: 0.16.1
Reporter: Tom White
Assignee: Chris K Wensel
 Fix For: 0.17.0

 Attachments: concurrent-clusters-2.patch, 
 concurrent-clusters-3.patch, concurrent-clusters.patch, ec2.tgz


 The cluster start up scripts currently wait for each node to start up before 
 appointing a master (to run the namenode and jobtracker on), and copying 
 private keys to all the nodes, and writing the private IP address of the 
 master to the hadoop-site.xml file (which is then copied to the slaves via 
 rsync). Only once this is all done is hadoop started on the cluster (from the 
 master). This can fail if any of the nodes fails to come up, which can happen 
 as EC2 doesn't guarantee that you get a cluster of the size you ask for (I've 
 seen this happen).
 The process would be more robust if each node was told the address of the 
 master as user metadata and then started its own daemons. This is complicated 
 by the fact that the public DNS alias of the master resolves to a public IP 
 address so cannot be used by EC2 nodes (see 
 http://docs.amazonwebservices.com/AWSEC2/2007-08-29/DeveloperGuide/instance-addressing.html).
  Instead we need to use a trick 
 (http://developer.amazonwebservices.com/connect/message.jspa?messageID=71126#71126)
  to find the private IP, and what's more we need to attempt to resolve the 
 private IP in a loop until it is available since the DNS will only be set up 
 after the master has started.
 This change will also mean the private key doesn't need to be copied to each 
 node, which can be slow and has dubious security. Configuration can be 
 handled using the mechanism described in HADOOP-2409.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-3182) JobClient creates submitJobDir with SYSTEM_DIR_PERMISSION ( rwx-wx-wx)

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-3182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-3182:
-
Release Note: Changed \"job-dir\" from 733 to 777, so that a shared 
JobTracker can be started by a non-superuser account.  (was: Changed "job-dir" 
from 733 to 777, so that a shared JobTracker can be started by a non-superuser 
account.)

 JobClient creates submitJobDir with SYSTEM_DIR_PERMISSION ( rwx-wx-wx)
 --

 Key: HADOOP-3182
 URL: https://issues.apache.org/jira/browse/HADOOP-3182
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.16.2
Reporter: Lohit Vijayarenu
Assignee: Tsz Wo Nicholas Sze
Priority: Blocker
 Fix For: 0.16.3

 Attachments: 3182_20080408.patch, 3182_20080408.patch, 
 3182_20080408_0.16.patch, HADOOP-3182_2_20080410.patch, 
 HADOOP-3182_2_20080410_0.16.patch, patch-3182.txt


 JobClient creates submitJobDir with SYSTEM_DIR_PERMISSION ( rwx-wx-wx ), which 
 causes problems when sharing a cluster.
 Consider the case where userA starts the jobtracker/tasktrackers and userB 
 submits a job to this cluster. When userB creates submitJobDir, it is created 
 with rwx-wx-wx, which cannot be read by the tasktracker started by userA.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-2818) Remove deprecated Counters.getDisplayName(), getCounterNames(), getCounter(String counterName)

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-2818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-2818:
-
Release Note: 
The deprecated methods public String 
org.apache.hadoop.mapred.Counters.getDisplayName(String counter) and 
public synchronized Collection\<String\> 
org.apache.hadoop.mapred.Counters.getCounterNames() are removed.
The deprecated method public synchronized long 
org.apache.hadoop.mapred.Counters.getCounter(String counterName) is 
undeprecated.


  was:
The deprecated methods public String 
org.apache.hadoop.mapred.Counters.getDisplayName(String counter) and 
public synchronized Collection<String> 
org.apache.hadoop.mapred.Counters.getCounterNames() are removed.
The deprecated method public synchronized long 
org.apache.hadoop.mapred.Counters.getCounter(String counterName) is 
undeprecated.



 Remove deprecated Counters.getDisplayName(),  getCounterNames(),   
 getCounter(String counterName) 
 --

 Key: HADOOP-2818
 URL: https://issues.apache.org/jira/browse/HADOOP-2818
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.16.0
Reporter: Amareshwari Sriramadasu
Assignee: Amareshwari Sriramadasu
 Fix For: 0.17.0

 Attachments: patch-2818.txt


 Counters.getDisplayName(), getCounterNames(), and getCounter(String 
 counterName) need to be removed, as they were deprecated in 0.16.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-7824) NativeIO.java flags and identifiers must be set correctly for each platform, not hardcoded to their Linux values

2015-08-03 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653086#comment-14653086
 ] 

Vinayakumar B commented on HADOOP-7824:
---

After this, I started facing compile errors in NativeIO.c on Windows, similar 
to the one below:
{noformat}  src\org\apache\hadoop\io\nativeio\NativeIO.c(145): error C2065: 
'O_RDONLY' : undeclared identifier [C:\work\hadoop\mai
n\hadoop-common-project\hadoop-common\src\main\native\native.vcxproj]{noformat}

I am not a JNI expert, but from the code change I understood that the JNI code 
tries to set the value of the constant in the Java class, and the constant's 
name must be passed to {{SET_INT_OR_RETURN}} as a string. Enclosing all these 
constant names in quotes solved the compilation error for me, following the 
same pattern as {{setStaticBoolean(env, clazz, "fadvisePossible", JNI_TRUE);}}

Could someone with JNI expertise please confirm whether these changes are 
correct?

If so, I will raise a Jira and provide the patch to fix the compilation.

 NativeIO.java flags and identifiers must be set correctly for each platform, 
 not hardcoded to their Linux values
 

 Key: HADOOP-7824
 URL: https://issues.apache.org/jira/browse/HADOOP-7824
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: 0.20.204.0, 0.20.205.0, 1.0.3, 0.23.0, 2.0.0-alpha, 3.0.0
 Environment: Mac OS X, Linux, Solaris, Windows, ... 
Reporter: Dmytro Shteflyuk
Assignee: Martin Walsh
  Labels: hadoop
 Fix For: 2.8.0

 Attachments: HADOOP-7824.001.patch, HADOOP-7824.002.patch, 
 HADOOP-7824.003.patch, HADOOP-7824.004.patch, HADOOP-7824.patch, 
 HADOOP-7824.patch, hadoop-7824.txt


 NativeIO.java flags and identifiers must be set correctly for each platform, 
 not hardcoded to their Linux values.  Constants like O_CREAT, O_EXCL, etc. 
 have different values on OS X and many other operating systems.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12298) releasedocmaker isn't translating greater than/less than signs in releasenotes

2015-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653039#comment-14653039
 ] 

Allen Wittenauer commented on HADOOP-12298:
---

Or maybe a flag to toggle it.  

From this point onward, < and > will be valid meta chars.  I kind of like 
that idea.

 releasedocmaker isn't translating greater than/less than signs in releasenotes
 --

 Key: HADOOP-12298
 URL: https://issues.apache.org/jira/browse/HADOOP-12298
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Allen Wittenauer
Priority: Blocker

 Somewhere along the way, releasedocmaker stopped translating greater than and 
 less than signs in release notes.  mvn site blows up when it comes across a 
 broken one.  github drops the word entirely when rendering.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12247) Convert 'unit' to 'junit'

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12247:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

Thanks for the review!

Committed.

 Convert 'unit' to 'junit'
 -

 Key: HADOOP-12247
 URL: https://issues.apache.org/jira/browse/HADOOP-12247
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: HADOOP-12111

 Attachments: HADOOP-12247.HADOOP-12111.00.patch, 
 HADOOP-12247.HADOOP-12111.01.patch, HADOOP-12247.HADOOP-12111.02.patch


 In order to support other unit test systems, we should convert 'unit' to be 
 specifically 'junit'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11731) Rework the changelog and releasenotes

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11731:
--
Fix Version/s: (was: 3.0.0)

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus, they are 
 ugly and, in the case of the release notes, it is very hard to pick out what 
 is important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12298) releasedocmaker isn't translating greater than/less than signs in releasenotes

2015-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12298?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653025#comment-14653025
 ] 

Allen Wittenauer commented on HADOOP-12298:
---

Or, maybe we just need to say we treat < and > as valid chars in markdown 
because you may want to translate that to HTML intentionally.  It's in the 
notableclean code (i.e., changes file) but not in the tableclean (i.e., 
releasenotes) so clearly this must have been hit at some point.  But I'm now 
doubting whether that was the correct decision.  At least, attempting to build 
Hadoop's release notes from day 1 is hitting all kinds of problems mainly due 
to < and >.
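
For reference, the kind of escaping under discussion, shown as a quick shell 
illustration rather than releasedocmaker's actual Python code:

{code}
# escape &, <, > for safe embedding in HTML/markdown; & must be replaced first
printf 'public Collection<String> getCounterNames()\n' \
  | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g'
# prints: public Collection&lt;String&gt; getCounterNames()
{code}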

 releasedocmaker isn't translating greater than/less than signs in releasenotes
 --

 Key: HADOOP-12298
 URL: https://issues.apache.org/jira/browse/HADOOP-12298
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Reporter: Allen Wittenauer
Priority: Blocker

 Somewhere along the way, releasedocmaker stopped translating greater than and 
 less than signs in release notes.  mvn site blows up when it comes across a 
 broken one.  github drops the word entirely when rendering.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12298) releasedocmaker isn't translating greater than/less than signs in releasenotes

2015-08-03 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12298:
-

 Summary: releasedocmaker isn't translating greater than/less than 
signs in releasenotes
 Key: HADOOP-12298
 URL: https://issues.apache.org/jira/browse/HADOOP-12298
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer
Priority: Blocker


Somewhere along the way, releasedocmaker stopped translating greater than and 
less than signs in release notes.  mvn site blows up when it comes across a 
broken one.  github drops the word entirely when rendering.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653055#comment-14653055
 ] 

Allen Wittenauer commented on HADOOP-11731:
---

HADOOP-11791 has a patch to include the old versions in the appropriate place.  
When it is run again, --index should kick off and provide the necessary links.

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 2.7.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 2.8.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus, they are 
 ugly and, in the case of the release notes, it is very hard to pick out what 
 is important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-2410) Make EC2 cluster nodes more independent of each other

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-2410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-2410:
-
Release Note: The command hadoop-ec2 run has been replaced by hadoop-ec2 
launch-cluster \<group\> \<number of instances\>, and hadoop-ec2 
start-hadoop has been removed since Hadoop is started on instance start up. 
See http://wiki.apache.org/hadoop/AmazonEC2 for details.  (was: The command 
hadoop-ec2 run has been replaced by hadoop-ec2 launch-cluster <group> 
<number of instances>, and hadoop-ec2 start-hadoop has been removed since 
Hadoop is started on instance start up. See 
http://wiki.apache.org/hadoop/AmazonEC2 for details.)

 Make EC2 cluster nodes more independent of each other
 -

 Key: HADOOP-2410
 URL: https://issues.apache.org/jira/browse/HADOOP-2410
 Project: Hadoop Common
  Issue Type: Improvement
  Components: contrib/cloud
Affects Versions: 0.16.1
Reporter: Tom White
Assignee: Chris K Wensel
 Fix For: 0.17.0

 Attachments: concurrent-clusters-2.patch, 
 concurrent-clusters-3.patch, concurrent-clusters.patch, ec2.tgz


 The cluster start up scripts currently wait for each node to start up before 
 appointing a master (to run the namenode and jobtracker on), and copying 
 private keys to all the nodes, and writing the private IP address of the 
 master to the hadoop-site.xml file (which is then copied to the slaves via 
 rsync). Only once this is all done is hadoop started on the cluster (from the 
 master). This can fail if any of the nodes fails to come up, which can happen 
 as EC2 doesn't guarantee that you get a cluster of the size you ask for (I've 
 seen this happen).
 The process would be more robust if each node was told the address of the 
 master as user metadata and then started its own daemons. This is complicated 
 by the fact that the public DNS alias of the master resolves to a public IP 
 address so cannot be used by EC2 nodes (see 
 http://docs.amazonwebservices.com/AWSEC2/2007-08-29/DeveloperGuide/instance-addressing.html).
  Instead we need to use a trick 
 (http://developer.amazonwebservices.com/connect/message.jspa?messageID=71126#71126)
  to find the private IP, and what's more we need to attempt to resolve the 
 private IP in a loop until it is available since the DNS will only be set up 
 after the master has started.
 This change will also mean the private key doesn't need to be copied to each 
 node, which can be slow and has dubious security. Configuration can be 
 handled using the mechanism described in HADOOP-2409.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14653050#comment-14653050
 ] 

Allen Wittenauer commented on HADOOP-11791:
---

Woops, I guess that should be 02. 

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build, documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.001.patch, HADOOP-11791.01.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11791) Update src/site/markdown/releases to include old versions of Hadoop

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11791:
--
Attachment: HADOOP-11791.01.patch

-01:
* all of the old versions
* does not include 2.7.2, 2.6.1, 2.8.0, or 3.0.0
* some hand-munging in order to make the site build work around some missing 
features in releasedocmaker.

 Update src/site/markdown/releases to include old versions of Hadoop
 ---

 Key: HADOOP-11791
 URL: https://issues.apache.org/jira/browse/HADOOP-11791
 Project: Hadoop Common
  Issue Type: Task
  Components: build, documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11791.001.patch, HADOOP-11791.01.patch


 With the commit of HADOOP-11731, we need to include the new format of release 
 information in trunk.  This JIRA is about including those old versions in the 
 tree.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12296) when setnetgrent returns 0 in linux, exception should be thrown

2015-08-03 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated HADOOP-12296:
--
Attachment: HADOOP-12296.2.patch

[~aw] thanks for the review. Uploaded the .2 patch, which only covers Linux.

 when setnetgrent returns 0 in linux, exception should be thrown
 ---

 Key: HADOOP-12296
 URL: https://issues.apache.org/jira/browse/HADOOP-12296
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HADOOP-12296.2.patch, HADOOP-12296.patch


 In Linux, setnetgrent returns 0 when something goes wrong, such as out of 
 memory, an unknown group, or an unavailable service. So errorMessage should be 
 set and an exception should be thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12292) Make use of DeleteObjects optional

2015-08-03 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-12292:
---
Attachment: HADOOP-12292-001.patch

 Make use of DeleteObjects optional
 --

 Key: HADOOP-12292
 URL: https://issues.apache.org/jira/browse/HADOOP-12292
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor
Assignee: Thomas Demoor
 Attachments: HADOOP-12292-001.patch


 The {{DeleteObjectsRequest}} was not part of the initial S3 API, but was 
 added later. This patch allows one to configure s3a to replace each 
 multi-delete request with consecutive single deletes. Naturally, this setting 
 is disabled by default, since single deletes are slower.
 The main motivation is to let legacy S3-compatible object stores make the 
 transition from s3n (which does not use multi-delete) to s3a, fully enabling 
 the planned s3n deprecation.
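
 As a usage illustration only: the configuration key below is an assumption for 
 this sketch (the digest does not name the property the patch introduces), 
 shown via generic options rather than core-site.xml:

{code}
# Hypothetical: force consecutive single deletes for one command; the property
# name is assumed for illustration and may differ from the one in the patch.
hadoop fs -D fs.s3a.multiobjectdelete.enable=false -rm -r s3a://example-bucket/tmp/data
{code}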



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12292) Make use of DeleteObjects optional

2015-08-03 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-12292:
---
Status: Patch Available  (was: Open)

 Make use of DeleteObjects optional
 --

 Key: HADOOP-12292
 URL: https://issues.apache.org/jira/browse/HADOOP-12292
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor
Assignee: Thomas Demoor
 Attachments: HADOOP-12292-001.patch


 The {{DeleteObjectsRequest}} was not part of the initial S3 API, but was 
 added later. This patch allows one to configure s3a to replace each 
 multi-delete request with consecutive single deletes. Naturally, this setting 
 is disabled by default, since single deletes are slower.
 The main motivation is to let legacy S3-compatible object stores make the 
 transition from s3n (which does not use multi-delete) to s3a, fully enabling 
 the planned s3n deprecation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12292) Make use of DeleteObjects optional

2015-08-03 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14651506#comment-14651506
 ] 

Thomas Demoor commented on HADOOP-12292:


Oh, now I get what you ([~ste...@apache.org] and [~ndimiduk]) meant by TTL: 
object expiration through bucket lifecycles.

I'm not sure that approach is easy; there are several non-trivial issues. Some 
that immediately come to mind:
* You are limited to 1000 policy rules per bucket
* Rules are prefix based (see the sketch below):
{{PUT Object: mybucket/object}} - write a file
{{PUT Bucket lifecycle: mybucket, Expiration, 1 day, prefix=object}} - 
asynchronously delete this file
{{PUT Object: mybucket/object2}} - write another file
The next day BOTH files are automatically deleted (prefix!!!)
Also, all future writes that share the prefix will be deleted automatically 
after a day.
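
A minimal sketch of that prefix pitfall with the AWS SDK for Java (bucket and 
key names are placeholders, not anything from the patch):

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration;
import com.amazonaws.services.s3.model.BucketLifecycleConfiguration.Rule;

// Placeholder bucket/keys; the point is that a lifecycle rule matches by
// prefix, so it also catches keys written later that share the prefix.
class LifecyclePrefixSketch {
  static void demo(AmazonS3 s3) {
    String bucket = "mybucket";

    s3.putObject(bucket, "object", "data");

    Rule expireObject = new Rule()
        .withId("expire-object")
        .withPrefix("object")               // prefix match, not exact match
        .withExpirationInDays(1)
        .withStatus(BucketLifecycleConfiguration.ENABLED);
    s3.setBucketLifecycleConfiguration(bucket,
        new BucketLifecycleConfiguration().withRules(expireObject));

    s3.putObject(bucket, "object2", "more data");
    // After about a day S3 expires BOTH "object" and "object2", because
    // "object2" also starts with the prefix "object".
  }
}
{code}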



 Make use of DeleteObjects optional
 --

 Key: HADOOP-12292
 URL: https://issues.apache.org/jira/browse/HADOOP-12292
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor
Assignee: Thomas Demoor

 The {{DeleteObjectsRequest}} was not part of the initial S3 API, but was 
 added later. This patch allows one to configure s3a to replace each 
 multidelete request by consecutive single deletes. Evidently, this setting is 
 disabled by default as this causes slower deletes.
 The main motivation is to enable legacy S3-compatible object stores to make 
 the transition from s3n (which does not use multidelete) to s3a, fully 
 allowing the planned s3n deprecation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12292) Make use of DeleteObjects optional

2015-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14651573#comment-14651573
 ] 

Hadoop QA commented on HADOOP-12292:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  20m 22s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 39s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 26s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | site |   3m  1s | Site still builds. |
| {color:red}-1{color} | checkstyle |   1m 28s | The applied patch generated  2 
new checkstyle issues (total was 62, now 63). |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 35s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   2m 55s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 11s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | tools/hadoop tests |   0m 13s | Tests passed in 
hadoop-aws. |
| | |  70m 52s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12748402/HADOOP-12292-001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle site |
| git revision | trunk / 90b5104 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7398/artifact/patchprocess/diffcheckstylehadoop-aws.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7398/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-aws test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7398/artifact/patchprocess/testrun_hadoop-aws.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7398/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf904.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7398/console |


This message was automatically generated.

 Make use of DeleteObjects optional
 --

 Key: HADOOP-12292
 URL: https://issues.apache.org/jira/browse/HADOOP-12292
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Reporter: Thomas Demoor
Assignee: Thomas Demoor
 Attachments: HADOOP-12292-001.patch


 The {{DeleteObjectsRequest}} was not part of the initial S3 API, but was 
 added later. This patch allows one to configure s3a to replace each 
 multidelete request by consecutive single deletes. Evidently, this setting is 
 disabled by default as this causes slower deletes.
 The main motivation is to enable legacy S3-compatible object stores to make 
 the transition from s3n (which does not use multidelete) to s3a, fully 
 allowing the planned s3n deprecation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12280) Skip unit tests based on maven profile rather than NativeCodeLoader.isNativeCodeLoaded

2015-08-03 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12280:
--
Description: Some tests are skipped if native code is not loaded (i.e. 
NativeCodeLoader.isNativeCodeLoaded() returns false). Whether to skip a test 
should be determined by the Maven profile rather than by isNativeCodeLoaded(), 
because tests should fail if native libraries are misplaced due to an invalid 
configuration in the native profile.
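
A minimal sketch of the intent (assuming a {{require.native}} system property 
set by the native Maven profile; the property name is illustrative, this is not 
the actual patch): skip only when native code is genuinely optional, but fail 
loudly when the native profile was requested.

{code}
import static org.junit.Assert.assertTrue;
import static org.junit.Assume.assumeTrue;

import org.apache.hadoop.util.NativeCodeLoader;
import org.junit.Before;

public class NativeDependentTestSketch {
  @Before
  public void checkNativeCode() {
    if (Boolean.getBoolean("require.native")) {
      // the native profile was requested: misplaced native libraries should
      // make the test fail, not silently skip
      assertTrue("native code not loaded although the native profile is active",
          NativeCodeLoader.isNativeCodeLoaded());
    } else {
      // a plain build without the native profile may still skip
      assumeTrue(NativeCodeLoader.isNativeCodeLoaded());
    }
  }
}
{code}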

 Skip unit tests based on maven profile rather than 
 NativeCodeLoader.isNativeCodeLoaded
 --

 Key: HADOOP-12280
 URL: https://issues.apache.org/jira/browse/HADOOP-12280
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki

 Some tests are skipped if native code is not loaded (i.e. 
 NativeCodeLoader.isNativeCodeLoaded() returns false). Whether to skip a test 
 should be determined by the Maven profile rather than by isNativeCodeLoaded(), 
 because tests should fail if native libraries are misplaced due to an invalid 
 configuration in the native profile.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12280) Skip unit tests based on maven profile rather than NativeCodeLoader.isNativeCodeLoaded

2015-08-03 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12280:
--
Priority: Minor  (was: Major)

 Skip unit tests based on maven profile rather than 
 NativeCodeLoader.isNativeCodeLoaded
 --

 Key: HADOOP-12280
 URL: https://issues.apache.org/jira/browse/HADOOP-12280
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor

 Some tests are skipped if native code is not loaded (i.e. 
 NativeCodeLoader.isNativeCodeLoaded() returns false). Whether to skip a test 
 should be determined by the Maven profile rather than by isNativeCodeLoaded(), 
 because tests should fail if native libraries are misplaced due to an invalid 
 configuration in the native profile.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12280) Skip unit tests based on maven profile rather than NativeCodeLoader.isNativeCodeLoaded

2015-08-03 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12280:
--
Status: Patch Available  (was: Open)

 Skip unit tests based on maven profile rather than 
 NativeCodeLoader.isNativeCodeLoaded
 --

 Key: HADOOP-12280
 URL: https://issues.apache.org/jira/browse/HADOOP-12280
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-12280.001.patch


 Some tests are skipped if native code is not loaded (i.e. 
 NativeCodeLoader.isNativeCodeLoaded() returns false). Whether to skip a test 
 should be determined by the Maven profile rather than by isNativeCodeLoaded(), 
 because tests should fail if native libraries are misplaced due to an invalid 
 configuration in the native profile.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12280) Skip unit tests based on maven profile rather than NativeCodeLoader.isNativeCodeLoaded

2015-08-03 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12280?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-12280:
--
Attachment: HADOOP-12280.001.patch

 Skip unit tests based on maven profile rather than 
 NativeCodeLoader.isNativeCodeLoaded
 --

 Key: HADOOP-12280
 URL: https://issues.apache.org/jira/browse/HADOOP-12280
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-12280.001.patch


 Some tests are skipped if native code is not loaded (i.e. 
 NativeCodeLoader.isNativeCodeLoaded() returns false). Whether to skip a test 
 should be determined by the Maven profile rather than by isNativeCodeLoaded(), 
 because tests should fail if native libraries are misplaced due to an invalid 
 configuration in the native profile.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12287) add support for perlcritic

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12287:
--
Summary: add support for perlcritic  (was: add support for perl)

 add support for perlcritic
 --

 Key: HADOOP-12287
 URL: https://issues.apache.org/jira/browse/HADOOP-12287
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
 Attachments: HADOOP-12287.HADOOP-12111.00.patch, 
 HADOOP-12287.HADOOP-12111.01.patch


 To increase our language coverage, we should add Perl::Critic support or 
 maybe use Perl::Lint.  It might be faster to use -Mstrict -Mdiagnostics -cw 
 to at least get something basic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12121) smarter branch detection

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12121:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

Thanks for the review!

Committing.

 smarter branch detection
 

 Key: HADOOP-12121
 URL: https://issues.apache.org/jira/browse/HADOOP-12121
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: HADOOP-12111

 Attachments: HADOOP-12121.HADOOP-12111.01.patch, 
 HADOOP-12121.HADOOP-12111.02.patch, HADOOP-12121.HADOOP-12111.patch


 We should make branch detection smarter so that it works on micro versions.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12247) Convert 'unit' to 'junit'

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12247:
--
Priority: Blocker  (was: Major)

 Convert 'unit' to 'junit'
 -

 Key: HADOOP-12247
 URL: https://issues.apache.org/jira/browse/HADOOP-12247
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12247.HADOOP-12111.00.patch, 
 HADOOP-12247.HADOOP-12111.01.patch


 In order to support other unit test systems, we should convert 'unit' to be 
 specifically 'junit'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12248) Add native support for TAP

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12248:
--
Attachment: HADOOP-12248.HADOOP-12111.00.patch

-00:
* initial TAP implementation.  (This includes the patch in HADOOP-12247.)
* give hadoop a custom file_tests
* modify built-in file_tests to be less hadoop-specific
* modify the necessary bits for hadoop's bash unit tests to trigger 
appropriately


 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12248.HADOOP-12111.00.patch


 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12284) UserGroupInformation doAs can throw misleading exception

2015-08-03 Thread Aaron Dossett (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Dossett updated HADOOP-12284:
---
Attachment: HADOOP-12284.example

 UserGroupInformation doAs can throw misleading exception
 

 Key: HADOOP-12284
 URL: https://issues.apache.org/jira/browse/HADOOP-12284
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Aaron Dossett
Assignee: Aaron Dossett
Priority: Trivial
 Attachments: HADOOP-12284.example, HADOOP-12284.patch


 If doAs() catches a PrivilegedActionException, it extracts the underlying 
 cause through getCause and then re-throws an exception based on the class of 
 the cause.  If getCause returns null, this is how it gets re-thrown:
 else {
 throw new UndeclaredThrowableException(cause);
   }
 If cause == null that seems misleading. I have seen actual instances where 
 cause is null, so this isn't just a theoretical concern.
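
One possible way to make that failure less opaque (a sketch of the rethrow 
branch only, not the attached patch; the surrounding doAs() logic is omitted) 
is to wrap the PrivilegedActionException itself when its cause is null, so the 
resulting exception still carries a useful stack trace:

{code}
import java.io.IOException;
import java.lang.reflect.UndeclaredThrowableException;
import java.security.PrivilegedActionException;

// Sketch only; not the committed HADOOP-12284 change.
class DoAsRethrowSketch {
  static void rethrow(PrivilegedActionException pae) throws IOException {
    Throwable cause = pae.getCause();
    if (cause instanceof IOException) {
      throw (IOException) cause;
    } else if (cause instanceof RuntimeException) {
      throw (RuntimeException) cause;
    } else if (cause instanceof Error) {
      throw (Error) cause;
    } else if (cause != null) {
      throw new UndeclaredThrowableException(cause);
    } else {
      // cause == null: wrapping null produces a misleading, empty exception,
      // so wrap the PrivilegedActionException itself instead
      throw new UndeclaredThrowableException(pae,
          "PrivilegedActionException with no cause");
    }
  }
}
{code}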



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12284) UserGroupInformation doAs can throw misleading exception

2015-08-03 Thread Aaron Dossett (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Dossett updated HADOOP-12284:
---
Description: 
If doAs() catches a PrivilegedActionException it extracts the underlying cause 
through getCause and then re-throws an exception based on the class of the 
Cause.  If getCause returns null, this is how it gets re-thrown:

else {
throw new UndeclaredThrowableException(cause);
  }

If cause == null that seems misleading. I have seen actual instances where 
cause is null, so this isn't just a theoretical concern.

  was:
If doAs() catches a PrivilegedActionException it extracts the underlying cause 
through getCause and then rethrows an exception based on the class of the 
Cause.  If getCause returns null this executes, this is how it rethrown:

else {
throw new UndeclaredThrowableException(cause);
  }

If cause == null that seems misleading. I have seen actual instances where 
cause is null, so this isn't just a theoretical concern.


 UserGroupInformation doAs can throw misleading exception
 

 Key: HADOOP-12284
 URL: https://issues.apache.org/jira/browse/HADOOP-12284
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Aaron Dossett
Assignee: Aaron Dossett
Priority: Trivial
 Attachments: HADOOP-12284.example, HADOOP-12284.patch


 If doAs() catches a PrivilegedActionException it extracts the underlying 
 cause through getCause and then re-throws an exception based on the class of 
 the Cause.  If getCause returns null, this is how it gets re-thrown:
 else {
 throw new UndeclaredThrowableException(cause);
   }
 If cause == null that seems misleading. I have seen actual instances where 
 cause is null, so this isn't just a theoretical concern.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12129) rework test-patch bug system support

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12129:
--
Priority: Blocker  (was: Major)

 rework test-patch bug system support
 

 Key: HADOOP-12129
 URL: https://issues.apache.org/jira/browse/HADOOP-12129
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Priority: Blocker

 WARNING: this is a fairly big project.
 See first comment for a brain dump on the issues.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12284) UserGroupInformation doAs can throw misleading exception

2015-08-03 Thread Aaron Dossett (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652269#comment-14652269
 ] 

Aaron Dossett commented on HADOOP-12284:


Attached is a real example.  This happened when trying to write to Hive from 
Storm with the metastore down.  See:  HADOOP-12284.example

 UserGroupInformation doAs can throw misleading exception
 

 Key: HADOOP-12284
 URL: https://issues.apache.org/jira/browse/HADOOP-12284
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Aaron Dossett
Assignee: Aaron Dossett
Priority: Trivial
 Attachments: HADOOP-12284.patch


 If doAs() catches a PrivilegedActionException, it extracts the underlying 
 cause through getCause and then re-throws an exception based on the class of 
 the cause.  If getCause returns null, this is how it gets re-thrown:
 else {
 throw new UndeclaredThrowableException(cause);
   }
 If cause == null that seems misleading. I have seen actual instances where 
 cause is null, so this isn't just a theoretical concern.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12256) add support for ruby-lint

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12256:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committing.

Thanks!

 add support for ruby-lint
 -

 Key: HADOOP-12256
 URL: https://issues.apache.org/jira/browse/HADOOP-12256
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
 Fix For: HADOOP-12111

 Attachments: HADOOP-12256.HADOOP-12111.00.patch, 
 HADOOP-12256.HADOOP-12111.01.patch


 We should add support for ruby-lint.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652099#comment-14652099
 ] 

Hadoop QA commented on HADOOP-12295:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 59s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 43s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 44s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   2m 26s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 21s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   4m 21s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  22m 29s | Tests passed in 
hadoop-common. |
| {color:green}+1{color} | hdfs tests | 162m 23s | Tests passed in hadoop-hdfs. 
|
| | | 230m 26s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12748448/HADOOP-12295.001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 90b5104 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7400/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-hdfs test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7400/artifact/patchprocess/testrun_hadoop-hdfs.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7400/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7400/console |


This message was automatically generated.

 Improve NetworkTopology#InnerNode#remove logic
 --

 Key: HADOOP-12295
 URL: https://issues.apache.org/jira/browse/HADOOP-12295
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-12295.001.patch


 In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
 the parent node instead of looping over the {{children}} list. This is more 
 efficient, since in most cases deleting the parent node does not happen.
 Another nit in current code is:
 {code}
   String parent = n.getNetworkLocation();
   String currentPath = getPath(this);
 {code}
 can be in closure of {{\!isAncestor\(n\)}}
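
For illustration, a toy version of both lookups (standalone code, not the 
actual {{NetworkTopology}} classes): keeping a name-to-node map next to the 
children list turns the parent lookup into a constant-time {{get}} instead of 
a scan.

{code}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy structure for illustration; the field names mirror the ones the issue
// mentions (children, childrenMap) but this is not Hadoop code.
class ToyInnerNode {
  private final String name;
  private final List<ToyInnerNode> children = new ArrayList<>();
  private final Map<String, ToyInnerNode> childrenMap = new HashMap<>();

  ToyInnerNode(String name) {
    this.name = name;
  }

  void add(ToyInnerNode child) {
    children.add(child);
    childrenMap.put(child.name, child);
  }

  // the pattern the patch removes: O(n) scan over the children list
  ToyInnerNode findByScan(String childName) {
    for (ToyInnerNode c : children) {
      if (c.name.equals(childName)) {
        return c;
      }
    }
    return null;
  }

  // the pattern the patch prefers: O(1) lookup in childrenMap
  ToyInnerNode findByMap(String childName) {
    return childrenMap.get(childName);
  }
}
{code}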



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12287) add support for perlcritic

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12287:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committing.

changing summary to specifically say perlcritic.

Thanks!

 add support for perlcritic
 --

 Key: HADOOP-12287
 URL: https://issues.apache.org/jira/browse/HADOOP-12287
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Kengo Seki
 Fix For: HADOOP-12111

 Attachments: HADOOP-12287.HADOOP-12111.00.patch, 
 HADOOP-12287.HADOOP-12111.01.patch


 To increase our language coverage, we should add Perl::Critic support or 
 maybe use Perl::Lint.  It might be faster to use -Mstrict -Mdiagnostics -cw 
 to at least get something basic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12286) test-patch pylint plugin should support indent-string option

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12286:
--
Affects Version/s: HADOOP-12111

 test-patch pylint plugin should support indent-string option
 

 Key: HADOOP-12286
 URL: https://issues.apache.org/jira/browse/HADOOP-12286
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Assignee: Kengo Seki
 Fix For: HADOOP-12111

 Attachments: HADOOP-12286.HADOOP-12111.00.patch


 By default, pylint uses 4-space indentation, but each project may have a 
 different indentation policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12286) test-patch pylint plugin should support indent-string option

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12286:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committing

thanks!

 test-patch pylint plugin should support indent-string option
 

 Key: HADOOP-12286
 URL: https://issues.apache.org/jira/browse/HADOOP-12286
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Kengo Seki
Assignee: Kengo Seki
 Fix For: HADOOP-12111

 Attachments: HADOOP-12286.HADOOP-12111.00.patch


 By default, pylint uses 4-space indentation, but each project may have a 
 different indentation policy.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12247) Convert 'unit' to 'junit'

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12247:
--
Attachment: HADOOP-12247.HADOOP-12111.01.patch

-01:
* rebase

 Convert 'unit' to 'junit'
 -

 Key: HADOOP-12247
 URL: https://issues.apache.org/jira/browse/HADOOP-12247
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12247.HADOOP-12111.00.patch, 
 HADOOP-12247.HADOOP-12111.01.patch


 In order to support other unit test systems, we should convert 'unit' to be 
 specifically 'junit'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12248) Add native support for TAP

2015-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652210#comment-14652210
 ] 

Allen Wittenauer commented on HADOOP-12248:
---

Adding HADOOP-12247 as a blocker, since that moves junit out of the main body 
of code and adds the ability to specify multiple test formats.

 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer

 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-03 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-12295:

Description: 
In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get the 
parent node instead of looping over the {{children}} list.
Another nit in current code is:
{code}
  String parent = n.getNetworkLocation();
  String currentPath = getPath(this);
{code}
can be in closure of {{\!isAncestor\(n\)}}

  was:In {{NetworkTopology#InnerNode#remove}}, We can use {{childrenMap}} to 
get the parent node, no need to loop the {{children}} list.


 Improve NetworkTopology#InnerNode#remove logic
 --

 Key: HADOOP-12295
 URL: https://issues.apache.org/jira/browse/HADOOP-12295
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu

 In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
 the parent node instead of looping over the {{children}} list.
 Another nit in current code is:
 {code}
   String parent = n.getNetworkLocation();
   String currentPath = getPath(this);
 {code}
 can be in closure of {{\!isAncestor\(n\)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-03 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-12295:

Description: 
In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get the 
parent node instead of looping over the {{children}} list. This is more 
efficient, since in most cases deleting the parent node does not happen.
Another nit in current code is:
{code}
  String parent = n.getNetworkLocation();
  String currentPath = getPath(this);
{code}
can be in closure of {{\!isAncestor\(n\)}}

  was:
In {{NetworkTopology#InnerNode#remove}}, We can use {{childrenMap}} to get the 
parent node, no need to loop the {{children}} list.
Another nit in current code is:
{code}
  String parent = n.getNetworkLocation();
  String currentPath = getPath(this);
{code}
can be in closure of {{\!isAncestor\(n\)}}


 Improve NetworkTopology#InnerNode#remove logic
 --

 Key: HADOOP-12295
 URL: https://issues.apache.org/jira/browse/HADOOP-12295
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-12295.001.patch


 In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
 the parent node instead of looping over the {{children}} list. This is more 
 efficient, since in most cases deleting the parent node does not happen.
 Another nit in current code is:
 {code}
   String parent = n.getNetworkLocation();
   String currentPath = getPath(this);
 {code}
 can be in closure of {{\!isAncestor\(n\)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-03 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-12295:

Description: In {{NetworkTopology#InnerNode#remove}}, we can use 
{{childrenMap}} to get the parent node instead of looping over the 
{{children}} list.  
(was: In {{ NetworkTopology#InnerNode#remove}}, We can use {{childrenMap}} to 
get the parent node, no need to loop the {{children}} list.)

 Improve NetworkTopology#InnerNode#remove logic
 --

 Key: HADOOP-12295
 URL: https://issues.apache.org/jira/browse/HADOOP-12295
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu

 In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
 the parent node instead of looping over the {{children}} list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-03 Thread Yi Liu (JIRA)
Yi Liu created HADOOP-12295:
---

 Summary: Improve NetworkTopology#InnerNode#remove logic
 Key: HADOOP-12295
 URL: https://issues.apache.org/jira/browse/HADOOP-12295
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu


In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get the 
parent node instead of looping over the {{children}} list.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-03 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-12295:

Attachment: HADOOP-12295.001.patch

 Improve NetworkTopology#InnerNode#remove logic
 --

 Key: HADOOP-12295
 URL: https://issues.apache.org/jira/browse/HADOOP-12295
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-12295.001.patch


 In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
 the parent node instead of looping over the {{children}} list.
 Another nit in current code is:
 {code}
   String parent = n.getNetworkLocation();
   String currentPath = getPath(this);
 {code}
 can be in closure of {{\!isAncestor\(n\)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12295) Improve NetworkTopology#InnerNode#remove logic

2015-08-03 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-12295:

Status: Patch Available  (was: Open)

 Improve NetworkTopology#InnerNode#remove logic
 --

 Key: HADOOP-12295
 URL: https://issues.apache.org/jira/browse/HADOOP-12295
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-12295.001.patch


 In {{NetworkTopology#InnerNode#remove}}, we can use {{childrenMap}} to get 
 the parent node instead of looping over the {{children}} list. This is more 
 efficient, since in most cases deleting the parent node does not happen.
 Another nit in current code is:
 {code}
   String parent = n.getNetworkLocation();
   String currentPath = getPath(this);
 {code}
 can be in closure of {{\!isAncestor\(n\)}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12248) Add native support for TAP

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12248:
--
Status: Patch Available  (was: Open)

 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12248.HADOOP-12111.00.patch


 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11812) Implement listLocatedStatus for ViewFileSystem to speed up split calculation

2015-08-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11812:
-
Labels: 2.6.1-candidate 2.7.2-candidate performance  (was: performance)

 Implement listLocatedStatus for ViewFileSystem to speed up split calculation
 

 Key: HADOOP-11812
 URL: https://issues.apache.org/jira/browse/HADOOP-11812
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.7.0
Reporter: Gera Shegalov
Assignee: Gera Shegalov
Priority: Blocker
  Labels: 2.6.1-candidate, 2.7.2-candidate, performance
 Fix For: 2.8.0

 Attachments: HADOOP-11812.001.patch, HADOOP-11812.002.patch, 
 HADOOP-11812.003.patch, HADOOP-11812.004.patch, HADOOP-11812.005.patch


 ViewFileSystem is currently not taking advantage of MAPREDUCE-1981. This 
 causes several times more RPC overhead and added latency.
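
To see why listLocatedStatus matters for split calculation, here is a small 
sketch using the public FileSystem API (the input path is a placeholder): 
without it, every file costs an extra getFileBlockLocations call; with it, the 
block locations come back with the listing.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class ListLocatedStatusSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    Path dir = new Path("/user/example/input");   // placeholder path

    // Without listLocatedStatus: one listing plus one extra RPC per file.
    for (FileStatus st : fs.listStatus(dir)) {
      BlockLocation[] locs = fs.getFileBlockLocations(st, 0, st.getLen());
      System.out.println(st.getPath() + " -> " + locs.length + " block(s)");
    }

    // With listLocatedStatus: block locations arrive with the listing.
    RemoteIterator<LocatedFileStatus> it = fs.listLocatedStatus(dir);
    while (it.hasNext()) {
      LocatedFileStatus st = it.next();
      System.out.println(st.getPath() + " -> "
          + st.getBlockLocations().length + " block(s)");
    }
  }
}
{code}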



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12247) Convert 'unit' to 'junit'

2015-08-03 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12247:
--
Status: Patch Available  (was: Open)

 Convert 'unit' to 'junit'
 -

 Key: HADOOP-12247
 URL: https://issues.apache.org/jira/browse/HADOOP-12247
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12247.HADOOP-12111.00.patch, 
 HADOOP-12247.HADOOP-12111.01.patch


 In order to support other unit test systems, we should convert 'unit' to be 
 specifically 'junit'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12248) Add native support for TAP

2015-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652370#comment-14652370
 ] 

Hadoop QA commented on HADOOP-12248:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 11s 
{color} | {color:red} The applied patch generated 3 new shellcheck issues 
(total was 22, now 25). {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 29s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12748514/HADOOP-12248.HADOOP-12111.00.patch
 |
| git revision | HADOOP-12111 / 9a3596a |
| Optional Tests | asflicense unit shellcheck |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7403/artifact/patchprocess/diff-patch-shellcheck.txt
 |
| JDK v1.7.0_55  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7403/testReport/ |
| Max memory used | 48MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7403/console |


This message was automatically generated.

 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12248.HADOOP-12111.00.patch


 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12248) Add native support for TAP

2015-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652367#comment-14652367
 ] 

Hadoop QA commented on HADOOP-12248:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7403/console in case of 
problems.

 Add native support for TAP
 --

 Key: HADOOP-12248
 URL: https://issues.apache.org/jira/browse/HADOOP-12248
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-12248.HADOOP-12111.00.patch


 test-patch should support TAP-output files similarly to how we support JUnit 
 XML files.  This is an enabler for bats support for our own unit testing!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12296) when setnetgrent returns 0 in linux, exception should be thrown

2015-08-03 Thread Chang Li (JIRA)
Chang Li created HADOOP-12296:
-

 Summary: when setnetgrent returns 0 in linux, exception should be 
thrown
 Key: HADOOP-12296
 URL: https://issues.apache.org/jira/browse/HADOOP-12296
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li


In Linux, setnetgrent returns 0 when something goes wrong, such as running out 
of memory, an unknown group, or an unavailable service. So errorMessage should 
be set and an exception should be thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12296) when setnetgrent returns 0 in linux, exception should be thrown

2015-08-03 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated HADOOP-12296:
--
Attachment: HADOOP-12296.patch

 when setnetgrent returns 0 in linux, exception should be thrown
 ---

 Key: HADOOP-12296
 URL: https://issues.apache.org/jira/browse/HADOOP-12296
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HADOOP-12296.patch


 In Linux, setnetgrent returns 0 when something goes wrong, such as running 
 out of memory, an unknown group, or an unavailable service. So errorMessage 
 should be set and an exception should be thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12296) when setnetgrent returns 0 in linux, exception should be thrown

2015-08-03 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652480#comment-14652480
 ] 

Allen Wittenauer commented on HADOOP-12296:
---

This code change covers more than Linux.  We need to verify it on 
Solaris/Illumos at a minimum.

 when setnetgrent returns 0 in linux, exception should be thrown
 ---

 Key: HADOOP-12296
 URL: https://issues.apache.org/jira/browse/HADOOP-12296
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HADOOP-12296.patch


 In Linux, setnetgrent returns 0 when something goes wrong, such as running 
 out of memory, an unknown group, or an unavailable service. So errorMessage 
 should be set and an exception should be thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12247) Convert 'unit' to 'junit'

2015-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652372#comment-14652372
 ] 

Hadoop QA commented on HADOOP-12247:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7404/console in case of 
problems.

 Convert 'unit' to 'junit'
 -

 Key: HADOOP-12247
 URL: https://issues.apache.org/jira/browse/HADOOP-12247
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12247.HADOOP-12111.00.patch, 
 HADOOP-12247.HADOOP-12111.01.patch


 In order to support other unit test systems, we should convert 'unit' to be 
 specifically 'junit'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12247) Convert 'unit' to 'junit'

2015-08-03 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14652373#comment-14652373
 ] 

Hadoop QA commented on HADOOP-12247:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:blue}0{color} | {color:blue} @author {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipping @author checks as test-patch.sh has been 
patched. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 27s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12748507/HADOOP-12247.HADOOP-12111.01.patch
 |
| git revision | HADOOP-12111 / 9a3596a |
| Optional Tests | asflicense shellcheck |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| Max memory used | 48MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7404/console |


This message was automatically generated.

 Convert 'unit' to 'junit'
 -

 Key: HADOOP-12247
 URL: https://issues.apache.org/jira/browse/HADOOP-12247
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: yetus
Affects Versions: HADOOP-12111
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-12247.HADOOP-12111.00.patch, 
 HADOOP-12247.HADOOP-12111.01.patch


 In order to support other unit test systems, we should convert 'unit' to be 
 specifically 'junit'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11295) RPC Server Reader thread can't shutdown if RPCCallQueue is full

2015-08-03 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11295:
-
Labels: 2.6.1-candidate  (was: )

 RPC Server Reader thread can't shutdown if RPCCallQueue is full
 ---

 Key: HADOOP-11295
 URL: https://issues.apache.org/jira/browse/HADOOP-11295
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ming Ma
Assignee: Ming Ma
  Labels: 2.6.1-candidate
 Fix For: 2.7.0

 Attachments: HADOOP-11295-2.patch, HADOOP-11295-3.patch, 
 HADOOP-11295-4.patch, HADOOP-11295-5.patch, HADOOP-11295.006.patch, 
 HADOOP-11295.patch


 If the RPC server is asked to stop while the RPCCallQueue is full, 
 {{reader.join()}} will just wait there. That is because:
 1. The reader thread is blocked on {{callQueue.put(call);}}.
 2. When the RPC server is asked to stop, it interrupts all handler threads, 
 so no thread is left to drain the callQueue.
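
The hang is easy to reproduce in isolation (a standalone sketch, not Hadoop's 
{{Server}} code): a thread blocked in {{put()}} on a full queue never finishes, 
so {{join()}} waits forever unless that thread itself is interrupted.

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class FullQueueJoinSketch {
  public static void main(String[] args) throws InterruptedException {
    BlockingQueue<Integer> callQueue = new ArrayBlockingQueue<>(1);
    callQueue.put(0);                       // the queue is now full

    Thread reader = new Thread(() -> {
      try {
        callQueue.put(1);                   // blocks: nobody drains the queue
      } catch (InterruptedException ie) {
        // interrupting the reader is what lets shutdown make progress
      }
    });
    reader.start();

    Thread.sleep(100);
    reader.interrupt();                     // comment this out and join() hangs
    reader.join();
    System.out.println("reader stopped");
  }
}
{code}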



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12296) when setnetgrent returns 0 in linux, exception should be thrown

2015-08-03 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12296?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated HADOOP-12296:
--
Status: Patch Available  (was: Open)

[~jlowe] please help review, thanks!

 when setnetgrent returns 0 in linux, exception should be thrown
 ---

 Key: HADOOP-12296
 URL: https://issues.apache.org/jira/browse/HADOOP-12296
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HADOOP-12296.patch


 In Linux, setnetgrent returns 0 when something goes wrong, such as running 
 out of memory, an unknown group, or an unavailable service. So errorMessage 
 should be set and an exception should be thrown.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)