[jira] [Commented] (HADOOP-12367) Move TestFileUtil's test resources to resources folder

2015-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724624#comment-14724624
 ] 

Hadoop QA commented on HADOOP-12367:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 23s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 54s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 57s | There were no new javadoc 
warning messages. |
| {color:red}-1{color} | release audit |   0m 16s | The applied patch generated 
1 release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 31s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:red}-1{color} | common tests |  22m 12s | Tests failed in 
hadoop-common. |
| | |  57m 52s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.io.compress.TestCodec |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753433/HADOOP-12367.001.patch 
|
| Optional Tests | javadoc javac unit |
| git revision | trunk / 7ad3556 |
| Release Audit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7566/artifact/patchprocess/patchReleaseAuditProblems.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7566/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7566/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7566/console |


This message was automatically generated.

> Move TestFileUtil's test resources to resources folder
> --
>
> Key: HADOOP-12367
> URL: https://issues.apache.org/jira/browse/HADOOP-12367
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-12367.001.patch
>
>
> Little cleanup. Right now we do an antrun step to copy the tar and tgz from 
> the source folder to target folder. We can skip this by just putting it in 
> the resources folder like all the other test resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12336) github_jira_bridge doesn't work

2015-08-31 Thread Kengo Seki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kengo Seki updated HADOOP-12336:

Attachment: HADOOP-12336.HADOOP-12111.01.patch

Thanks! Attaching a revised patch.
I confirmed it works using HADOOP-11820:

{code}
[sekikn@mobile hadoop]$ dev-support/test-patch.sh --basedir=../dev/hadoop 
--project=hadoop HADOOP-11820

(snip)

HADOOP-11820 appears to be a Github PR. Switching Modes.
GITHUB PR #2 is being downloaded at Tue Sep  1 01:00:41 JST 2015 from
https://github.com/aw-altiscale/hadoop/pull/2
Patch from GITHUB PR #2 is being downloaded at Tue Sep  1 01:00:42 JST 2015 from
https://github.com/aw-altiscale/hadoop/pull/2.patch

(snip)

-1 overall

 _ _ __ 
|  ___|_ _(_) |_   _ _ __ ___| |
| |_ / _` | | | | | | '__/ _ \ |
|  _| (_| | | | |_| | | |  __/_|
|_|  \__,_|_|_|\__,_|_|  \___(_)



| Vote |  Subsystem |  Runtime   | Comment

|  +1  |  site  |  0m 00s| HADOOP-12111 passed 
|   0  |   @author  |  0m 00s| Skipping @author checks as test-patch.sh 
|  ||| has been patched.
|  -1  |test4tests  |  0m 00s| The patch doesn't appear to include any 
|  ||| new or modified tests. Please justify why
|  ||| no new tests are needed for this patch.
|  ||| Also please list what manual steps were
|  ||| performed to verify this patch.
|  +1  |  site  |  0m 00s| the patch passed 
|  +1  |asflicense  |  0m 26s| Patch does not generate ASF License 
|  ||| warnings.
|  -1  |shellcheck  |  0m 12s| The applied patch generated 3 new 
|  ||| shellcheck issues (total was 34, now 37).
|  +1  |whitespace  |  0m 00s| Patch has no whitespace issues. 
|  ||  1m 04s| 


|| Subsystem || Report/Notes ||

| JIRA Issue | HADOOP-11820 |
| GITHUB PR | https://github.com/aw-altiscale/hadoop/pull/2 |
| git revision | HADOOP-12111 / b006c9a |
| Optional Tests | asflicense site unit shellcheck |
| uname | Darwin mobile.local 14.5.0 Darwin Kernel Version 14.5.0: Wed Jul 29 
02:26:53 PDT 2015; root:xnu-2782.40.9~1/RELEASE_X86_64 x86_64 |
| Build tool | maven |
| Personality | /Users/sekikn/hadoop/dev-support/personality/hadoop.sh |
| Default Java | 1.7.0_80 |
| shellcheck | v0.3.6 |
| shellcheck | /private/tmp/test-patch-hadoop/29857/diff-patch-shellcheck.txt |
| Max memory used | 46MB |
{code}

> github_jira_bridge doesn't work
> ---
>
> Key: HADOOP-12336
> URL: https://issues.apache.org/jira/browse/HADOOP-12336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12336.HADOOP-12111.00.patch, 
> HADOOP-12336.HADOOP-12111.01.patch
>
>
> The github_jira_bridge (which allows the JIRA bugsystem to switch to github 
> mode) is failing. See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12336) github_jira_bridge doesn't work

2015-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723658#comment-14723658
 ] 

Hadoop QA commented on HADOOP-12336:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7561/console in case of 
problems.

> github_jira_bridge doesn't work
> ---
>
> Key: HADOOP-12336
> URL: https://issues.apache.org/jira/browse/HADOOP-12336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12336.HADOOP-12111.00.patch, 
> HADOOP-12336.HADOOP-12111.01.patch
>
>
> The github_jira_bridge (which allows the JIRA bugsystem to switch to github 
> mode) is failing. See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12336) github_jira_bridge doesn't work

2015-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723660#comment-14723660
 ] 

Hadoop QA commented on HADOOP-12336:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
14s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
6s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 26s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12753315/HADOOP-12336.HADOOP-12111.01.patch
 |
| JIRA Issue | HADOOP-12336 |
| git revision | HADOOP-12111 / b006c9a |
| Optional Tests | asflicense unit shellcheck |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| JDK v1.7.0_55  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7561/testReport/ |
| Max memory used | 49MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7561/console |


This message was automatically generated.



> github_jira_bridge doesn't work
> ---
>
> Key: HADOOP-12336
> URL: https://issues.apache.org/jira/browse/HADOOP-12336
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12336.HADOOP-12111.00.patch, 
> HADOOP-12336.HADOOP-12111.01.patch
>
>
> The github_jira_bridge (which allows the JIRA bugsystem to switch to github 
> mode) is failing. See comments.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12358) FSShell should prompt before deleting directories bigger than a configured size

2015-08-31 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12358?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723767#comment-14723767
 ] 

Xiaoyu Yao commented on HADOOP-12358:
-

Thanks all for the reviews and feedback. Updated to patch v6 with the following 
summary of changes based on the feedback:
1. Add a "-safely" option to the -rm command as [~aw] suggested.
2. Reduce the configuration complexity as [~arpitagarwal] suggested: only one 
key, "hadoop.shell.delete.limit.num.files", is used.
3. Document 'hadoop.shell.delete.limit.num.files' in core-default.xml.
4. Make this feature optional and off by default to avoid breaking any existing 
automation. It is enabled only if all three criteria are met:
* Trash is not enabled or is unable to protect the directory to be deleted,
* -safely is used in the rm command, and
* hadoop.shell.delete.limit.num.files > 0.
This way, admins can decide whether the feature is useful for their use cases 
(a rough sketch of this gating follows below). Especially with HADOOP-11353, an 
admin can alias 'hadoop -rm' to 'hadoop -rm -safely' in .hadooprc, like aliasing 
'rm' to 'rm -i' in Linux deployments, when necessary.

[~arpitagarwal]: Given that HDFS-4995 and HDFS-8046 have improved the NN locking 
issue of getContentSummary, is it OK to investigate the performance improvement 
in a separate JIRA?
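
For readers skimming the thread, below is a minimal hedged sketch of the gating 
described in item 4 above. The class and method names are made up for 
illustration and are not part of the v6 patch; only the configuration key comes 
from the discussion.

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.ContentSummary;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical illustration of the -safely gating, not the actual patch.
public class SafeDeleteCheck {
  public static boolean needsConfirmation(Configuration conf, FileSystem fs,
      Path dir, boolean safely) throws IOException {
    // Off by default: only active when -safely is passed and the limit is set.
    long limit = conf.getLong("hadoop.shell.delete.limit.num.files", 0);
    if (!safely || limit <= 0) {
      return false;
    }
    // Prompt when the directory holds more entries than the configured limit.
    ContentSummary summary = fs.getContentSummary(dir);
    return summary.getFileCount() + summary.getDirectoryCount() > limit;
  }
}
{code}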

> FSShell should prompt before deleting directories bigger than a configured 
> size
> ---
>
> Key: HADOOP-12358
> URL: https://issues.apache.org/jira/browse/HADOOP-12358
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
> Attachments: HADOOP-12358.00.patch, HADOOP-12358.01.patch, 
> HADOOP-12358.02.patch, HADOOP-12358.03.patch, HADOOP-12358.04.patch, 
> HADOOP-12358.05.patch, HADOOP-12358.06.patch
>
>
> We have seen many cases with customers deleting data inadvertently with 
> -skipTrash. The FSShell should prompt user if the size of the data or the 
> number of files being deleted is bigger than a threshold even though 
> -skipTrash is being used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12334) Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage Throttling after retries

2015-08-31 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12334:
---
Attachment: HADOOP-12334.04.patch

Fixed some persistent whitespace errors

> Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage 
> Throttling after retries
> -
>
> Key: HADOOP-12334
> URL: https://issues.apache.org/jira/browse/HADOOP-12334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12334.01.patch, HADOOP-12334.02.patch, 
> HADOOP-12334.03.patch, HADOOP-12334.04.patch
>
>
> HADOOP-11693 mitigated the problem of HMaster aborting the regionserver due 
> to an Azure Storage throttling event during HBase WAL archival. This was 
> achieved by applying an intensive exponential retry when throttling occurred.
> As a second level of mitigation, we will change the mode of the copy operation 
> if it fails even after all retries, i.e. we will do a client-side copy of the 
> blob and then copy it back to the destination. This operation will not be 
> subject to throttling and hence should provide a stronger mitigation. However, 
> it is more expensive, so we do it only in the case where we fail after all 
> retries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12334) Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage Throttling after retries

2015-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723908#comment-14723908
 ] 

Hadoop QA commented on HADOOP-12334:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:red}-1{color} | javac |   8m 28s | The patch appears to cause the 
build to fail. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753351/HADOOP-12334.04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / caa04de |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7562/console |


This message was automatically generated.

> Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage 
> Throttling after retries
> -
>
> Key: HADOOP-12334
> URL: https://issues.apache.org/jira/browse/HADOOP-12334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12334.01.patch, HADOOP-12334.02.patch, 
> HADOOP-12334.03.patch, HADOOP-12334.04.patch
>
>
> HADOOP-11693 mitigated the problem of HMaster aborting the regionserver due 
> to an Azure Storage throttling event during HBase WAL archival. This was 
> achieved by applying an intensive exponential retry when throttling occurred.
> As a second level of mitigation, we will change the mode of the copy operation 
> if it fails even after all retries, i.e. we will do a client-side copy of the 
> blob and then copy it back to the destination. This operation will not be 
> subject to throttling and hence should provide a stronger mitigation. However, 
> it is more expensive, so we do it only in the case where we fail after all 
> retries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12344) validateSocketPathSecurity0 message could be better

2015-08-31 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723958#comment-14723958
 ] 

Colin Patrick McCabe commented on HADOOP-12344:
---

bq. ..the secret of having tests match is to make some of the text (e.g. the 
wiki link) a string constant in the source, with the test using a .contains() 
probe for it. That way, changes in the text are automatically picked up in the 
test
In this case, the string is being generated in the C source and checked in the 
Java tests, so it would be difficult to have a constant shared between the two.

The new text looks fine to me, but the code needs work.

{code}
-  for (check[0] = '/', check[1] = '\0', rest = path, token = "";
+  for (check[0] = '/', check[1] = '\0', rest=strdup(path), rest_free=rest, 
token = "";
{code}
This isn't correctly checking for {{strdup}} returning NULL.

{code}
+  jthr = newIOException(env, "failed to stat a path component: '%s' in 
'%s'.  "
+  "error code %d (%s).  Ensure that the path is configured 
correctly.", check, path, ret, terror(ret));
{code}
More than 80 columns

{code}
+  perm_msg=(char *)malloc(PATH_MAX+200);
{code}
You don't need a typecast here.  You do need to check for NULL if you use 
dynamic allocation (why are you using dynamic allocation anyway?)

{code}
+  snprintf(perm_msg,PATH_MAX+200,"is world-writable. This might help: 
'chmod o-w %s'. ",check);
{code}
PATH_MAX+200 is really gross.  Why don't you just use newIOException, it 
handles all this string manipulation and dynamic allocation for you?

Please fix this stuff before you commit.

> validateSocketPathSecurity0 message could be better
> ---
>
> Key: HADOOP-12344
> URL: https://issues.apache.org/jira/browse/HADOOP-12344
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Casey Brotherton
>Assignee: Casey Brotherton
>Priority: Trivial
> Attachments: HADOOP-12344.001.patch, HADOOP-12344.002.patch, 
> HADOOP-12344.patch
>
>
> When a socket path does not have the correct permissions, an error is thrown.
> That error just has the failing component of the path and not the entire path 
> of the socket.
> The entire path of the socket could be printed out to allow for a direct 
> check of the permissions of the entire path.
> {code}
> java.io.IOException: the path component: '/' is world-writable.  Its 
> permissions are 0077.  Please fix this or select a different socket path.
>   at 
> org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native 
> Method)
>   at 
> org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)
> ...
> {code}
> The error message could also provide the socket path:
> {code}
> java.io.IOException: the path component: '/' is world-writable.  Its 
> permissions are 0077.  Please fix this or select a different socket path than 
> '/var/run/hdfs-sockets/dn'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12334) Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage Throttling after retries

2015-08-31 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12334:
---
Attachment: HADOOP-12334.04.patch

Fix some whitespace issues

> Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage 
> Throttling after retries
> -
>
> Key: HADOOP-12334
> URL: https://issues.apache.org/jira/browse/HADOOP-12334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12334.01.patch, HADOOP-12334.02.patch, 
> HADOOP-12334.03.patch, HADOOP-12334.04.patch
>
>
> HADOOP-11693 mitigated the problem of HMaster aborting the regionserver due 
> to an Azure Storage throttling event during HBase WAL archival. This was 
> achieved by applying an intensive exponential retry when throttling occurred.
> As a second level of mitigation, we will change the mode of the copy operation 
> if it fails even after all retries, i.e. we will do a client-side copy of the 
> blob and then copy it back to the destination. This operation will not be 
> subject to throttling and hence should provide a stronger mitigation. However, 
> it is more expensive, so we do it only in the case where we fail after all 
> retries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12334) Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage Throttling after retries

2015-08-31 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12334:
---
Attachment: (was: HADOOP-12334.04.patch)

> Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage 
> Throttling after retries
> -
>
> Key: HADOOP-12334
> URL: https://issues.apache.org/jira/browse/HADOOP-12334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12334.01.patch, HADOOP-12334.02.patch, 
> HADOOP-12334.03.patch, HADOOP-12334.04.patch
>
>
> HADOOP-11693 mitigated the problem of HMaster aborting the regionserver due 
> to an Azure Storage throttling event during HBase WAL archival. This was 
> achieved by applying an intensive exponential retry when throttling occurred.
> As a second level of mitigation, we will change the mode of the copy operation 
> if it fails even after all retries, i.e. we will do a client-side copy of the 
> blob and then copy it back to the destination. This operation will not be 
> subject to throttling and hence should provide a stronger mitigation. However, 
> it is more expensive, so we do it only in the case where we fail after all 
> retries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12334) Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage Throttling after retries

2015-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724076#comment-14724076
 ] 

Hadoop QA commented on HADOOP-12334:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m  7s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 59s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 59s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 23s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 5  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 27s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 35s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 49s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   1m 12s | Tests passed in 
hadoop-azure. |
| | |  38m 59s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753372/HADOOP-12334.04.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / caa04de |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7563/artifact/patchprocess/whitespace.txt
 |
| hadoop-azure test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7563/artifact/patchprocess/testrun_hadoop-azure.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7563/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7563/console |


This message was automatically generated.

> Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage 
> Throttling after retries
> -
>
> Key: HADOOP-12334
> URL: https://issues.apache.org/jira/browse/HADOOP-12334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12334.01.patch, HADOOP-12334.02.patch, 
> HADOOP-12334.03.patch, HADOOP-12334.04.patch
>
>
> HADOOP-11693 mitigated the problem of HMaster aborting the regionserver due 
> to an Azure Storage throttling event during HBase WAL archival. This was 
> achieved by applying an intensive exponential retry when throttling occurred.
> As a second level of mitigation, we will change the mode of the copy operation 
> if it fails even after all retries, i.e. we will do a client-side copy of the 
> blob and then copy it back to the destination. This operation will not be 
> subject to throttling and hence should provide a stronger mitigation. However, 
> it is more expensive, so we do it only in the case where we fail after all 
> retries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12334) Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage Throttling after retries

2015-08-31 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12334:
---
Attachment: HADOOP-12334.05.patch

Removing yet more (and hopefully last few) whitespace errors

> Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage 
> Throttling after retries
> -
>
> Key: HADOOP-12334
> URL: https://issues.apache.org/jira/browse/HADOOP-12334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12334.01.patch, HADOOP-12334.02.patch, 
> HADOOP-12334.03.patch, HADOOP-12334.04.patch, HADOOP-12334.05.patch
>
>
> HADOOP-11693 mitigated the problem of HMaster aborting the regionserver due 
> to an Azure Storage throttling event during HBase WAL archival. This was 
> achieved by applying an intensive exponential retry when throttling occurred.
> As a second level of mitigation, we will change the mode of the copy operation 
> if it fails even after all retries, i.e. we will do a client-side copy of the 
> blob and then copy it back to the destination. This operation will not be 
> subject to throttling and hence should provide a stronger mitigation. However, 
> it is more expensive, so we do it only in the case where we fail after all 
> retries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12365) plugin for password, key, etc detection

2015-08-31 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12365:
-

 Summary: plugin for password, key, etc detection
 Key: HADOOP-12365
 URL: https://issues.apache.org/jira/browse/HADOOP-12365
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Allen Wittenauer


It'd be nice to have a plugin that checks for these security vulnerabilities, 
especially for non-Apache source.
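
As a rough, hedged sketch of what such a check could flag (the real test-patch 
plugins are bash scripts, and these patterns and names are only assumptions):

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Hypothetical illustration only: flag added lines that look like hard-coded
// credentials or private keys. The patterns are assumptions, not a real plugin.
public class SecretScan {
  private static final Pattern SUSPICIOUS = Pattern.compile(
      "(?i)(password|passwd|secret|api[_-]?key)\\s*[:=]|BEGIN [A-Z ]*PRIVATE KEY");

  public static List<String> scan(List<String> addedLines) {
    List<String> hits = new ArrayList<String>();
    for (String line : addedLines) {
      if (SUSPICIOUS.matcher(line).find()) {
        hits.add(line);
      }
    }
    return hits;
  }
}
{code}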



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11350) The size of header buffer of HttpServer is too small when HTTPS is enabled

2015-08-31 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11350?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11350:
-
Fix Version/s: 2.6.1

Pulled this into 2.6.1 after [~ajisakaa] verified that the patch applies 
cleanly. Ran compilation and TestHttpServer and TestSSLHttpServer before the push.


> The size of header buffer of HttpServer is too small when HTTPS is enabled
> --
>
> Key: HADOOP-11350
> URL: https://issues.apache.org/jira/browse/HADOOP-11350
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HADOOP-11350.patch, HADOOP-11350.patch, 
> HADOOP-11350.patch, HADOOP-11350.patch
>
>
> {code}
> curl -k  -vvv -i -L --negotiate -u : https://:50070
> < HTTP/1.1 413 FULL head
> HTTP/1.1 413 FULL head
> < Connection: close
> Connection: close
> < Server: Jetty(6.1.26)
> Server: Jetty(6.1.26)
> {code}
> For some users, the SPNEGO token is too large for the default header buffer 
> used by Jetty. 
> Though the issue is fixed for HTTP connections (via HADOOP-8816), HTTPS 
> connections need to be fixed as well. 
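
For context, a hedged sketch of the kind of Jetty 6 change the description 
implies; the 64KB size and the connector setup are assumptions, not values 
taken from the attached patches.

{code}
import org.mortbay.jetty.security.SslSocketConnector;

// Sketch only (assumed values): enlarge the HTTPS connector's header buffer
// so that large SPNEGO tokens no longer trigger "413 FULL head".
public class LargeHeaderSslConnector {
  public static SslSocketConnector create(int port) {
    SslSocketConnector connector = new SslSocketConnector();
    connector.setPort(port);
    connector.setHeaderBufferSize(64 * 1024);
    return connector;
  }
}
{code}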



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11482) Use correct UGI when KMSClientProvider is called by a proxy user

2015-08-31 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11482:
-
Fix Version/s: 2.6.1

Pulled this into 2.6.1 after [~ajisakaa] verified that the patch applies 
cleanly. Ran compilation and TestKMS before the push.


> Use correct UGI when KMSClientProvider is called by a proxy user
> 
>
> Key: HADOOP-11482
> URL: https://issues.apache.org/jira/browse/HADOOP-11482
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Arun Suresh
>Assignee: Arun Suresh
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HADOOP-11482.1.patch, HADOOP-11482.2.patch
>
>
> Long-living clients of HDFS (e.g. OOZIE) use cached DFSClients, which in 
> turn use a cached KMSClientProvider to talk to KMS.
> Before an MR job is run, the job client calls the 
> {{DFSClient.addDelegationTokens()}} method, which calls 
> {{addDelegationTokens()}} on the {{KMSClientProvider}} to get any delegation 
> token associated with the user.
> Unfortunately, this call uses a cached 
> {{DelegationTokenAuthenticationURL.Token}} instance, which can cause the 
> {{SignerSecretProvider}} implementation of the {{AuthenticationFilter}} at 
> the KMS server end to fail validation, which results in the MR job itself 
> failing.
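
To make the call path concrete, here is a generic, hedged sketch of a proxy 
user fetching delegation tokens through a cached FileSystem. It is standard UGI 
usage, not the fix itself, and the renewer name is an assumption.

{code}
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.security.token.Token;

// Generic illustration of the proxy-user path described above, not the fix.
public class ProxyUserTokens {
  public static Token<?>[] fetchTokens(final Configuration conf, String user)
      throws Exception {
    UserGroupInformation proxyUgi = UserGroupInformation.createProxyUser(
        user, UserGroupInformation.getLoginUser());
    return proxyUgi.doAs(new PrivilegedExceptionAction<Token<?>[]>() {
      @Override
      public Token<?>[] run() throws Exception {
        // On an encrypted cluster this also reaches the KMSClientProvider,
        // which must act as the proxy user's UGI rather than a cached one.
        Credentials creds = new Credentials();
        // "oozie" is an assumed renewer name, used here only for illustration.
        return FileSystem.get(conf).addDelegationTokens("oozie", creds);
      }
    });
  }
}
{code}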



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12168) Clean undeclared used dependencies

2015-08-31 Thread Gabor Liptak (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Liptak updated HADOOP-12168:
--
Attachment: HADOOP-12168.6.patch

> Clean undeclared used dependencies
> --
>
> Key: HADOOP-12168
> URL: https://issues.apache.org/jira/browse/HADOOP-12168
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 3.0.0
>Reporter: Gabor Liptak
>Assignee: Gabor Liptak
> Attachments: HADOOP-12168.1.patch, HADOOP-12168.2.patch, 
> HADOOP-12168.3.patch, HADOOP-12168.4.patch, HADOOP-12168.5.patch, 
> HADOOP-12168.6.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12366) expose calculated paths

2015-08-31 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer reassigned HADOOP-12366:
-

Assignee: Allen Wittenauer

> expose calculated paths
> ---
>
> Key: HADOOP-12366
> URL: https://issues.apache.org/jira/browse/HADOOP-12366
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> It would be useful for 3rd party apps to know the locations of things when 
> hadoop is running without explicit path env vars set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12366) expose calculated paths

2015-08-31 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12366:
-

 Summary: expose calculated paths
 Key: HADOOP-12366
 URL: https://issues.apache.org/jira/browse/HADOOP-12366
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Allen Wittenauer


It would be useful for 3rd party apps to know the locations of things when 
hadoop is running without explicit path env vars set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12366) expose calculated paths

2015-08-31 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724330#comment-14724330
 ] 

Allen Wittenauer commented on HADOOP-12366:
---

Blocking this on HADOOP-10787 since HADOOP_TOOL_PATH should be cleaned up first.

> expose calculated paths
> ---
>
> Key: HADOOP-12366
> URL: https://issues.apache.org/jira/browse/HADOOP-12366
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> It would be useful for 3rd party apps to know the locations of things when 
> hadoop is running without explicit path env vars set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12366) expose calculated paths

2015-08-31 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724330#comment-14724330
 ] 

Allen Wittenauer edited comment on HADOOP-12366 at 8/31/15 10:52 PM:
-

Blocking this on HADOOP-10787 since HADOOP_TOOLS_PATH should be cleaned up 
first.


was (Author: aw):
Blocking this on HADOOP-10787 since HADOOP_TOOL_PATH should be cleaned up first.

> expose calculated paths
> ---
>
> Key: HADOOP-12366
> URL: https://issues.apache.org/jira/browse/HADOOP-12366
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> It would be useful for 3rd party apps to know the locations of things when 
> hadoop is running without explicit path env vars set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12364) Deleting pid file after stop is causing the daemons to keep restarting

2015-08-31 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724344#comment-14724344
 ] 

Allen Wittenauer commented on HADOOP-12364:
---

A bug, but it's worth pointing out that this also sounds like a 
misconfiguration of HADOOP_SLEEP_TIMEOUT for your installation.  Whatever is 
restarting it should *also* wait HADOOP_SLEEP_TIMEOUT before doing that.

> Deleting pid file after stop is causing the daemons to keep restarting 
> ---
>
> Key: HADOOP-12364
> URL: https://issues.apache.org/jira/browse/HADOOP-12364
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Siqi Li
>Priority: Blocker
> Attachments: HADOOP-12364.v1.patch
>
>
> pid files are deleted 5 seconds after we stop the daemons. If a start 
> command is executed within those 5 seconds, the pid file will be overwritten 
> with a new pid. However, this pid file is then deleted by the earlier stop 
> command. This causes the monitoring service to lose track of the daemons, 
> and hence to keep rebooting them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12364) Deleting pid file after stop is causing the daemons to keep restarting

2015-08-31 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724344#comment-14724344
 ] 

Allen Wittenauer edited comment on HADOOP-12364 at 8/31/15 10:59 PM:
-

A bug, but it's worth pointing out that this also sounds like a 
misconfiguration of HADOOP_STOP_TIMEOUT for your installation.  Whatever is 
restarting it should *also* wait HADOOP_STOP_TIMEOUT before doing that.


was (Author: aw):
A bug, but it's worth pointing out that this also sounds like a 
misconfiguration of HADOOP_SLEEP_TIMEOUT for your installation.  Whatever is 
restarting it should *also* wait HADOOP_SLEEP_TIMEOUT before doing that.

> Deleting pid file after stop is causing the daemons to keep restarting 
> ---
>
> Key: HADOOP-12364
> URL: https://issues.apache.org/jira/browse/HADOOP-12364
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Siqi Li
>Priority: Blocker
> Attachments: HADOOP-12364.v1.patch
>
>
> pid files are deleted 5 seconds after we stop the daemons. If a start 
> command is executed within those 5 seconds, the pid file will be overwritten 
> with a new pid. However, this pid file is then deleted by the earlier stop 
> command. This causes the monitoring service to lose track of the daemons, 
> and hence to keep rebooting them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11506) Configuration variable expansion regex expensive for long values

2015-08-31 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11506:
-
Fix Version/s: 2.6.1

Pulled this into 2.6.1 after [~ajisakaa] verified that the patch applies 
cleanly. (The 2.7 patch itself departed a bit from trunk). Ran compilation and 
TestConfiguration before the push.

> Configuration variable expansion regex expensive for long values
> 
>
> Key: HADOOP-11506
> URL: https://issues.apache.org/jira/browse/HADOOP-11506
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: Dmitriy V. Ryaboy
>Assignee: Gera Shegalov
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HADOOP-11506.001.patch, HADOOP-11506.002.patch, 
> HADOOP-11506.003.patch, HADOOP-11506.004.patch
>
>
> Profiling several large Hadoop jobs, we discovered that a surprising amount 
> of time was spent inside Configuration.get, more specifically, in regex 
> matching caused by the substituteVars call.
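
As a hedged illustration of where that cost shows up (the property names and 
sizes below are made up, and this is not a benchmark from the issue):

{code}
import org.apache.hadoop.conf.Configuration;

// Illustration only: a long value containing ${...} references makes every
// Configuration.get() pay the substituteVars regex cost over the whole string.
public class ExpansionExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("base.dir", "/data");

    StringBuilder longValue = new StringBuilder("${base.dir}");
    for (int i = 0; i < 10000; i++) {
      longValue.append("/segment-").append(i);   // build an unusually long value
    }
    conf.set("long.path", longValue.toString());

    // Each call re-runs variable expansion over the entire value.
    String expanded = conf.get("long.path");
    System.out.println(expanded.length());
  }
}
{code}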



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12334) Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage Throttling after retries

2015-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724397#comment-14724397
 ] 

Hadoop QA commented on HADOOP-12334:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 58s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |  10m  5s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  11m 51s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   0m 24s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 39s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 39s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 56s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   1m 24s | Tests passed in 
hadoop-azure. |
| | |  45m 24s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753389/HADOOP-12334.05.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 826ae1c |
| hadoop-azure test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7564/artifact/patchprocess/testrun_hadoop-azure.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7564/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7564/console |


This message was automatically generated.

> Change Mode Of Copy Operation of HBase WAL Archiving to bypass Azure Storage 
> Throttling after retries
> -
>
> Key: HADOOP-12334
> URL: https://issues.apache.org/jira/browse/HADOOP-12334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
> Attachments: HADOOP-12334.01.patch, HADOOP-12334.02.patch, 
> HADOOP-12334.03.patch, HADOOP-12334.04.patch, HADOOP-12334.05.patch
>
>
> HADOOP-11693 mitigated the problem of HMaster aborting the regionserver due 
> to an Azure Storage throttling event during HBase WAL archival. This was 
> achieved by applying an intensive exponential retry when throttling occurred.
> As a second level of mitigation, we will change the mode of the copy operation 
> if it fails even after all retries, i.e. we will do a client-side copy of the 
> blob and then copy it back to the destination. This operation will not be 
> subject to throttling and hence should provide a stronger mitigation. However, 
> it is more expensive, so we do it only in the case where we fail after all 
> retries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11684) S3a to use thread pool that blocks clients

2015-08-31 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724435#comment-14724435
 ] 

Aaron Fabbri commented on HADOOP-11684:
---

[~Thomas Demoor] reading your patch... Curious, did you consider just using 
[CallerRunsPolicy|http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ThreadPoolExecutor.CallerRunsPolicy.html]?
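
For anyone following along, a minimal sketch of the CallerRunsPolicy idea (pool 
and queue sizes are arbitrary, not values from the patch): when the bounded 
queue is full, the submitting thread runs the task itself, which throttles the 
client without throwing.

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch of the JDK CallerRunsPolicy approach; sizes are arbitrary.
public class BlockingByCallerRuns {
  public static ThreadPoolExecutor newUploadPool() {
    return new ThreadPoolExecutor(
        4, 4,                                   // core and max threads
        60L, TimeUnit.SECONDS,
        new LinkedBlockingQueue<Runnable>(16),  // bounded work queue
        // When the queue is full, execute() runs the task on the caller's
        // thread instead of throwing RejectedExecutionException.
        new ThreadPoolExecutor.CallerRunsPolicy());
  }
}
{code}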
  

> S3a to use thread pool that blocks clients
> --
>
> Key: HADOOP-11684
> URL: https://issues.apache.org/jira/browse/HADOOP-11684
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
>Reporter: Thomas Demoor
>Assignee: Thomas Demoor
> Attachments: HADOOP-11684-001.patch, HADOOP-11684-002.patch
>
>
> Currently, if fs.s3a.max.total.tasks are queued and another (part)upload 
> wants to start, a RejectedExecutionException is thrown. 
> We should use a threadpool that blocks clients, nicely throttling them, 
> rather than throwing an exception. For instance, something similar to 
> https://github.com/apache/incubator-s4/blob/master/subprojects/s4-comm/src/main/java/org/apache/s4/comm/staging/BlockingThreadPoolExecutorService.java



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2015-08-31 Thread Rashmi Vinayak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724449#comment-14724449
 ] 

Rashmi Vinayak commented on HADOOP-11828:
-

Hi [~jack_liuquan] and [~drankye],

Thank you both for the amazing effort on this feature implementation. I was 
wondering what we can do to not lose momentum. Perhaps assigning priorities to 
the major change suggestions could help?

Thanks!

> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HDFS-7715-hhxor-decoder.patch, 
> HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction. This JIRA aims to introduce Hitchhiker to the 
> HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12364) Deleting pid file after stop is causing the daemons to keep restarting

2015-08-31 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724470#comment-14724470
 ] 

Sangjin Lee commented on HADOOP-12364:
--

There are service management mechanisms (e.g. monit restart) that are somewhat 
haphazard in terms of the stop and start timing. We came across that issue.

I wouldn't quite call this bug a blocker (probably a minor bug), but it's 
definitely a useful defensive fix. +1 for it.

> Deleting pid file after stop is causing the daemons to keep restarting 
> ---
>
> Key: HADOOP-12364
> URL: https://issues.apache.org/jira/browse/HADOOP-12364
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Siqi Li
>Priority: Blocker
> Attachments: HADOOP-12364.v1.patch
>
>
> pid files are deleted 5 seconds after we stop the daemons. If a start 
> command is executed within those 5 seconds, the pid file will be overwritten 
> with a new pid. However, this pid file is then deleted by the earlier stop 
> command. This causes the monitoring service to lose track of the daemons, 
> and hence to keep rebooting them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12364) Deleting pid file after stop is causing the daemons to keep restarting

2015-08-31 Thread Siqi Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12364?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Siqi Li updated HADOOP-12364:
-
Priority: Minor  (was: Blocker)

> Deleting pid file after stop is causing the daemons to keep restarting 
> ---
>
> Key: HADOOP-12364
> URL: https://issues.apache.org/jira/browse/HADOOP-12364
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Siqi Li
>Priority: Minor
> Attachments: HADOOP-12364.v1.patch
>
>
> pid files are deleted 5 seconds after we stop the daemons. If a start 
> command is executed within those 5 seconds, the pid file will be overwritten 
> with a new pid. However, this pid file is then deleted by the earlier stop 
> command. This causes the monitoring service to lose track of the daemons, 
> and hence to keep rebooting them.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12367) Move TestFileUtil's test resources to resources folder

2015-08-31 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-12367:


 Summary: Move TestFileUtil's test resources to resources folder
 Key: HADOOP-12367
 URL: https://issues.apache.org/jira/browse/HADOOP-12367
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.1
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor


Little cleanup. Right now we do an antrun step to copy the tar and tgz from the 
source folder to target folder. We can skip this by just putting it in the 
resources folder like all the other test resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12367) Move TestFileUtil's test resources to resources folder

2015-08-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12367:
-
Attachment: HADOOP-12367.001.patch

Patch attached, binary rename then removing the antrun step.

> Move TestFileUtil's test resources to resources folder
> --
>
> Key: HADOOP-12367
> URL: https://issues.apache.org/jira/browse/HADOOP-12367
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-12367.001.patch
>
>
> Little cleanup. Right now we do an antrun step to copy the tar and tgz from 
> the source folder to target folder. We can skip this by just putting it in 
> the resources folder like all the other test resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12367) Move TestFileUtil's test resources to resources folder

2015-08-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12367:
-
Status: Patch Available  (was: Open)

> Move TestFileUtil's test resources to resources folder
> --
>
> Key: HADOOP-12367
> URL: https://issues.apache.org/jira/browse/HADOOP-12367
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-12367.001.patch
>
>
> Little cleanup. Right now we do an antrun step to copy the tar and tgz from 
> the source folder to target folder. We can skip this by just putting it in 
> the resources folder like all the other test resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11295) RPC Server Reader thread can't shutdown if RPCCallQueue is full

2015-08-31 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11295:
-
Fix Version/s: 2.6.1

Pulled this into 2.6.1 following [~ajisakaa]'s patch. Ran compilation and 
TestRPC before the push.


> RPC Server Reader thread can't shutdown if RPCCallQueue is full
> ---
>
> Key: HADOOP-11295
> URL: https://issues.apache.org/jira/browse/HADOOP-11295
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Ming Ma
>Assignee: Ming Ma
>  Labels: 2.6.1-candidate
> Fix For: 2.6.1, 2.7.0
>
> Attachments: HADOOP-11295-2.patch, HADOOP-11295-3.patch, 
> HADOOP-11295-4.patch, HADOOP-11295-5.patch, HADOOP-11295.006.patch, 
> HADOOP-11295.branch-2.6.patch, HADOOP-11295.patch
>
>
> If the RPC server is asked to stop when the RPCCallQueue is full, 
> {{reader.join()}} will just wait there. That is because:
> 1. The reader thread is blocked on {{callQueue.put(call);}}.
> 2. When the RPC server is asked to stop, it interrupts all handler threads, 
> and thus no threads will drain the callQueue.
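
A standalone, hedged illustration of that hang using plain java.util.concurrent 
(not the Hadoop RPC code): a thread blocked in put() on a full queue only exits 
if it is interrupted or the queue is drained, so a join() on it otherwise waits 
forever.

{code}
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal illustration of the described shutdown hang, not the RPC server code.
public class BlockedReaderDemo {
  public static void main(String[] args) throws InterruptedException {
    final BlockingQueue<String> callQueue = new ArrayBlockingQueue<String>(1);
    callQueue.put("pending-call");               // the queue is now full

    Thread reader = new Thread(new Runnable() {
      @Override
      public void run() {
        try {
          callQueue.put("next-call");            // blocks: nothing drains the queue
        } catch (InterruptedException e) {
          // an interrupt is what lets the blocked reader exit
        }
      }
    });
    reader.start();

    Thread.sleep(100);
    // Comment out the interrupt below and join() waits forever -- the
    // situation described in this JIRA.
    reader.interrupt();
    reader.join();
  }
}
{code}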



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-12368) Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract

2015-08-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang moved HDFS-8997 to HADOOP-12368:


Affects Version/s: (was: 2.7.1)
   2.7.1
 Target Version/s: 2.8.0  (was: 2.8.0)
  Key: HADOOP-12368  (was: HDFS-8997)
  Project: Hadoop Common  (was: Hadoop HDFS)

> Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract
> --
>
> Key: HADOOP-12368
> URL: https://issues.apache.org/jira/browse/HADOOP-12368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Attachments: hdfs-8997.001.patch
>
>
> These are test base classes that need to be subclassed to run, can mark as 
> abstract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12368) Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract

2015-08-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12368:
-
   Resolution: Fixed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Thanks Yi for reviewing, committed to trunk and branch-2.

> Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract
> --
>
> Key: HADOOP-12368
> URL: https://issues.apache.org/jira/browse/HADOOP-12368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: hdfs-8997.001.patch
>
>
> These are test base classes that need to be subclassed to run, can mark as 
> abstract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12367) Move TestFileUtil's test resources to resources folder

2015-08-31 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724540#comment-14724540
 ] 

Yi Liu commented on HADOOP-12367:
-

+1 pending Jenkins.  Thanks for the cleanup.

> Move TestFileUtil's test resources to resources folder
> --
>
> Key: HADOOP-12367
> URL: https://issues.apache.org/jira/browse/HADOOP-12367
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-12367.001.patch
>
>
> Little cleanup. Right now we do an antrun step to copy the tar and tgz from 
> the source folder to target folder. We can skip this by just putting it in 
> the resources folder like all the other test resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12368) Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract

2015-08-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724578#comment-14724578
 ] 

Hudson commented on HADOOP-12368:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8376 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8376/])
HADOOP-12368. Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract. 
(wang: rev 7ad3556ed38560585579172aa68356f37b2288c8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java


> Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract
> --
>
> Key: HADOOP-12368
> URL: https://issues.apache.org/jira/browse/HADOOP-12368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: hdfs-8997.001.patch
>
>
> These are test base classes that need to be subclassed to run, can mark as 
> abstract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12118) Validate xml configuration files with XML Schema

2015-08-31 Thread Kengo Seki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724763#comment-14724763
 ] 

Kengo Seki commented on HADOOP-12118:
-

Sorry [~gliptak] for the late response. I agree with the backport, since it 
should be useful for 2.x users, but I haven't decided yet whether we should 
adopt xsd for validation or not.
One possibility is to use xsd for the basic structure validation and xpath (for 
example) for the advanced validations I mentioned above, avoiding direct xml 
walking. Rough sketches of both steps follow.
{code}
XPath xpath = XPathFactory.newInstance().newXPath();
NodeList nodes = (NodeList) 
xpath.evaluate("/configuration/property/name/text()",
new InputSource("core-site.xml"), XPathConstants.NODESET);
Set<String> s = new HashSet<String>();
for (int i = 0; i < nodes.getLength(); i++) {
  // e.g. collect property names so duplicates or misspellings can be flagged
  s.add(nodes.item(i).getNodeValue());
}
{code}

> Validate xml configuration files with XML Schema
> 
>
> Key: HADOOP-12118
> URL: https://issues.apache.org/jira/browse/HADOOP-12118
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Christopher Tubbs
> Attachments: HADOOP-7947.branch-2.1.patch, hadoop-configuration.xsd
>
>
> I spent an embarrassingly long time today trying to figure out why the 
> following wouldn't work.
> {code}
> <property>
>   <name>fs.defaultFS</name>
>   <value>hdfs://localhost:9000</value>
> </property>
> {code}
> I just kept getting an error about no authority for {{fs.defaultFS}}, with a 
> value of {{file:///}}, which made no sense... because I knew it was there.
> The problem was that the {{core-site.xml}} was parsed entirely without any 
> validation. This seems incorrect. The very least that could be done is a 
> simple XML Schema validation against an XSD, before parsing. That way, users 
> will get immediate failures on common typos and other problems in the xml 
> configuration files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724765#comment-14724765
 ] 

Hadoop QA commented on HADOOP-10420:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m 45s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 40s | There were no new javac warning 
messages. |
| {color:red}-1{color} | javadoc |   9m 53s | The applied patch generated  4  
additional warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 20s | The applied patch generated  7 
new checkstyle issues (total was 276, now 283). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 48s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 14s | Tests passed in 
hadoop-openstack. |
| | |  37m  8s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753450/HADOOP-10420-009.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 7ad3556 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7567/artifact/patchprocess/diffJavadocWarnings.txt
 |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7567/artifact/patchprocess/diffcheckstylehadoop-openstack.txt
 |
| hadoop-openstack test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7567/artifact/patchprocess/testrun_hadoop-openstack.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7567/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7567/console |


This message was automatically generated.

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420-006.patch, 
> HADOOP-10420-007.patch, HADOOP-10420-008.patch, HADOOP-10420-009.patch, 
> HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-08-31 Thread Jim VanOosten (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim VanOosten updated HADOOP-10420:
---
Attachment: HADOOP-10420-009.patch

Process the storage URL header before trying the body.

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420-006.patch, 
> HADOOP-10420-007.patch, HADOOP-10420-008.patch, HADOOP-10420-009.patch, 
> HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12368) Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract

2015-08-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724798#comment-14724798
 ] 

Hudson commented on HADOOP-12368:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2255 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2255/])
HADOOP-12368. Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract. 
(wang: rev 7ad3556ed38560585579172aa68356f37b2288c8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java


> Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract
> --
>
> Key: HADOOP-12368
> URL: https://issues.apache.org/jira/browse/HADOOP-12368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: hdfs-8997.001.patch
>
>
> These are test base classes that need to be subclassed to run, can mark as 
> abstract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12361) reorganize repo layout for break from Hadoop code base

2015-08-31 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12361:
-
Status: Patch Available  (was: Open)

> reorganize repo layout for break from Hadoop code base
> --
>
> Key: HADOOP-12361
> URL: https://issues.apache.org/jira/browse/HADOOP-12361
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-12361.HADOOP-12111.1.sh
>
>
> Reorganize current top level repo to include our starting modules (and only 
> those modules), so that it's easier to start bringing in new contributors:
> * shelldoc
> * releasedocmaker
> * interface audience
> * test-patch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12361) reorganize repo layout for break from Hadoop code base

2015-08-31 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-12361:
-
Attachment: HADOOP-12361.HADOOP-12111.1.sh

-01

* create 4 components: shelldoc, audience-annotations, release-doc-maker, and 
test-patch
* move implementations to match
* move dev-support/docs/README.md to top level
* move dev-support/docs/releasedocmaker.md to release-doc-maker/README.md
* delete everything else

Since this just does file moves and deletes (to ease file tracking), additional 
patches will need to address:

* audience annotations needs to be updated for the move out of Hadoop
* we refer to both "test-patch" and "Yetus Precommit" in our docs/files; pick 
one
* the top-level readme doesn't provide info on all the components
* the top-level LICENSE and NOTICE files need to be updated

> reorganize repo layout for break from Hadoop code base
> --
>
> Key: HADOOP-12361
> URL: https://issues.apache.org/jira/browse/HADOOP-12361
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Attachments: HADOOP-12361.HADOOP-12111.1.sh
>
>
> Reorganize current top level repo to include our starting modules (and only 
> those modules), so that it's easier to start bringing in new contributors:
> * shelldoc
> * releasedocmaker
> * interface audience
> * test-patch



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12368) Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract

2015-08-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724627#comment-14724627
 ] 

Hudson commented on HADOOP-12368:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1059 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1059/])
HADOOP-12368. Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract. 
(wang: rev 7ad3556ed38560585579172aa68356f37b2288c8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java


> Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract
> --
>
> Key: HADOOP-12368
> URL: https://issues.apache.org/jira/browse/HADOOP-12368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: hdfs-8997.001.patch
>
>
> These are test base classes that need to be subclassed to run, can mark as 
> abstract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-08-31 Thread Jim VanOosten (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724735#comment-14724735
 ] 

Jim VanOosten commented on HADOOP-10420:


Gil, HADOOP-10420-009.patch has the fix that you outlined above.

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420-006.patch, 
> HADOOP-10420-007.patch, HADOOP-10420-008.patch, HADOOP-10420-009.patch, 
> HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12369) Point hadoop-project/pom.xml java.security.krb5.conf within target folder

2015-08-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12369:
-
Status: Patch Available  (was: Open)

> Point hadoop-project/pom.xml java.security.krb5.conf within target folder
> -
>
> Key: HADOOP-12369
> URL: https://issues.apache.org/jira/browse/HADOOP-12369
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hadoop-12369.001.patch
>
>
> This is used in the unit test environment; pointing within the src tree is 
> naughty. The fix is simply to update it to point within the target directory 
> instead:
> {noformat}
> -    <java.security.krb5.conf>${basedir}/src/test/resources/krb5.conf</java.security.krb5.conf>
> +    <java.security.krb5.conf>${test.cache.data}/krb5.conf</java.security.krb5.conf>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2015-08-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal reopened HADOOP-10571:

  Assignee: (was: Arpit Agarwal)

Thanks Steve. The original patch is quite out of date. I'll post an updated 
patch if I get some time. Leaving unassigned for now.

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10571.01.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.
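
For illustration only (the logger and exception names here are made up, not 
taken from the patch), the difference with a commons-logging {{Log}} looks 
roughly like this:

{code}
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LogThrowableExample {
  private static final Log LOG = LogFactory.getLog(LogThrowableExample.class);

  static void report(Exception e) {
    // Flattens the exception to text: the stack trace is lost.
    LOG.warn("Operation failed: " + e.getMessage());
    // The (Object, Throwable) overload keeps the full stack trace in the log.
    LOG.warn("Operation failed", e);
  }

  public static void main(String[] args) {
    report(new java.io.IOException("disk full"));
  }
}
{code}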



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2015-08-31 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-10571:
---
Labels:   (was: BB2015-05-TBR)

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
> Attachments: HADOOP-10571.01.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12368) Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract

2015-08-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724784#comment-14724784
 ] 

Hudson commented on HADOOP-12368:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #316 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/316/])
HADOOP-12368. Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract. 
(wang: rev 7ad3556ed38560585579172aa68356f37b2288c8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java


> Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract
> --
>
> Key: HADOOP-12368
> URL: https://issues.apache.org/jira/browse/HADOOP-12368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: hdfs-8997.001.patch
>
>
> These are test base classes that need to be subclassed to run, can mark as 
> abstract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12369) Point hadoop-project/pom.xml java.security.krb5.conf within target folder

2015-08-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12369:
-
Attachment: hadoop-12369.001.patch

Patch attached. I found the same issue in the yarn registry pom (it duplicates 
quite a bit of hadoop-project's pom for some reason) and fixed it there too.

Ran a few assorted tests to get a feel; they all passed:

{noformat}
-> % mvn test 
-Dtest=TestBlockToken,TestCredentials,TestAMRMTokens,TestRPC,TestWebHdfsUrl
{noformat}

> Point hadoop-project/pom.xml java.security.krb5.conf within target folder
> -
>
> Key: HADOOP-12369
> URL: https://issues.apache.org/jira/browse/HADOOP-12369
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: hadoop-12369.001.patch
>
>
> This is used in the unit test environment; pointing within the src tree is 
> naughty. The fix is simply to update it to point within the target directory 
> instead:
> {noformat}
> -    <java.security.krb5.conf>${basedir}/src/test/resources/krb5.conf</java.security.krb5.conf>
> +    <java.security.krb5.conf>${test.cache.data}/krb5.conf</java.security.krb5.conf>
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12368) Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract

2015-08-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724741#comment-14724741
 ] 

Hudson commented on HADOOP-12368:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #325 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/325/])
HADOOP-12368. Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract. 
(wang: rev 7ad3556ed38560585579172aa68356f37b2288c8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java


> Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract
> --
>
> Key: HADOOP-12368
> URL: https://issues.apache.org/jira/browse/HADOOP-12368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: hdfs-8997.001.patch
>
>
> These are test base classes that need to be subclassed to run, can mark as 
> abstract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12368) Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract

2015-08-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724745#comment-14724745
 ] 

Hudson commented on HADOOP-12368:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #332 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/332/])
HADOOP-12368. Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract. 
(wang: rev 7ad3556ed38560585579172aa68356f37b2288c8)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract
> --
>
> Key: HADOOP-12368
> URL: https://issues.apache.org/jira/browse/HADOOP-12368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: hdfs-8997.001.patch
>
>
> These are test base classes that need to be subclassed to run, can mark as 
> abstract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12368) Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract

2015-08-31 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724771#comment-14724771
 ] 

Hudson commented on HADOOP-12368:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2274 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2274/])
HADOOP-12368. Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract. 
(wang: rev 7ad3556ed38560585579172aa68356f37b2288c8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFsBaseTest.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/viewfs/ViewFileSystemBaseTest.java


> Mark ViewFileSystemBaseTest and ViewFsBaseTest as abstract
> --
>
> Key: HADOOP-12368
> URL: https://issues.apache.org/jira/browse/HADOOP-12368
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: hdfs-8997.001.patch
>
>
> These are test base classes that need to be subclassed to run, can mark as 
> abstract.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12367) Move TestFileUtil's test resources to resources folder

2015-08-31 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-12367:
-
Attachment: HADOOP-12367.002.patch

New patch attached; I forgot to update the rat exclude. TestCodec also passed 
for me locally, but Jenkins is still hung archiving the test results.

> Move TestFileUtil's test resources to resources folder
> --
>
> Key: HADOOP-12367
> URL: https://issues.apache.org/jira/browse/HADOOP-12367
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-12367.001.patch, HADOOP-12367.002.patch
>
>
> Little cleanup. Right now we do an antrun step to copy the tar and tgz from 
> the source folder to target folder. We can skip this by just putting it in 
> the resources folder like all the other test resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12369) Point hadoop-project/pom.xml java.security.krb5.conf within target folder

2015-08-31 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-12369:


 Summary: Point hadoop-project/pom.xml java.security.krb5.conf 
within target folder
 Key: HADOOP-12369
 URL: https://issues.apache.org/jira/browse/HADOOP-12369
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.1
Reporter: Andrew Wang
Assignee: Andrew Wang
Priority: Minor


This is used in the unit test environment; pointing within the src tree is 
naughty. The fix is simply to update it to point within the target directory 
instead:

{noformat}
-    <java.security.krb5.conf>${basedir}/src/test/resources/krb5.conf</java.security.krb5.conf>
+    <java.security.krb5.conf>${test.cache.data}/krb5.conf</java.security.krb5.conf>
{noformat}
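
For context, a sketch of how such a property is typically handed to the test 
JVMs via surefire (illustrative only; this is not the exact hadoop-project/pom.xml 
content):

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <systemPropertyVariables>
      <!-- Illustrative only: point the forked test JVM at the target-relative krb5.conf. -->
      <java.security.krb5.conf>${test.cache.data}/krb5.conf</java.security.krb5.conf>
    </systemPropertyVariables>
  </configuration>
</plugin>
{code}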



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12367) Move TestFileUtil's test resources to resources folder

2015-08-31 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14724826#comment-14724826
 ] 

Hadoop QA commented on HADOOP-12367:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  15m  6s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 38s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 47s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 23s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | common tests |  23m  5s | Tests passed in 
hadoop-common. |
| | |  58m  2s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12753456/HADOOP-12367.002.patch 
|
| Optional Tests | javadoc javac unit |
| git revision | trunk / 7ad3556 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7568/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7568/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7568/console |


This message was automatically generated.

> Move TestFileUtil's test resources to resources folder
> --
>
> Key: HADOOP-12367
> URL: https://issues.apache.org/jira/browse/HADOOP-12367
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.7.1
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>Priority: Minor
> Attachments: HADOOP-12367.001.patch, HADOOP-12367.002.patch
>
>
> Little cleanup. Right now we do an antrun step to copy the tar and tgz from 
> the source folder to target folder. We can skip this by just putting it in 
> the resources folder like all the other test resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12359) hdfs dfs -getmerge docs are wrong

2015-08-31 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14723425#comment-14723425
 ] 

Brahma Reddy Battula commented on HADOOP-12359:
---

[~jagadesh.kiran] thanks for the updated patch. It looks good to me. +1 (non-binding)

> hdfs dfs -getmerge docs are wrong
> -
>
> Key: HADOOP-12359
> URL: https://issues.apache.org/jira/browse/HADOOP-12359
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 1.2.1, 0.23.11, 2.4.1, 2.5.2, 2.6.0, 2.7.0, 2.7.1
>Reporter: Daniel Templeton
>Assignee: Jagadesh Kiran N
> Attachments: HADOOP-12359-01.patch, HADOOP-12359-02.patch, 
> HADOOP-12359-03.patch
>
>
> The docs at:
> 
> http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-common/FileSystemShell.html#getmerge
> say that addnl is a valid parameter, but as of HADOOP-7348, it's been 
> replaced with -nl.  The docs should be updated.
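
For reference, the post-HADOOP-7348 usage uses the -nl flag (the paths below 
are just examples):

{noformat}
hdfs dfs -getmerge -nl /user/hadoop/logs/ ./merged.txt
{noformat}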



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)