[jira] [Updated] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12767:
-
Status: Patch Available  (was: Open)

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch
>
>
> Various SSL security fixes are needed. See: CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.
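For context, a hedged sketch of what the upgrade buys at the API level: several of the CVEs listed above concern TLS hostname verification, which recent httpclient 4.x releases enforce by default and which can also be requested explicitly. The class below is illustrative only (the JIRA change itself is a dependency version bump), but all calls are real httpclient 4.4/4.5 APIs:

{code:java}
import javax.net.ssl.SSLContext;
import org.apache.http.conn.ssl.DefaultHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContexts;

public class StrictTlsSketch {
  // Build a client that verifies server hostnames against the certificate.
  // Hostname-verification bypasses are the class of problem behind
  // CVE-2012-6153 and CVE-2014-3577; httpclient 4.4+ uses
  // DefaultHostnameVerifier by default, so this just makes it explicit.
  static CloseableHttpClient newClient() {
    SSLContext ctx = SSLContexts.createSystemDefault();
    SSLConnectionSocketFactory sslsf =
        new SSLConnectionSocketFactory(ctx, new DefaultHostnameVerifier());
    return HttpClients.custom().setSSLSocketFactory(sslsf).build();
  }
}
{code}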



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang reassigned HADOOP-12767:


Assignee: Wei-Chiu Chuang

> update apache httpclient version to the latest 4.5 for security
> ---
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Artem Aliev
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12767.001.patch
>
>
> Various SSL security fixes are needed. See: CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12710) Remove dependency on commons-httpclient for TestHttpServerLogs

2016-02-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12710:
-
Status: Patch Available  (was: Open)

Forgot to submit this patch :)

> Remove dependency on commons-httpclient for TestHttpServerLogs
> --
>
> Key: HADOOP-12710
> URL: https://issues.apache.org/jira/browse/HADOOP-12710
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12710.001.patch
>
>
> Commons-httpclient has long been EOL. Critically, it has several security 
> vulnerabilities, e.g. CVE-2012-5783 
> (http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783).
> I saw a recent commit that depends on commons-httpclient for 
> TestHttpServerLogs (HADOOP-12625). This JIRA intends to replace the 
> dependency with httpclient APIs.
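A rough sketch of the kind of migration involved (illustrative, not the attached patch): the EOL commons-httpclient 3.x calls shown in the comments and their httpclient 4.x equivalents.

{code:java}
import java.io.IOException;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class MigrationSketch {
  static String get(String url) throws IOException {
    // commons-httpclient 3.x (EOL) equivalent:
    //   HttpClient client = new HttpClient();
    //   GetMethod get = new GetMethod(url);
    //   client.executeMethod(get);
    //   return get.getResponseBodyAsString();
    try (CloseableHttpClient client = HttpClients.createDefault();
         CloseableHttpResponse resp = client.execute(new HttpGet(url))) {
      return EntityUtils.toString(resp.getEntity());
    }
  }
}
{code}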



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2016-02-08 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137124#comment-15137124
 ] 

Greg Senia commented on HADOOP-9969:


This also affects IBM JDK8...

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta
> Environment: IBM JDK7
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, the RPC client and SASL client were changed to 
> respect the auth method advertised by the server, instead of blindly 
> attempting the one configured at the client side. However, when the TGT has 
> expired, an exception is thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType), and at that point authMethod still holds its initial value, 
> SIMPLE, and never gets a chance to be updated with the method requested by 
> the server, so Kerberos relogin will not happen.
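For reference, the usual relogin pattern that, per the report, never gets triggered here (a sketch using public UserGroupInformation APIs; the attached patch is authoritative for the actual fix):

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class ReloginSketch {
  // Re-acquire a TGT from the keytab if the current one is near expiry.
  // The bug above means the RPC layer never reaches code like this,
  // because authMethod is still SIMPLE when the SASL setup fails.
  static void reloginIfNeeded() throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    if (ugi.hasKerberosCredentials()) {
      ugi.checkTGTAndReloginFromKeytab();
    }
  }
}
{code}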



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12710) Remove dependency on commons-httpclient for TestHttpServerLogs

2016-02-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137165#comment-15137165
 ] 

Wei-Chiu Chuang commented on HADOOP-12710:
--

Hi [~wheat9], thanks for the comments. However, at this point 
commons-httpclient is still being used elsewhere in the Hadoop codebase (see 
HADOOP-12552).

> Remove dependency on commons-httpclient for TestHttpServerLogs
> --
>
> Key: HADOOP-12710
> URL: https://issues.apache.org/jira/browse/HADOOP-12710
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12710.001.patch
>
>
> Commons-httpclient has long been EOL. Critically, it has several security 
> vulnerabilities, e.g. CVE-2012-5783 
> (http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783).
> I saw a recent commit that depends on commons-httpclient for 
> TestHttpServerLogs (HADOOP-12625). This JIRA intends to replace the 
> dependency with httpclient APIs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12767) update apache httpclient version to the latest 4.5 for security

2016-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137177#comment-15137177
 ] 

Hadoop QA commented on HADOOP-12767:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 6s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 39s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12786619/HADOOP-12767.001.patch
 |
| JIRA Issue | HADOOP-12767 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux ae4896b1abf5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d37eb82 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15136726#comment-15136726
 ] 

Hudson commented on HADOOP-12749:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9258 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9258/])
HADOOP-12749. Create a threadpoolexecutor that overrides afterExecute to 
(vvasudev: rev f3bbe0bd020b9efe05d5918ad042d9d4d4b1ca57)
* hadoop-common-project/hadoop-common/CHANGES.txt
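For readers without the patch open, a minimal sketch of the technique the issue title names (class name and log message here are illustrative, not the committed code; slf4j is assumed for logging):

{code:java}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingThreadPoolExecutor extends ThreadPoolExecutor {
  private static final Logger LOG =
      LoggerFactory.getLogger(LoggingThreadPoolExecutor.class);

  public LoggingThreadPoolExecutor(int corePoolSize, int maximumPoolSize,
      long keepAliveTime, TimeUnit unit, BlockingQueue<Runnable> workQueue) {
    super(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue);
  }

  @Override
  protected void afterExecute(Runnable r, Throwable t) {
    super.afterExecute(r, t);
    // Tasks submitted via submit() capture exceptions in the returned
    // Future instead of passing them as t, so unwrap them here.
    if (t == null && r instanceof Future<?> && ((Future<?>) r).isDone()) {
      try {
        ((Future<?>) r).get();
      } catch (ExecutionException e) {
        t = e.getCause();
      } catch (CancellationException e) {
        // a cancelled task is not an error
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
    if (t != null) {
      LOG.error("Task failed with an uncaught exception", t);
    }
  }
}
{code}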


> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread madhumita chakraborty (JIRA)
madhumita chakraborty created HADOOP-12780:
--

 Summary: During atomic rename handle crash when one directory has 
been renamed but not file under it.
 Key: HADOOP-12780
 URL: https://issues.apache.org/jira/browse/HADOOP-12780
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/azure
Affects Versions: 2.8.0
Reporter: madhumita chakraborty
Assignee: madhumita chakraborty
Priority: Critical


During preparation for the atomic folder rename process we record the proposed 
change to a metadata file (-renamePending.json).
Say we are renaming parent/folderToRename to parent/renamedFolder.
folderToRename has an inner folder innerFolder, and innerFolder has a file 
innerFile.
The content of the -renamePending.json file will be
{ OldFolderName: "/parent/folderToRename", NewFolderName: 
"parent/renamedFolder", FileList: [ "innerFolder", "innerFolder/innerFile" ] }
We first rename all files within the source directory, and rename the source 
directory itself as the last step.
The steps are:
1.  First rename innerFolder,
2.  then rename innerFolder/innerFile,
3.  then rename the source directory folderToRename.
Say the process crashes after step 1.
So innerFolder has been renamed.
Note that Azure storage does not natively support folders, so when a directory 
is created by the mkdir command we create an empty placeholder blob with 
metadata for the directory.
Thus after step 1, the empty blob corresponding to the directory innerFolder 
has been renamed.
When the process comes back up, the redo path goes through the 
-renamePending.json file and tries to redo the renames.
For each file in the file list of the renamePending file it checks whether the 
source file exists, and if so renames it. When it gets to innerFolder, it 
calls filesystem.exists(innerFolder), which returns true because a file under 
that folder exists even though the folder's empty placeholder blob no longer 
does. So it tries to rename the folder, and since the empty blob has already 
been deleted, this fails with the exception "source blob does not exist".
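Condensed into code, the redo loop described above looks roughly like this (method and variable names are illustrative, not taken from the patch), which makes the misfiring exists() check easy to see:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class RenameRedoSketch {
  static void redo(FileSystem fs, Path oldFolder, Path newFolder,
      String[] fileList) throws IOException {
    for (String name : fileList) {
      Path src = new Path(oldFolder, name);
      // exists() answers "is there anything at or under this path", so it
      // still returns true for innerFolder after the folder's placeholder
      // blob has been renamed away -- the rename below then fails with
      // "source blob does not exist".
      if (fs.exists(src)) {
        fs.rename(src, new Path(newFolder, name));
      }
    }
  }
}
{code}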



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12780:
---
Description: 
During preparation for the atomic folder rename process we record the proposed 
change to a metadata file (-renamePending.json).
Say we are renaming parent/folderToRename to parent/renamedFolder.
folderToRename has an inner folder innerFolder, and innerFolder has a file 
innerFile.
The content of the -renamePending.json file will be
{   OldFolderName: "parent/folderToRename",
 NewFolderName: "parent/renamedFolder", 
 FileList: [ "innerFolder", "innerFolder/innerFile" ]
 }

We first rename all files within the source directory, and rename the source 
directory itself as the last step.
The steps are:
1.  First rename innerFolder,
2.  then rename innerFolder/innerFile,
3.  then rename the source directory folderToRename.
Say the process crashes after step 1.
So innerFolder has been renamed.
Note that Azure storage does not natively support folders, so when a directory 
is created by the mkdir command we create an empty placeholder blob with 
metadata for the directory.
Thus after step 1, the empty blob corresponding to the directory innerFolder 
has been renamed.
When the process comes back up, the redo path goes through the 
-renamePending.json file and tries to redo the renames.
For each file in the file list of the renamePending file it checks whether the 
source file exists, and if so renames it. When it gets to innerFolder, it 
calls filesystem.exists(innerFolder), which returns true because a file under 
that folder exists even though the folder's empty placeholder blob no longer 
does. So it tries to rename the folder, and since the empty blob has already 
been deleted, this fails with the exception "source blob does not exist".

  was:
During preparation for the atomic folder rename process we record the proposed 
change to a metadata file (-renamePending.json).
Say we are renaming parent/folderToRename to parent/renamedFolder.
folderToRename has an inner folder innerFolder, and innerFolder has a file 
innerFile.
The content of the -renamePending.json file will be
{ OldFolderName: "/parent/folderToRename", NewFolderName: 
"parent/renamedFolder", FileList: [ "innerFolder", "innerFolder/innerFile" ] }
We first rename all files within the source directory, and rename the source 
directory itself as the last step.
The steps are:
1.  First rename innerFolder,
2.  then rename innerFolder/innerFile,
3.  then rename the source directory folderToRename.
Say the process crashes after step 1.
So innerFolder has been renamed.
Note that Azure storage does not natively support folders, so when a directory 
is created by the mkdir command we create an empty placeholder blob with 
metadata for the directory.
Thus after step 1, the empty blob corresponding to the directory innerFolder 
has been renamed.
When the process comes back up, the redo path goes through the 
-renamePending.json file and tries to redo the renames.
For each file in the file list of the renamePending file it checks whether the 
source file exists, and if so renames it. When it gets to innerFolder, it 
calls filesystem.exists(innerFolder), which returns true because a file under 
that folder exists even though the folder's empty placeholder blob no longer 
does. So it tries to rename the folder, and since the empty blob has already 
been deleted, this fails with the exception "source blob does not exist".


> During atomic rename handle crash when one directory has been renamed but not 
> file under it.
> 
>
> Key: HADOOP-12780
> URL: https://issues.apache.org/jira/browse/HADOOP-12780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
>
> During preparation for the atomic folder rename process we record the 
> proposed change to a metadata file (-renamePending.json).
> Say we are renaming parent/folderToRename to parent/renamedFolder.
> folderToRename has an inner folder innerFolder, and innerFolder has a file 
> innerFile.
> The content of the -renamePending.json file will be
> {   OldFolderName: "parent/folderToRename",
>  NewFolderName: "parent/renamedFolder", 
>  FileList: [ "innerFolder", "innerFolder/innerFile" ]
>  }
> We first rename all files within the source directory, and rename the source 
> directory itself as the last step.
> The steps are:
> 1.  First rename innerFolder,
> 2.  then rename innerFolder/innerFile,
> 3.  then rename the source directory folderToRename.
> Say the process crashes after 

[jira] [Updated] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12780:
---
Description: 
During preparation for the atomic folder rename process we record the proposed 
change to a metadata file (-renamePending.json).
Say we are renaming parent/folderToRename to parent/renamedFolder.
folderToRename has an inner folder innerFolder, and innerFolder has a file 
innerFile.
The content of the -renamePending.json file will be
{ OldFolderName: "parent/folderToRename", NewFolderName: 
"parent/renamedFolder", FileList: [ "innerFolder", "innerFolder/innerFile" ] }
We first rename all files within the source directory, and rename the source 
directory itself as the last step.
The steps are:
1.  First rename innerFolder,
2.  then rename innerFolder/innerFile,
3.  then rename the source directory folderToRename.
Say the process crashes after step 1.
So innerFolder has been renamed.
Note that Azure storage does not natively support folders, so when a directory 
is created by the mkdir command we create an empty placeholder blob with 
metadata for the directory.
Thus after step 1, the empty blob corresponding to the directory innerFolder 
has been renamed.
When the process comes back up, the redo path goes through the 
-renamePending.json file and tries to redo the renames.
For each file in the file list of the renamePending file it checks whether the 
source file exists, and if so renames it. When it gets to innerFolder, it 
calls filesystem.exists(innerFolder), which returns true because a file under 
that folder exists even though the folder's empty placeholder blob no longer 
does. So it tries to rename the folder, and since the empty blob has already 
been deleted, this fails with the exception "source blob does not exist".

  was:
During preparation for the atomic folder rename process we record the proposed 
change to a metadata file (-renamePending.json).
Say we are renaming parent/folderToRename to parent/renamedFolder.
folderToRename has an inner folder innerFolder, and innerFolder has a file 
innerFile.
The content of the -renamePending.json file will be
{ OldFolderName: "/parent/folderToRename", NewFolderName: 
"parent/renamedFolder", FileList: [ "innerFolder", "innerFolder/innerFile" ] }
We first rename all files within the source directory, and rename the source 
directory itself as the last step.
The steps are:
1.  First rename innerFolder,
2.  then rename innerFolder/innerFile,
3.  then rename the source directory folderToRename.
Say the process crashes after step 1.
So innerFolder has been renamed.
Note that Azure storage does not natively support folders, so when a directory 
is created by the mkdir command we create an empty placeholder blob with 
metadata for the directory.
Thus after step 1, the empty blob corresponding to the directory innerFolder 
has been renamed.
When the process comes back up, the redo path goes through the 
-renamePending.json file and tries to redo the renames.
For each file in the file list of the renamePending file it checks whether the 
source file exists, and if so renames it. When it gets to innerFolder, it 
calls filesystem.exists(innerFolder), which returns true because a file under 
that folder exists even though the folder's empty placeholder blob no longer 
does. So it tries to rename the folder, and since the empty blob has already 
been deleted, this fails with the exception "source blob does not exist".


> During atomic rename handle crash when one directory has been renamed but not 
> file under it.
> 
>
> Key: HADOOP-12780
> URL: https://issues.apache.org/jira/browse/HADOOP-12780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
>
> During preparation for the atomic folder rename process we record the 
> proposed change to a metadata file (-renamePending.json).
> Say we are renaming parent/folderToRename to parent/renamedFolder.
> folderToRename has an inner folder innerFolder, and innerFolder has a file 
> innerFile.
> The content of the -renamePending.json file will be
> { OldFolderName: "parent/folderToRename", NewFolderName: 
> "parent/renamedFolder", FileList: [ "innerFolder", "innerFolder/innerFile" ] }
> We first rename all files within the source directory, and rename the source 
> directory itself as the last step.
> The steps are:
> 1.  First rename innerFolder,
> 2.  then rename innerFolder/innerFile,
> 3.  then rename the source directory folderToRename.
> Say the process crashes after step 1.
> So innerFolder has 

[jira] [Commented] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-02-08 Thread Sidharta Seethana (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15136730#comment-15136730
 ] 

Sidharta Seethana commented on HADOOP-12749:


Thanks, [~vvasudev]. 

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-02-08 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-12749:
---
   Resolution: Fixed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2. Thanks [~sidharta-s]!

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-08 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138349#comment-15138349
 ] 

Vishwajeet Dusane commented on HADOOP-12666:


[~ajfabbri] Thanks for the comments. I agree with them.
1. I removed the super() invocation.
2. PrivateAzureDataLakeFileSystem is public so that AdlFileSystem can extend 
it from a different namespace. We are having a similar discussion on the 
namespace, as you suggested. I will update the namespaces as well.
3. Should I update 
hadoop-common-project/hadoop-common/src/main/resources/core-default.xml? These 
configurations are specific to the ADL file system only.
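On point 3, the question is where the filesystem's configuration keys should be documented. For illustration only, the wiring a client would use looks like the sketch below; the property name, class name, and URI here are assumptions, not taken from the patch:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class AdlWiringSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical key and class for illustration; the patch defines the
    // real ones.
    conf.set("fs.adl.impl", "org.apache.hadoop.fs.adl.AdlFileSystem");
    FileSystem fs = FileSystem.get(
        URI.create("adl://example.azuredatalakestore.net/"), conf);
    System.out.println(fs.getUri());
  }
}
{code}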

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc. to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12780:
---
Status: Patch Available  (was: Open)

> During atomic rename handle crash when one directory has been renamed but not 
> file under it.
> 
>
> Key: HADOOP-12780
> URL: https://issues.apache.org/jira/browse/HADOOP-12780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12780.001.patch
>
>
> During preparation for the atomic folder rename process we record the 
> proposed change to a metadata file (-renamePending.json).
> Say we are renaming parent/folderToRename to parent/renamedFolder.
> folderToRename has an inner folder innerFolder, and innerFolder has a file 
> innerFile.
> The content of the -renamePending.json file will be
> {   OldFolderName: "parent/folderToRename",
>  NewFolderName: "parent/renamedFolder", 
>  FileList: [ "innerFolder", "innerFolder/innerFile" ]
>  }
> We first rename all files within the source directory, and rename the source 
> directory itself as the last step.
> The steps are:
> 1.  First rename innerFolder,
> 2.  then rename innerFolder/innerFile,
> 3.  then rename the source directory folderToRename.
> Say the process crashes after step 1.
> So innerFolder has been renamed.
> Note that Azure storage does not natively support folders, so when a 
> directory is created by the mkdir command we create an empty placeholder 
> blob with metadata for the directory.
> Thus after step 1, the empty blob corresponding to the directory innerFolder 
> has been renamed.
> When the process comes back up, the redo path goes through the 
> -renamePending.json file and tries to redo the renames.
> For each file in the file list of the renamePending file it checks whether 
> the source file exists, and if so renames it. When it gets to innerFolder, 
> it calls filesystem.exists(innerFolder), which returns true because a file 
> under that folder exists even though the folder's empty placeholder blob no 
> longer does. So it tries to rename the folder, and since the empty blob has 
> already been deleted, this fails with the exception "source blob does not 
> exist".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12780:
---
Status: Open  (was: Patch Available)

> During atomic rename handle crash when one directory has been renamed but not 
> file under it.
> 
>
> Key: HADOOP-12780
> URL: https://issues.apache.org/jira/browse/HADOOP-12780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12780.001.patch
>
>
> During preparation for the atomic folder rename process we record the 
> proposed change to a metadata file (-renamePending.json).
> Say we are renaming parent/folderToRename to parent/renamedFolder.
> folderToRename has an inner folder innerFolder, and innerFolder has a file 
> innerFile.
> The content of the -renamePending.json file will be
> {   OldFolderName: "parent/folderToRename",
>  NewFolderName: "parent/renamedFolder", 
>  FileList: [ "innerFolder", "innerFolder/innerFile" ]
>  }
> We first rename all files within the source directory, and rename the source 
> directory itself as the last step.
> The steps are:
> 1.  First rename innerFolder,
> 2.  then rename innerFolder/innerFile,
> 3.  then rename the source directory folderToRename.
> Say the process crashes after step 1.
> So innerFolder has been renamed.
> Note that Azure storage does not natively support folders, so when a 
> directory is created by the mkdir command we create an empty placeholder 
> blob with metadata for the directory.
> Thus after step 1, the empty blob corresponding to the directory innerFolder 
> has been renamed.
> When the process comes back up, the redo path goes through the 
> -renamePending.json file and tries to redo the renames.
> For each file in the file list of the renamePending file it checks whether 
> the source file exists, and if so renames it. When it gets to innerFolder, 
> it calls filesystem.exists(innerFolder), which returns true because a file 
> under that folder exists even though the folder's empty placeholder blob no 
> longer does. So it tries to rename the folder, and since the empty blob has 
> already been deleted, this fails with the exception "source blob does not 
> exist".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12780:
---
Attachment: (was: HADOOP-12780.001.patch)

> During atomic rename handle crash when one directory has been renamed but not 
> file under it.
> 
>
> Key: HADOOP-12780
> URL: https://issues.apache.org/jira/browse/HADOOP-12780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12780.001.patch
>
>
> During preparation for the atomic folder rename process we record the 
> proposed change to a metadata file (-renamePending.json).
> Say we are renaming parent/folderToRename to parent/renamedFolder.
> folderToRename has an inner folder innerFolder, and innerFolder has a file 
> innerFile.
> The content of the -renamePending.json file will be
> {   OldFolderName: "parent/folderToRename",
>  NewFolderName: "parent/renamedFolder", 
>  FileList: [ "innerFolder", "innerFolder/innerFile" ]
>  }
> We first rename all files within the source directory, and rename the source 
> directory itself as the last step.
> The steps are:
> 1.  First rename innerFolder,
> 2.  then rename innerFolder/innerFile,
> 3.  then rename the source directory folderToRename.
> Say the process crashes after step 1.
> So innerFolder has been renamed.
> Note that Azure storage does not natively support folders, so when a 
> directory is created by the mkdir command we create an empty placeholder 
> blob with metadata for the directory.
> Thus after step 1, the empty blob corresponding to the directory innerFolder 
> has been renamed.
> When the process comes back up, the redo path goes through the 
> -renamePending.json file and tries to redo the renames.
> For each file in the file list of the renamePending file it checks whether 
> the source file exists, and if so renames it. When it gets to innerFolder, 
> it calls filesystem.exists(innerFolder), which returns true because a file 
> under that folder exists even though the folder's empty placeholder blob no 
> longer does. So it tries to rename the folder, and since the empty blob has 
> already been deleted, this fails with the exception "source blob does not 
> exist".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12780:
---
Attachment: HADOOP-12780.001.patch

> During atomic rename handle crash when one directory has been renamed but not 
> file under it.
> 
>
> Key: HADOOP-12780
> URL: https://issues.apache.org/jira/browse/HADOOP-12780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12780.001.patch
>
>
> During preparation for the atomic folder rename process we record the 
> proposed change to a metadata file (-renamePending.json).
> Say we are renaming parent/folderToRename to parent/renamedFolder.
> folderToRename has an inner folder innerFolder, and innerFolder has a file 
> innerFile.
> The content of the -renamePending.json file will be
> {   OldFolderName: "parent/folderToRename",
>  NewFolderName: "parent/renamedFolder", 
>  FileList: [ "innerFolder", "innerFolder/innerFile" ]
>  }
> We first rename all files within the source directory, and rename the source 
> directory itself as the last step.
> The steps are:
> 1.  First rename innerFolder,
> 2.  then rename innerFolder/innerFile,
> 3.  then rename the source directory folderToRename.
> Say the process crashes after step 1.
> So innerFolder has been renamed.
> Note that Azure storage does not natively support folders, so when a 
> directory is created by the mkdir command we create an empty placeholder 
> blob with metadata for the directory.
> Thus after step 1, the empty blob corresponding to the directory innerFolder 
> has been renamed.
> When the process comes back up, the redo path goes through the 
> -renamePending.json file and tries to redo the renames.
> For each file in the file list of the renamePending file it checks whether 
> the source file exists, and if so renames it. When it gets to innerFolder, 
> it calls filesystem.exists(innerFolder), which returns true because a file 
> under that folder exists even though the folder's empty placeholder blob no 
> longer does. So it tries to rename the folder, and since the empty blob has 
> already been deleted, this fails with the exception "source blob does not 
> exist".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138372#comment-15138372
 ] 

Hadoop QA commented on HADOOP-12780:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s 
{color} | {color:red} hadoop-tools/hadoop-azure: patch generated 1 new + 45 
unchanged - 0 fixed = 46 total (was 45) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 4s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12786990/HADOOP-12780.001.patch
 |
| JIRA Issue | HADOOP-12780 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fe906c2f6b36 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / acac729 |
| Default Java 

[jira] [Updated] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12780:
---
Status: Patch Available  (was: Open)

> During atomic rename handle crash when one directory has been renamed but not 
> file under it.
> 
>
> Key: HADOOP-12780
> URL: https://issues.apache.org/jira/browse/HADOOP-12780
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12780.001.patch
>
>
> During preparation for the atomic folder rename process we record the 
> proposed change to a metadata file (-renamePending.json).
> Say we are renaming parent/folderToRename to parent/renamedFolder.
> folderToRename has an inner folder innerFolder, and innerFolder has a file 
> innerFile.
> The content of the -renamePending.json file will be
> {   OldFolderName: "parent/folderToRename",
>  NewFolderName: "parent/renamedFolder", 
>  FileList: [ "innerFolder", "innerFolder/innerFile" ]
>  }
> We first rename all files within the source directory, and rename the source 
> directory itself as the last step.
> The steps are:
> 1.  First rename innerFolder,
> 2.  then rename innerFolder/innerFile,
> 3.  then rename the source directory folderToRename.
> Say the process crashes after step 1.
> So innerFolder has been renamed.
> Note that Azure storage does not natively support folders, so when a 
> directory is created by the mkdir command we create an empty placeholder 
> blob with metadata for the directory.
> Thus after step 1, the empty blob corresponding to the directory innerFolder 
> has been renamed.
> When the process comes back up, the redo path goes through the 
> -renamePending.json file and tries to redo the renames.
> For each file in the file list of the renamePending file it checks whether 
> the source file exists, and if so renames it. When it gets to innerFolder, 
> it calls filesystem.exists(innerFolder), which returns true because a file 
> under that folder exists even though the folder's empty placeholder blob no 
> longer does. So it tries to rename the folder, and since the empty blob has 
> already been deleted, this fails with the exception "source blob does not 
> exist".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12769) Hadoop maven plugin for msbuild compilations

2016-02-08 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138370#comment-15138370
 ] 

Vinayakumar B commented on HADOOP-12769:


Thanks for looking, [~cnauroth] and [~cmccabe].
I agree that cmake would be better.
However, I believe making the cmake maven plugin support Windows might be 
required.

> Hadoop maven plugin for msbuild compilations
> 
>
> Key: HADOOP-12769
> URL: https://issues.apache.org/jira/browse/HADOOP-12769
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-12769-01.patch, HADOOP-12769-02.patch
>
>
> Currently, all Windows native library generation using msbuild happens on 
> every invocation of 'mvn install'.
> The idea is to turn this into a plugin and make it incremental,
> i.e. rebuild only when any of the sources have changed.
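A sketch of the incremental check such a plugin could perform (illustrative only; names are not taken from the attached patches): rebuild only when some source file is newer than the built artifact.

{code:java}
import java.io.File;

public final class RebuildCheck {
  // True if the artifact is missing or any file under sourceDir is newer.
  static boolean needsRebuild(File artifact, File sourceDir) {
    return !artifact.exists()
        || newestTimestamp(sourceDir) > artifact.lastModified();
  }

  // Walk the tree and return the most recent modification time found.
  private static long newestTimestamp(File f) {
    long newest = f.lastModified();
    File[] children = f.listFiles();
    if (children != null) {
      for (File c : children) {
        newest = Math.max(newest, newestTimestamp(c));
      }
    }
    return newest;
  }
}
{code}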



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138197#comment-15138197
 ] 

Hadoop QA commented on HADOOP-12782:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 37s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 3s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 24s 
{color} | {color:red} root-jdk1.8.0_72 with JDK v1.8.0_72 generated 2 new + 738 
unchanged - 2 fixed = 740 total (was 740) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 23s 
{color} | {color:red} root-jdk1.7.0_95 with JDK v1.7.0_95 generated 2 new + 733 
unchanged - 2 fixed = 735 total (was 735) {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 59s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-common-project/hadoop-common: patch generated 4 
new + 34 unchanged - 1 fixed = 38 total (was 35) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hadoop-common-project/hadoop-common generated 1 new + 0 
unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 30s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 43s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 59s {color} 
| 

[jira] [Commented] (HADOOP-12780) During atomic rename handle crash when one directory has been renamed but not file under it.

2016-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12780?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15138416#comment-15138416
 ] 

Hadoop QA commented on HADOOP-12780:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 7s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 45s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12786995/HADOOP-12780.001.patch
 |
| JIRA Issue | HADOOP-12780 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 065bbaccf7e7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / acac729 |
| Default Java | 1.7.0_91 |
| Multi-JDK versions |  

[jira] [Updated] (HADOOP-12781) Enable fencing for logjam-protected ssh servers

2016-02-08 Thread Olaf Flebbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olaf Flebbe updated HADOOP-12781:
-
Attachment: HADOOP-12781.1.diff

> Enable fencing for logjam-protected ssh servers
> ---
>
> Key: HADOOP-12781
> URL: https://issues.apache.org/jira/browse/HADOOP-12781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.7.2
> Environment: If a site uses logjam-protected ssh servers, no common 
> ciphers can be found and fencing breaks because the fencing process cannot be 
> initiated by zkfc.
>Reporter: Olaf Flebbe
>Assignee: Olaf Flebbe
> Attachments: HADOOP-12781.1.diff, HADOOP-12781.1.patch
>
>
> Version 0.1.53 of jsch incorporates changes to add ciphers for logjam 
> protection. See http://www.jcraft.com/jsch/ChangeLog.
> Since there are no developer-visible changes, updating the pom is sufficient.
> Double-checked in my environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-02-08 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev reopened HADOOP-12749:


Doh! Reopening the ticket because the new files didn't get added to the 
repository in the commit. My apologies, [~sidharta-s].

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12781) Enable fencing for logjam-protected ssh servers

2016-02-08 Thread Olaf Flebbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olaf Flebbe updated HADOOP-12781:
-
Attachment: HADOOP-12781.1.patch

> Enable fencing for logjam-protected ssh servers
> ---
>
> Key: HADOOP-12781
> URL: https://issues.apache.org/jira/browse/HADOOP-12781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.7.2
> Environment: If a site uses logjam-protected ssh servers, no common 
> ciphers can be found and fencing breaks because the fencing process cannot be 
> initiated by zkfc.
>Reporter: Olaf Flebbe
>Assignee: Olaf Flebbe
> Attachments: HADOOP-12781.1.patch
>
>
> Version 0.1.53 of jsch incorporates changes to add ciphers for logjam 
> protection. See http://www.jcraft.com/jsch/ChangeLog.
> Since there are no developer-visible changes, updating the pom is sufficient.
> Double-checked in my environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12781) Enable fencing for logjam-protected ssh servers

2016-02-08 Thread Olaf Flebbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olaf Flebbe updated HADOOP-12781:
-
Attachment: (was: HADOOP-12781.1.diff)

> Enable fencing for logjam-protected ssh servers
> ---
>
> Key: HADOOP-12781
> URL: https://issues.apache.org/jira/browse/HADOOP-12781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.7.2
> Environment: If a site uses logjam-protected ssh servers, no common 
> ciphers can be found and fencing breaks because the fencing process cannot be 
> initiated by zkfc.
>Reporter: Olaf Flebbe
>Assignee: Olaf Flebbe
> Attachments: HADOOP-12781.1.patch
>
>
> Version 0.1.53 of jsch incorporates changes to add ciphers for logjam 
> protection. See http://www.jcraft.com/jsch/ChangeLog.
> Since there are no developer-visible changes, updating the pom is sufficient.
> Double-checked in my environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12781) Enable fencing for logjam-protected ssh servers

2016-02-08 Thread Olaf Flebbe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Olaf Flebbe updated HADOOP-12781:
-
Release Note: Simple patch, please review
  Status: Patch Available  (was: Open)

> Enable fencing for logjam-protected ssh servers
> ---
>
> Key: HADOOP-12781
> URL: https://issues.apache.org/jira/browse/HADOOP-12781
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auto-failover
>Affects Versions: 2.7.2
> Environment: If a site uses logjam-protected ssh servers, no common 
> ciphers can be found and fencing breaks because the fencing process cannot be 
> initiated by zkfc.
>Reporter: Olaf Flebbe
>Assignee: Olaf Flebbe
> Attachments: HADOOP-12781.1.patch
>
>
> Version 0.1.53 of jsch incorporates changes to add ciphers for logjam 
> protection. See http://www.jcraft.com/jsch/ChangeLog.
> Since there are no developer-visible changes, updating the pom is sufficient.
> Double-checked in my environment.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12781) Enable fencing for logjam-protected ssh servers

2016-02-08 Thread Olaf Flebbe (JIRA)
Olaf Flebbe created HADOOP-12781:


 Summary: Enable fencing for logjam-protected ssh servers
 Key: HADOOP-12781
 URL: https://issues.apache.org/jira/browse/HADOOP-12781
 Project: Hadoop Common
  Issue Type: Bug
  Components: auto-failover
Affects Versions: 2.7.2
 Environment: If a site uses logjam-protected ssh servers, no common 
ciphers can be found and fencing breaks because the fencing process cannot be 
initiated by zkfc.
Reporter: Olaf Flebbe
Assignee: Olaf Flebbe


Version 0.1.53 of jsch incorporates changes to add ciphers for logjam 
protection. See http://www.jcraft.com/jsch/ChangeLog.

Since there are no developer-visible changes, updating the pom is sufficient.

Double-checked in my environment.
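For context, the fix amounts to a dependency version bump along these lines (a 
sketch; the exact pom and the placement of the jsch entry in hadoop-project are 
assumed, not taken from the attached patch):

{code}
<!-- Bump jsch to pick up the logjam-related cipher changes. -->
<dependency>
  <groupId>com.jcraft</groupId>
  <artifactId>jsch</artifactId>
  <version>0.1.53</version>
</dependency>
{code}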





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12781) Enable fencing for logjam-protected ssh servers

2016-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137043#comment-15137043
 ] 

Hadoop QA commented on HADOOP-12781:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 8s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 5s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 6s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 5s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.8.0_72. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 7s 
{color} | {color:green} hadoop-project in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 56s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12786820/HADOOP-12781.1.patch |
| JIRA Issue | HADOOP-12781 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 3f29e8c6838c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / d37eb82 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  

[jira] [Commented] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-02-08 Thread Varun Vasudev (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137026#comment-15137026
 ] 

Varun Vasudev commented on HADOOP-12749:


Added missing files.

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12769) Hadoop maven plugin for msbuild compilations

2016-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15136989#comment-15136989
 ] 

Hadoop QA commented on HADOOP-12769:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
1s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 28s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 39s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped branch modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
6s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 24s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 6s 
{color} | {color:red} root: patch generated 3 new + 4 unchanged - 0 fixed = 7 
total (was 4) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 37s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s 
{color} | {color:blue} Skipped patch modules with no Java source: 
hadoop-hdfs-project/hadoop-hdfs-native-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s 
{color} | {color:green} hadoop-maven-plugins in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | 

[jira] [Resolved] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-02-08 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev resolved HADOOP-12749.

Resolution: Fixed

> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12749) Create a threadpoolexecutor that overrides afterExecute to log uncaught exceptions/errors

2016-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137038#comment-15137038
 ] 

Hudson commented on HADOOP-12749:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9260 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9260/])
Revert "HADOOP-12749. Create a threadpoolexecutor that overrides (vvasudev: rev 
af218101e50de0260ab74e5d2f96227a0541e121)
* hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-12749. Create a threadpoolexecutor that overrides afterExecute to 
(vvasudev: rev d37eb828ffa09d55936964f555ea351b946d286e)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/HadoopScheduledThreadPoolExecutor.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/HadoopThreadPoolExecutor.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/package-info.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/ExecutorHelper.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/concurrent/HadoopExecutors.java
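
For readers following along, the classes above implement the standard 
afterExecute pattern from the ThreadPoolExecutor javadoc. A minimal sketch of 
that pattern (the class name and logging are illustrative, not the committed 
Hadoop code):

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class LoggingThreadPoolExecutor extends ThreadPoolExecutor {
  public LoggingThreadPoolExecutor(int core, int max, long keepAlive,
      TimeUnit unit, BlockingQueue<Runnable> queue) {
    super(core, max, keepAlive, unit, queue);
  }

  @Override
  protected void afterExecute(Runnable r, Throwable t) {
    super.afterExecute(r, t);
    // Tasks submitted via submit() wrap exceptions in the returned Future,
    // so t is null for them; unwrap the cause here.
    if (t == null && r instanceof Future<?> && ((Future<?>) r).isDone()) {
      try {
        ((Future<?>) r).get();
      } catch (ExecutionException e) {
        t = e.getCause();
      } catch (CancellationException e) {
        // A cancelled task is not an error.
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
    if (t != null) {
      // The real classes would use a proper logger here.
      System.err.println("Uncaught exception in task: " + t);
    }
  }
}
{code}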


> Create a threadpoolexecutor that overrides afterExecute to log uncaught 
> exceptions/errors
> -
>
> Key: HADOOP-12749
> URL: https://issues.apache.org/jira/browse/HADOOP-12749
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Sidharta Seethana
>Assignee: Sidharta Seethana
> Fix For: 2.9.0
>
> Attachments: HADOOP-12749.001.patch, HADOOP-12749.002.patch, 
> HADOOP-12749.003.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12769) Hadoop maven plugin for msbuild compilations

2016-02-08 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-12769:
---
Attachment: HADOOP-12769-02.patch

Attaching the patch.

1. Made 'cmake-compile' support Windows.
2. Used 'cmake-compile' and 'msbuild-compile' in hadoop-hdfs-native-client.
3. Fixed checkstyle warnings and whitespace.
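
For reference, a hypothetical usage sketch of the proposed goals (the goal 
names come from the list above; the execution wiring and any parameters are 
assumptions, not taken from the patch):

{code}
<plugin>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-maven-plugins</artifactId>
  <executions>
    <execution>
      <id>msbuild-compile</id>
      <phase>compile</phase>
      <goals>
        <goal>msbuild-compile</goal>
      </goals>
    </execution>
  </executions>
</plugin>
{code}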

> Hadoop maven plugin for msbuild compilations
> 
>
> Key: HADOOP-12769
> URL: https://issues.apache.org/jira/browse/HADOOP-12769
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-12769-01.patch, HADOOP-12769-02.patch
>
>
> Currently, all Windows native library generation using msbuild happens for 
> every invocation of 'mvn install'.
> The idea is to make this a plugin and make the build incremental,
> i.e. rebuild only when any of the sources have changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-02-08 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12782:


 Summary: Faster LDAP group name resolution with ActiveDirectory
 Key: HADOOP-12782
 URL: https://issues.apache.org/jira/browse/HADOOP-12782
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


LDAP group name resolution works well under typical scenarios. However, we have 
seen cases where a user is mapped to many groups (in an extreme case, more than 
100 groups). The current implementation makes group resolution from 
ActiveDirectory very slow in this case.

The current LDAP group resolution implementation sends two queries to an 
ActiveDirectory server. The first query returns a user object, which contains 
the DN (distinguished name). The second query looks for groups where the user 
DN is a member. If a user is mapped to many groups, the second query returns 
all group objects associated with the user and is thus very slow.

After studying a user object in ActiveDirectory, I found that it actually 
contains a "memberOf" field, which holds the DNs of all group objects the user 
belongs to. Assuming that an organization has no recursive group relations 
(that is, no case where a user A is a member of group G1 and group G1 is in 
turn a member of group G2), we can use this property to avoid the second, 
potentially very slow, query.

I propose that we add a configuration option to enable this feature only for 
users who want to reduce group resolution time and who do not have recursive 
groups, so that existing behavior is not broken.
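
For illustration, the single-query variant could look roughly like the 
following (a sketch using plain JNDI; the filter and attribute names are 
typical ActiveDirectory defaults, not code from a patch):

{code}
import java.util.ArrayList;
import java.util.List;
import javax.naming.NamingEnumeration;
import javax.naming.directory.Attribute;
import javax.naming.directory.DirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

// Sketch: fetch the user entry once and read its memberOf attribute,
// instead of issuing a second (member=userDN) query against all groups.
List<String> getGroupDns(DirContext ctx, String baseDn, String user)
    throws Exception {
  SearchControls controls = new SearchControls();
  controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
  controls.setReturningAttributes(new String[] {"memberOf"});
  NamingEnumeration<SearchResult> results = ctx.search(baseDn,
      "(&(objectClass=user)(sAMAccountName={0}))", new Object[] {user},
      controls);
  List<String> groups = new ArrayList<>();
  if (results.hasMore()) {
    Attribute memberOf = results.next().getAttributes().get("memberOf");
    if (memberOf != null) {
      for (int i = 0; i < memberOf.size(); i++) {
        // Each value is a group DN; a real implementation would map the DN
        // to a group name (e.g. its CN) via the configured name attribute.
        groups.add((String) memberOf.get(i));
      }
    }
  }
  return groups;
}
{code}

Note the caveat from the description: with nested (recursive) groups, memberOf 
on the user lists only direct memberships, which is exactly why this would be 
opt-in.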



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12775) RollingFileSystemSink doesn't work on secure clusters

2016-02-08 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137331#comment-15137331
 ] 

Daniel Templeton commented on HADOOP-12775:
---

How about this for the tests: I'll take HDFS-9637 back to just covering 
HADOOP-12702 and HADOOP-12759, and then I'll move this JIRA over to HDFS and 
add the tests.  Sound good?

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HADOOP-12775
> URL: https://issues.apache.org/jira/browse/HADOOP-12775
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12775) RollingFileSystemSink doesn't work on secure clusters

2016-02-08 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137327#comment-15137327
 ] 

Daniel Templeton commented on HADOOP-12775:
---

Thanks, [~andrew.wang].  The metrics system takes a class name as the 
source/sink to start.  It's then instantiated in complete isolation.  That 
works fine in reality, but it's ugly for testing.  The alternative to the 
static {{suppliedConf}} is to have the test write out a configuration file and 
then convince the metrics system to read it.  That's also pretty ugly.

On the {{suppliedFilesystem}} variable, I still don't exactly understand why I 
need it.  When using a secure mini-cluster, if I try to use 
{{FileSystem.get()}}, all operations fail with a checksum error.  Know anything 
about that?

I have thought about making the interval configurable.  And now that you've 
asked the question, I guess I gotta. :)  Would a number of minutes be a 
reasonable way to configure the rollover interval?
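
If it helps the discussion, a minimal sketch of such a knob (the property name 
and the minutes-based default are made up for illustration; the 
init(SubsetConfiguration) entry point is the standard metrics2 sink hook):

{code}
import java.util.concurrent.TimeUnit;
import org.apache.commons.configuration.SubsetConfiguration;

// Sketch only; "roll-interval-minutes" is a hypothetical property name,
// and the hourly default is assumed.
private static final String ROLL_INTERVAL_KEY = "roll-interval-minutes";
private static final long DEFAULT_ROLL_INTERVAL_MINUTES = 60;

private long rollIntervalMillis;

@Override
public void init(SubsetConfiguration conf) {
  rollIntervalMillis = TimeUnit.MINUTES.toMillis(
      conf.getLong(ROLL_INTERVAL_KEY, DEFAULT_ROLL_INTERVAL_MINUTES));
}
{code}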

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HADOOP-12775
> URL: https://issues.apache.org/jira/browse/HADOOP-12775
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12775) RollingFileSystemSink doesn't work on secure clusters

2016-02-08 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137451#comment-15137451
 ] 

Daniel Templeton commented on HADOOP-12775:
---

Mind if I make the configurable interval a separate JIRA?

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HADOOP-12775
> URL: https://issues.apache.org/jira/browse/HADOOP-12775
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12710) Remove dependency on commons-httpclient for TestHttpServerLogs

2016-02-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137442#comment-15137442
 ] 

Haohui Mai commented on HADOOP-12710:
-

If it contains security vulnerabilities, we should definitely revisit the 
decision in HADOOP-12552.

> Remove dependency on commons-httpclient for TestHttpServerLogs
> --
>
> Key: HADOOP-12710
> URL: https://issues.apache.org/jira/browse/HADOOP-12710
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12710.001.patch
>
>
> Commons-httpclient has long been EOL. Critically, it has several security 
> vulnerabilities: CVE-2012-5783 
> http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-5783.
> I saw a recent commit that depends on commons-httpclient for 
> TestHttpServerLogs (HADOOP-12625). This JIRA intends to replace the dependency 
> with httpclient APIs.
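
For reference, the replacement is usually mechanical. A commons-httpclient GET 
rewritten against the httpclient 4.x API looks roughly like this (a sketch; 
'url' stands in for whatever the test fetches):

{code}
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

// Sketch: httpclient 4.x equivalent of the old GetMethod/executeMethod pair.
try (CloseableHttpClient client = HttpClients.createDefault();
     CloseableHttpResponse response = client.execute(new HttpGet(url))) {
  int status = response.getStatusLine().getStatusCode();
  String body = EntityUtils.toString(response.getEntity());
  // ... assert on status/body as the commons-httpclient code did ...
}
{code}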



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12710) Remove dependency on commons-httpclient for TestHttpServerLogs

2016-02-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137355#comment-15137355
 ] 

Hadoop QA commented on HADOOP-12710:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 59s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 58s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s 
{color} | {color:green} trunk passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed with JDK v1.8.0_72 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 53s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_72. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 33s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 74m 50s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_72 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12782346/HADOOP-12710.001.patch
 |
| JIRA Issue | HADOOP-12710 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit 

[jira] [Commented] (HADOOP-12752) Improve diagnostics/use of envvar/sysprop credential propagation

2016-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12752?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137359#comment-15137359
 ] 

Hudson commented on HADOOP-12752:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #9261 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9261/])
HADOOP-12752. Improve diagnostics/use of envvar/sysprop credential (cnauroth: 
rev cf3261570ae139c177225af165557038a9280a5d)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java


> Improve diagnostics/use of envvar/sysprop credential propagation
> 
>
> Key: HADOOP-12752
> URL: https://issues.apache.org/jira/browse/HADOOP-12752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12752-001.patch, HADOOP-12752-002.patch
>
>
> * Document the system property {{hadoop.token.files}}.
> * Document the env var {{HADOOP_TOKEN_FILE_LOCATION}}.
> * When UGI inits tokens off the system property or the env var, log this fact.
> * When trying to load a file referenced in the env var, (a) trim it and (b) 
> check that it exists, failing with a message referring to the env var as 
> well as the file.
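
The last bullet describes roughly the following load-time behavior (a sketch 
of the intent, not the committed UGI code):

{code}
// Sketch: trim the env-var value, verify the file exists, and fail with a
// message that names both the env var and the file.
String fileLocation = System.getenv("HADOOP_TOKEN_FILE_LOCATION");
if (fileLocation != null) {
  fileLocation = fileLocation.trim();
  java.io.File source = new java.io.File(fileLocation);
  if (!source.isFile()) {
    throw new java.io.IOException("Token file pointed to by env var"
        + " HADOOP_TOKEN_FILE_LOCATION (" + fileLocation + ") does not exist");
  }
  // ... load credentials from the file into the UGI ...
}
{code}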



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11613) Remove httpclient dependency from hadoop-azure

2016-02-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137386#comment-15137386
 ] 

Brahma Reddy Battula commented on HADOOP-11613:
---

[~ajisakaa], thanks for the patch. +1 (non-binding) on the 05 patch.

> Remove httpclient dependency from hadoop-azure
> --
>
> Key: HADOOP-11613
> URL: https://issues.apache.org/jira/browse/HADOOP-11613
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Akira AJISAKA
>Assignee: Brahma Reddy Battula
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11613-001.patch, HADOOP-11613-002.patch, 
> HADOOP-11613-003.patch, HADOOP-11613.004.patch, HADOOP-11613.05.patch, 
> HADOOP-11613.patch
>
>
> Remove httpclient dependency from MockStorageInterface.java.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12752) Improve diagnostics/use of envvar/sysprop credential propagation

2016-02-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12752:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for patch v002.  The lack of tests is fine since this is meant to improve 
diagnostics and logging under failure.  I have committed this to trunk, 
branch-2 and branch-2.8.  [~ste...@apache.org], thank you for the patch.

> Improve diagnostics/use of envvar/sysprop credential propagation
> 
>
> Key: HADOOP-12752
> URL: https://issues.apache.org/jira/browse/HADOOP-12752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 2.7.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12752-001.patch, HADOOP-12752-002.patch
>
>
> * Document the system property {{hadoop.token.files}}.
> * Document the env var {{HADOOP_TOKEN_FILE_LOCATION}}.
> * When UGI inits tokens off the system property or the env var, log this fact.
> * When trying to load a file referenced in the env var, (a) trim it and (b) 
> check that it exists, failing with a message referring to the env var as 
> well as the file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12769) Hadoop maven plugin for msbuild compilations

2016-02-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137363#comment-15137363
 ] 

Chris Nauroth commented on HADOOP-12769:


Hi [~vinayrpet].  This is cool stuff!  Thank you for posting the patch.

However, I wonder if we should instead focus effort on conversion of the 
hadoop-common Windows native build to CMake.  That effort is tracked in 
HADOOP-11080.  We have already established that CMake is feasible for the 
Windows builds, because I set up the libhdfs build to do it in HDFS-573.  I 
also just tested {{mvn compile}} on hadoop-hdfs-native-client on Windows, and I 
can see that incremental rebuilds are working correctly for it.

> Hadoop maven plugin for msbuild compilations
> 
>
> Key: HADOOP-12769
> URL: https://issues.apache.org/jira/browse/HADOOP-12769
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-12769-01.patch, HADOOP-12769-02.patch
>
>
> Currently, all Windows native library generation using msbuild happens for 
> every invocation of 'mvn install'.
> The idea is to make this a plugin and make the build incremental,
> i.e. rebuild only when any of the sources have changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-02-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12699:
---
Attachment: HADOOP-12699.10.patch

Patch 10 addresses all the comments above and additionally fixes a couple of 
typos in the existing paragraph.
I followed BUILDING.txt and verified that the generated html looks okay.

[~andrew.wang], could you please give another review? I did my best to explain 
things based on my understanding, but please feel free to comment further with 
your deeper knowledge. Thanks!
I'll attach the generated html for easier reviewing.

> TestKMS#testKMSProvider intermittently fails during 'test rollover draining'
> 
>
> Key: HADOOP-12699
> URL: https://issues.apache.org/jira/browse/HADOOP-12699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12699.01.patch, HADOOP-12699.02.patch, 
> HADOOP-12699.03.patch, HADOOP-12699.04.patch, HADOOP-12699.06.patch, 
> HADOOP-12699.07.patch, HADOOP-12699.08.patch, HADOOP-12699.09.patch, 
> HADOOP-12699.10.patch, HADOOP-12699.repro.2, HADOOP-12699.repro.patch
>
>
> I've seen several failures of testKMSProvider, all of which failed in the 
> following snippet:
> {code}
> // test rollover draining
> KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
> createKeyProviderCryptoExtension(kp);
> .
> EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
> kpce.rollNewVersion("k6");
> EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
> Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
> ekv2.getEncryptionKeyVersionName());
> {code}
> with error message
> {quote}Values should be different. Actual: k6@0{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12699) TestKMS#testKMSProvider intermittently fails during 'test rollover draining'

2016-02-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-12699:
---
Attachment: generated.10.html

> TestKMS#testKMSProvider intermittently fails during 'test rollover draining'
> 
>
> Key: HADOOP-12699
> URL: https://issues.apache.org/jira/browse/HADOOP-12699
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-12699.01.patch, HADOOP-12699.02.patch, 
> HADOOP-12699.03.patch, HADOOP-12699.04.patch, HADOOP-12699.06.patch, 
> HADOOP-12699.07.patch, HADOOP-12699.08.patch, HADOOP-12699.09.patch, 
> HADOOP-12699.10.patch, HADOOP-12699.repro.2, HADOOP-12699.repro.patch, 
> generated.10.html
>
>
> I've seen several failures of testKMSProvider, all of which failed in the 
> following snippet:
> {code}
> // test rollover draining
> KeyProviderCryptoExtension kpce = KeyProviderCryptoExtension.
> createKeyProviderCryptoExtension(kp);
> .
> EncryptedKeyVersion ekv1 = kpce.generateEncryptedKey("k6");
> kpce.rollNewVersion("k6");
> EncryptedKeyVersion ekv2 = kpce.generateEncryptedKey("k6");
> Assert.assertNotEquals(ekv1.getEncryptionKeyVersionName(),
> ekv2.getEncryptionKeyVersionName());
> {code}
> with error message
> {quote}Values should be different. Actual: k6@0{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12775) RollingFileSystemSink doesn't work on secure clusters

2016-02-08 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137679#comment-15137679
 ] 

Robert Kanter commented on HADOOP-12775:


I'm not sure about the checksum error itself, but how did you set up the secure 
mini-cluster?  Did you use MiniKDC?

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HADOOP-12775
> URL: https://issues.apache.org/jira/browse/HADOOP-12775
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2016-02-08 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137486#comment-15137486
 ] 

Greg Senia commented on HADOOP-9969:


[~crystal_gaoyu] and [~xinwei], I noticed it was stated that there are some 
other side effects. Please advise.

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta, 2.6.0, 2.6.1, 2.8.0, 2.7.2, 2.6.3
> Environment: IBM JDK7
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, RPC client and Sasl client have been changed to 
> respect the auth method advertised from server, instead of blindly attempting 
> the configured one at client side. However, when TGT has expired, an 
> exception will be thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType), and at this time the authMethod still holds the initial value 
> which is SIMPLE and never has a chance to be updated with the expected one 
> requested by the server, so Kerberos relogin will not happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12769) Hadoop maven plugin for msbuild compilations

2016-02-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137670#comment-15137670
 ] 

Colin Patrick McCabe commented on HADOOP-12769:
---

+1 for converting the Windows build to CMake.  One of the great advantages of 
CMake is that it is cross-platform.

> Hadoop maven plugin for msbuild compilations
> 
>
> Key: HADOOP-12769
> URL: https://issues.apache.org/jira/browse/HADOOP-12769
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: native
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
> Attachments: HADOOP-12769-01.patch, HADOOP-12769-02.patch
>
>
> Currently, all Windows native library generation using msbuild happens for 
> every invocation of 'mvn install'.
> The idea is to make this a plugin and make the build incremental,
> i.e. rebuild only when any of the sources have changed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12746) ReconfigurableBase should update the cached configuration

2016-02-08 Thread Xiaobing Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137705#comment-15137705
 ] 

Xiaobing Zhou commented on HADOOP-12746:


Thanks [~arpitagarwal] for the patch! Overall, it's good.
Could you explain why the behavior of 
ReconfigurableBase#ReconfigurationThread#run
{noformat}
String effectiveValue =
    parent.reconfigurePropertyImpl(change.prop, change.newVal);
if (change.newVal != null) {
  oldConf.set(change.prop, effectiveValue);
}
{noformat}

is inconsistent with that of ReconfigurableBase#reconfigureProperty?
{noformat}
if (newVal != null) {
  getConf().set(property, effectiveValue);
} else {
  getConf().unset(property);
}
{noformat}

BTW, many checkstyle warnings should be fixed. Thanks!
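
In other words, the symmetric handling being asked about would look like this 
(a sketch of the suggestion, not code from the patch):

{code}
// Mirror reconfigureProperty's null handling in ReconfigurationThread,
// so the cached configuration is updated (or unset) consistently.
String effectiveValue =
    parent.reconfigurePropertyImpl(change.prop, change.newVal);
if (change.newVal != null) {
  oldConf.set(change.prop, effectiveValue);
} else {
  oldConf.unset(change.prop);
}
{code}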

> ReconfigurableBase should update the cached configuration
> -
>
> Key: HADOOP-12746
> URL: https://issues.apache.org/jira/browse/HADOOP-12746
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HADOOP-12746.01.patch
>
>
> {{ReconfigurableBase}} does not always update the cached configuration after 
> a property is reconfigured.
> The older {{#reconfigureProperty}} does so however {{ReconfigurationThread}} 
> does not.
> See discussion on HDFS-7035 for more background.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2016-02-08 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-9969:
---
Affects Version/s: 2.7.2

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta, 2.7.2
> Environment: IBM JDK7
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, RPC client and Sasl client have been changed to 
> respect the auth method advertised from server, instead of blindly attempting 
> the configured one at client side. However, when TGT has expired, an 
> exception will be thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType), and at this time the authMethod still holds the initial value 
> which is SIMPLE and never has a chance to be updated with the expected one 
> requested by the server, so Kerberos relogin will not happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2016-02-08 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-9969:
---
Affects Version/s: 2.8.0
   2.6.0
   2.6.1
   2.6.3

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta, 2.6.0, 2.6.1, 2.8.0, 2.7.2, 2.6.3
> Environment: IBM JDK7
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, RPC client and Sasl client have been changed to 
> respect the auth method advertised from server, instead of blindly attempting 
> the configured one at client side. However, when TGT has expired, an 
> exception will be thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType), and at this time the authMethod still holds the initial value 
> which is SIMPLE and never has a chance to be updated with the expected one 
> requested by the server, so Kerberos relogin will not happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2016-02-08 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-9969:
---
Affects Version/s: 2.5.0
   2.5.1
   2.5.2
   2.7.1
   2.6.2

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta, 2.5.0, 2.5.2, 2.6.0, 2.6.1, 2.8.0, 2.7.1, 
> 2.6.2, 2.6.3
> Environment: IBM JDK7
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, RPC client and Sasl client have been changed to 
> respect the auth method advertised from server, instead of blindly attempting 
> the configured one at client side. However, when TGT has expired, an 
> exception will be thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType), and at this time the authMethod still holds the initial value 
> which is SIMPLE and never has a chance to be updated with the expected one 
> requested by the server, so Kerberos relogin will not happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12773) HBase classes fail to load with client/job classloader enabled

2016-02-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137521#comment-15137521
 ] 

Kihwal Lee commented on HADOOP-12773:
-

+1 makes sense.

> HBase classes fail to load with client/job classloader enabled
> --
>
> Key: HADOOP-12773
> URL: https://issues.apache.org/jira/browse/HADOOP-12773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.3
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12773.01.patch
>
>
> Currently if a user uses HBase and enables the client/job classloader, the 
> job fails to load HBase classes. For example,
> {noformat}
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/hbase/client/HBaseAdmin;
>   at java.lang.Class.getDeclaredFields0(Native Method)
>   at java.lang.Class.privateGetDeclaredFields(Class.java:2509)
>   at java.lang.Class.getDeclaredField(Class.java:1959)
>   at 
> java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1703)
>   at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
>   at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:484)
>   at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:472)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:472)
>   at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:369)
> {noformat}
> This is because the HBase classes (org.apache.hadoop.hbase.\*) meet the 
> system-classes criteria and are therefore supposed to be loaded strictly from 
> the base classloader. But Hadoop does not provide HBase as a dependency.
> We should exclude the HBase classes from the system classes until/unless 
> HBase is provided by a future version of Hadoop.
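
Until then, a job-level workaround should be possible, since the system-classes 
list supports leading-"-" exclusions (a sketch; verify the property name and 
the full default value against your Hadoop version):

{code}
<property>
  <name>mapreduce.job.classloader.system.classes</name>
  <!-- "-" means "not a system class"; the exclusion must precede the
       broader org.apache.hadoop. entry. The value shown is abbreviated. -->
  <value>-org.apache.hadoop.hbase.,java.,javax.,org.apache.hadoop.</value>
</property>
{code}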



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12775) RollingFileSystemSink doesn't work on secure clusters

2016-02-08 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12775?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137491#comment-15137491
 ] 

Andrew Wang commented on HADOOP-12775:
--

JIRA plan SGTM. We use milliseconds for most intervals, so recommend sticking 
with that. No idea about the filesystem, but if it's only when using a secure 
minicluster, it's probably some UGI issue. [~rkanter] might know more about 
such things.

> RollingFileSystemSink doesn't work on secure clusters
> -
>
> Key: HADOOP-12775
> URL: https://issues.apache.org/jira/browse/HADOOP-12775
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
>Priority: Critical
> Attachments: HADOOP-12775.001.patch, HADOOP-12775.002.patch, 
> HADOOP-12775.003.patch
>
>
> If HDFS has kerberos enabled, the sink cannot write its logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9969) TGT expiration doesn't trigger Kerberos relogin

2016-02-08 Thread Greg Senia (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9969?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Senia updated HADOOP-9969:
---
Affects Version/s: (was: 2.7.2)
   (was: 2.5.1)

> TGT expiration doesn't trigger Kerberos relogin
> ---
>
> Key: HADOOP-9969
> URL: https://issues.apache.org/jira/browse/HADOOP-9969
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.1.0-beta, 2.5.0, 2.5.2, 2.6.0, 2.6.1, 2.8.0, 2.7.1, 
> 2.6.2, 2.6.3
> Environment: IBM JDK7
>Reporter: Yu Gao
> Attachments: HADOOP-9969.patch, JobTracker.log
>
>
> In HADOOP-9698 & HADOOP-9850, RPC client and Sasl client have been changed to 
> respect the auth method advertised from server, instead of blindly attempting 
> the configured one at client side. However, when TGT has expired, an 
> exception will be thrown from SaslRpcClient#createSaslClient(SaslAuth 
> authType), and at this time the authMethod still holds the initial value 
> which is SIMPLE and never has a chance to be updated with the expected one 
> requested by server, so kerberos relogin will not happen.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12723) S3A: Add ability to plug in any AWSCredentialsProvider

2016-02-08 Thread Steven Wong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137592#comment-15137592
 ] 

Steven Wong commented on HADOOP-12723:
--

Is there anything else needed before a committer will review the patch? Please 
let me know if there is. I'm assuming folks are just busy. Thanks.

> S3A: Add ability to plug in any AWSCredentialsProvider
> --
>
> Key: HADOOP-12723
> URL: https://issues.apache.org/jira/browse/HADOOP-12723
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 2.7.1
>Reporter: Steven Wong
>Assignee: Steven Wong
> Attachments: HADOOP-12723.0.patch
>
>
> Although S3A currently has built-in support for 
> {{org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider}}, 
> {{com.amazonaws.auth.InstanceProfileCredentialsProvider}}, and 
> {{org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider}}, it does not 
> support any other credentials provider that implements the 
> {{com.amazonaws.auth.AWSCredentialsProvider}} interface. Supporting the 
> ability to plug in any {{com.amazonaws.auth.AWSCredentialsProvider}} instance 
> will expand the options for S3 credentials, such as:
> * temporary credentials from STS, e.g. via 
> {{com.amazonaws.auth.STSSessionCredentialsProvider}}
> * IAM role-based credentials, e.g. via 
> {{com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider}}
> * a custom credentials provider that satisfies one's own needs, e.g. 
> bucket-specific credentials, user-specific credentials, etc.
> To support this, we can add a configuration for the fully qualified class 
> name of a credentials provider, to be loaded by {{S3AFileSystem.initialize}} 
> and added to its credentials provider chain.
> The configured credentials provider should implement 
> {{com.amazonaws.auth.AWSCredentialsProvider}} and have a constructor that 
> accepts {{(URI uri, Configuration conf)}}.
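
For example, a provider matching that contract might look like the following 
(a sketch against the AWS SDK v1 interface; the class name and config keys are 
made up):

{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;

// Sketch: a pluggable provider with the proposed (URI, Configuration)
// constructor; where the keys come from is up to the implementation.
public class MyCredentialsProvider implements AWSCredentialsProvider {
  private final Configuration conf;

  public MyCredentialsProvider(URI uri, Configuration conf) {
    this.conf = conf;
  }

  @Override
  public AWSCredentials getCredentials() {
    // Illustrative only: read custom keys from the Hadoop configuration.
    return new BasicAWSCredentials(
        conf.get("my.custom.access.key"), conf.get("my.custom.secret.key"));
  }

  @Override
  public void refresh() {
    // Nothing cached in this sketch.
  }
}
{code}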



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12773) HBase classes fail to load with client/job classloader enabled

2016-02-08 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12773:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.6.5
   2.9.0
   2.7.3
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed the patch. Thanks for your +1, [~kihwal]!

> HBase classes fail to load with client/job classloader enabled
> --
>
> Key: HADOOP-12773
> URL: https://issues.apache.org/jira/browse/HADOOP-12773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.3
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.8.0, 2.7.3, 2.9.0, 2.6.5
>
> Attachments: HADOOP-12773.01.patch
>
>
> Currently if a user uses HBase and enables the client/job classloader, the 
> job fails to load HBase classes. For example,
> {noformat}
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/hbase/client/HBaseAdmin;
>   at java.lang.Class.getDeclaredFields0(Native Method)
>   at java.lang.Class.privateGetDeclaredFields(Class.java:2509)
>   at java.lang.Class.getDeclaredField(Class.java:1959)
>   at 
> java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1703)
>   at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
>   at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:484)
>   at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:472)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:472)
>   at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:369)
> {noformat}
> This is because the HBase classes (org.apache.hadoop.hbase.\*) match the system 
> classes criteria, and system classes are loaded strictly from the base 
> classloader; but Hadoop does not provide HBase as a dependency.
> We should exclude the HBase classes from the system classes until/unless 
> HBase is provided by a future version of Hadoop.
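For context, the exclusion boils down to a one-line change to the default 
system classes list; a hedged sketch of what it could look like (illustrative, 
not the verbatim patch):

{code}
# Sketch of org.apache.hadoop.application-classloader.properties: a
# leading "-" marks an exception, so org.apache.hadoop.hbase.* is no
# longer treated as a system class even though org.apache.hadoop. is.
system.classes.default=java.,\
  javax.,\
  org.w3c.dom.,\
  -org.apache.hadoop.hbase.,\
  org.apache.hadoop.
{code}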



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-08 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137937#comment-15137937
 ] 

Chris Nauroth commented on HADOOP-12666:


[~vishwajeet.dusane] et al, thank you for the contribution.  I'd like to start 
by discussing a few things that we would commonly ask of any contribution of a 
new Hadoop-compatible file system:
# A new feature contribution requires end user documentation.  For inspiration, 
you can take a look at the existing documentation for 
[S3|http://hadoop.apache.org/docs/r2.7.2/hadoop-aws/tools/hadoop-aws/index.html],
 [Azure|http://hadoop.apache.org/docs/r2.7.2/hadoop-azure/index.html] and 
[Swift|http://hadoop.apache.org/docs/r2.7.2/hadoop-openstack/index.html].  The 
source for these pages is in the Hadoop code as Markdown, so you can look for 
*.md to find examples.  It's good to discuss how the semantics of this file 
system will be the same as the semantics of HDFS and what will differ/work 
differently/not work at all.  One particular point worth documenting is 
authentication, since there is a reliance on the WebHDFS OAuth feature here.
# Are there any tests that actually integrate with the back-end service, or is 
everything using a mock server right now?  We've seen in the past that real 
integration testing is important.  The mock-based testing is valuable too, 
because Apache Jenkins won't have the capability to run against the live 
service.  A useful setup has been to have a common set of tests that support 
running in both "mock mode" and "live mode".  For an example of this, see the 
hadoop-azure test suites.
# We have a set of [file system contract 
tests|http://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/filesystem/testing.html]
 that exercise the expected semantics of file systems.  We ask that new file 
system implementations implement this contract test suite.  This is also a 
great way to see how closely a file system matches the semantics of HDFS.  
hadoop-azure does not yet run these tests, but HADOOP-12535 has a proposed 
patch to start doing so.

I scanned through the code.  This is not a comprehensive review, but there are 
a few significant things that I've spotted so far:
# Some classes are using {{com.microsoft.azure.datalake}} as the root of the 
package name.  All package names must use the standard {{org.apache.hadoop}} 
prefix instead.
# Can you please add JavaDoc comments at the top of 
{{PrivateAzureDataLakeFileSystem}} and {{ADLFileSystem}} to describe the design 
intent behind splitting the implementation classes like this?  A potential 
source of confusion is that {{PrivateAzureDataLakeFileSystem}} is using the 
{{public}} Java access modifier, so the "private" in the class name is 
unrelated to Java language visibility semantics.
# {{CachedRefreshTokenBasedAccessTokenProvider}}: The constructor might end up 
creating multiple instances.  There is no locking around creation of the shared 
{{instance}}.
# {{PrivateDebugAzureDataLakeFileSystem}}/{{PrivateDebugAzureDataLake}}: It is 
unusual to control level of logging by swapping out the {{FileSystem}} 
implementation.  Users will expect that logging level is controlled the 
standard way through Log4J configuration.  Can we eliminate these classes and 
move appropriate logging at debug and trace level directly into 
{{PrivateAzureDataLakeFileSystem}}?
# {{ADLLogger}}: With consideration of the above, I'm not sure that this class 
adds any value over directly calling an SLF4J {{Logger}} instance from the 
relevant call sites.  What do you think of dropping this class and converting 
call sites to call SLF4J directly?
# {{ADLConfKeys}}: There is an established precedent in Hadoop configuration 
for using all lower-case letters in configuration keys, delimited by '.' for 
separation of words.  I recommend staying consistent with that pattern.
# You are treading new ground in choosing to cache {{FileStatus}} instances 
locally on the client side to achieve performance.  This seems kind of like the 
Linux concept of the dentry cache.  The tricky thing is cache coherency across 
multiple processes running in the cluster and consistency semantics.  I don't 
have a specific example of something that could break, but this seems like 
there could be a risk of something in the ecosystem breaking because of this.  
No other Hadoop-compatible file system works like this.  This also relates back 
to my earlier point about documentation of expected semantics.
# {{ADLConfKeys#LOG_VERSION}}: I'm not sure what this is meant to be.  Is the 
goal to pass the current build version number to the back end?  If so, then 
consider looking at {{org.apache.hadoop.util.VersionInfo}}, which is already 
tied into the Apache release process.  This version string doesn't need to be 
updated manually.
# {{TestAUploadBenchmark}}/{{TestADownloadBenchmark}}: From the 
comments, it sounds like the 

[jira] [Commented] (HADOOP-12773) HBase classes fail to load with client/job classloader enabled

2016-02-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12773?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137852#comment-15137852
 ] 

Hudson commented on HADOOP-12773:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9263 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9263/])
HADOOP-12773. HBase classes fail to load with client/job classloader (sjlee: 
rev 58acbf940a92ef8a761208a7a743175ee7b3377d)
* 
hadoop-common-project/hadoop-common/src/main/resources/org.apache.hadoop.application-classloader.properties
* hadoop-common-project/hadoop-common/CHANGES.txt


> HBase classes fail to load with client/job classloader enabled
> --
>
> Key: HADOOP-12773
> URL: https://issues.apache.org/jira/browse/HADOOP-12773
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Affects Versions: 2.6.3
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.8.0, 2.7.3, 2.9.0, 2.6.5
>
> Attachments: HADOOP-12773.01.patch
>
>
> Currently if a user uses HBase and enables the client/job classloader, the 
> job fails to load HBase classes. For example,
> {noformat}
> java.lang.NoClassDefFoundError: Lorg/apache/hadoop/hbase/client/HBaseAdmin;
>   at java.lang.Class.getDeclaredFields0(Native Method)
>   at java.lang.Class.privateGetDeclaredFields(Class.java:2509)
>   at java.lang.Class.getDeclaredField(Class.java:1959)
>   at 
> java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1703)
>   at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
>   at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:484)
>   at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:472)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:472)
>   at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:369)
> {noformat}
> This is because the HBase classes (org.apache.hadoop.hbase.\*) match the system 
> classes criteria, and system classes are loaded strictly from the base 
> classloader; but Hadoop does not provide HBase as a dependency.
> We should exclude the HBase classes from the system classes until/unless 
> HBase is provided by a future version of Hadoop.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-02-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12782:
-
Attachment: HADOOP-12782.001.patch

Rev01: implemented the fast LDAP group name lookup, the associated test case, 
and the associated documentation.

In this implementation, there are basically three cases: 
# general scenario: perform two LDAP queries per group lookup.
# if the server supports posix semantics: perform two LDAP queries, using the 
posix gid/uid to find the groups of the user.
# (new implementation) perform one LDAP query per group lookup, if fast lookup 
is enabled (the server must be Active Directory, there must be no recursive 
group membership, and the CN attribute must identify a group's name).

To enable this feature, set 
hadoop.security.group.mapping.ldap.search.filter.group=ldapFastLookup.
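Roughly, the fast path boils down to a single query that returns {{memberOf}} 
and derives group names from the group DNs on the client side. An illustrative 
JNDI fragment (not taken from the patch; {{ctx}}, {{baseDN}} and {{user}} are 
assumed to be in scope):

{code}
// Illustrative fragment: one query fetches the user's memberOf
// attribute; group names are parsed from the returned group DNs.
SearchControls controls = new SearchControls();
controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
controls.setReturningAttributes(new String[] {"memberOf"});

NamingEnumeration<SearchResult> results = ctx.search(baseDN,
    "(&(objectClass=user)(sAMAccountName={0}))", new Object[] {user},
    controls);

List<String> groups = new ArrayList<>();
if (results.hasMore()) {
  Attribute memberOf = results.next().getAttributes().get("memberOf");
  for (int i = 0; i < memberOf.size(); i++) {
    // e.g. "CN=hadoop-users,OU=Groups,DC=example,DC=com" -> "hadoop-users"
    LdapName dn = new LdapName((String) memberOf.get(i));
    groups.add(dn.getRdn(dn.size() - 1).getValue().toString());
  }
}
{code}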

I also updated the first two scenarios so that more verbose messages are 
logged when exceptions occur (supportability).

Finally, a new test file, TestLdapGroupsMappingWithFastLookup, is added to 
test the new feature. This test (as well as TestLdapGroupsMapping and 
TestLdapGroupsMappingWithPosixGroup) passed locally.

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12782.001.patch
>
>
> LDAP group name resolution works well in typical scenarios. However, we have 
> seen cases where a user is mapped to many groups (in an extreme case, a user 
> is mapped to more than 100 groups). The current implementation makes 
> resolving groups from ActiveDirectory very slow in this case.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user 
> DN is a member. If a user is mapped to many groups, the second query returns 
> all group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which holds the DNs of all group 
> objects the user belongs to. Assuming that an organization has no recursive 
> group relation (that is, no case where user A is a member of group G1, and 
> group G1 is in turn a member of group G2), we can use this property to avoid 
> the second query, which can potentially run very slow.
> I propose that we add a configuration to enable this feature only for users 
> who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-08 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137927#comment-15137927
 ] 

Aaron Fabbri commented on HADOOP-12666:
---

{code}
+  /**
+   * Constructor.
+   */
+  public CachedRefreshTokenBasedAccessTokenProvider() {
+    super();
+    if (instance == null) {
+      instance = new ConfRefreshTokenBasedAccessTokenProvider();
+    }
+  }
{code}

You can omit call to super() here.

Same thing in PrivateDebugAzureDataLakeFileSystem()
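While here: the unsynchronized {{if (instance == null)}} above is the 
shared-instance race flagged earlier in this thread; a minimal thread-safe 
sketch (illustrative only, not the patch):

{code}
// Sketch: guard the check-then-act on the shared 'instance' so two
// threads cannot both observe null and create separate providers.
public CachedRefreshTokenBasedAccessTokenProvider() {
  synchronized (CachedRefreshTokenBasedAccessTokenProvider.class) {
    if (instance == null) {
      instance = new ConfRefreshTokenBasedAccessTokenProvider();
    }
  }
}
{code}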

{code}
+package com.microsoft.azure.datalake.store;
+
+import org.apache.hadoop.hdfs.web.PrivateAzureDataLakeFileSystem;
+
+class AdlFileSystem extends PrivateAzureDataLakeFileSystem {
+
{code}
Why is {{PrivateAzureDataLakeFileSystem}} public?

More importantly, shouldn't you move all the code into 
org.apache.hadoop.fs.azure?  As is, it is spread between 
{{com.microsoft.azure}}, {{org.apache.hadoop.fs.azure}}, and 
{{org.apache.hadoop.hdfs.web}}.

{code}
+package org.apache.hadoop.hdfs.web;
+
+/**
+ * Constants.
+ */
+public final class ADLConfKeys {
+  public static final String
+  ADL_FEATURE_CONCURRENT_READ_AHEAD_MAX_CONCURRENT_CONN =
+  "ADL.Feature.Override.ReadAhead.MAX.Concurrent.Connection";
+  public static final int
+  ADL_FEATURE_CONCURRENT_READ_AHEAD_MAX_CONCURRENT_CONN_DEFAULT = 2;
+  public static final String ADL_EVENTS_TRACKING_SOURCE =
+  "adl.events.tracking.source";
+  public static final String ADL_EVENTS_TRACKING_CLUSTERNAME =
+  "adl.events.tracking.clustername";
+  public static final String ADL_TRACKING_JOB_ID = "adl.tracking.job.id";
{code}

Please be consistent with all lowercase config names, and document them in 
{{core-default.xml}}.

Need to run.. more comments later.



> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12666) Support Microsoft Azure Data Lake - as a file system in Hadoop

2016-02-08 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12666?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15137935#comment-15137935
 ] 

Lei (Eddy) Xu commented on HADOOP-12666:


Hey, [~vishwajeet.dusane] 

Thanks for working on this nice patch.

Have a few questions:

* {code:title=FileStatusCacheManager.java}
 * ACID properties are maintained in overloaded api in @see
 * PrivateAzureDataLakeFileSystem class.
{code}

* The comment quoted above claims ACID properties, but 
{{PrivateAzureDataLakeFileSystem}} does not call it within synchronized calls 
(e.g., {{PrivateAzureDataLakeFileSystem#create}}). Although {{syncMap}} is a 
{{synchronizedMap}}, {{putFileStatus}} performs multiple operations on 
{{syncMap}}, which cannot guarantee atomicity.

* It might be a better idea to provide atomicity in 
{{PrivateAzureDataLakeFileSystem}}. A couple of places have multiple cache 
calls within the same function (e.g., {{rename()}}).

* It might be a good idea to rename {{FileStatusCacheManager#getFileStatus, 
putFileStatus, removeFileStatus}} to {{get/put/remove}}, because the class name 
already clearly indicates the context.

* {{FileStatusCacheObject}} can only store an absolute expiration time. And its 
methods can be package-level methods.

* I saw a few places, e.g., {{PrivateAzureDataLakeFileSystem#rename/delete}}, 
that clear the cache if the param is a directory. Could you justify the reason 
behind this? Would it cause noticeable performance degradation? Or, as an 
alternative, would using LinkedList + TreeMap for FileStatusCacheManager help?

* One general question: is this FileStatusCacheManager in {{HdfsClient}}? If 
so, how do you make it consistent across clients on multiple nodes?

* Similar to above question, could you provide a reference architecture of how 
to run a cluster on Azure Data Lake?

* {code}
   if (b == null) {
  throw new NullPointerException();
} else if (off < 0 || len < 0 || len > b.length - off) {
  throw new IndexOutOfBoundsException();
} else if (len == 0) {
  return 0;
}
{code}

Can we use {{Preconditions}} here? It will be more descriptive. 
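For instance, a sketch using Guava's {{com.google.common.base.Preconditions}} 
(illustrative, not from the patch):

{code}
// Equivalent guard via Guava Preconditions; failures carry a descriptive
// message instead of a bare exception.
Preconditions.checkNotNull(b, "null read buffer");
Preconditions.checkPositionIndexes(off, off + len, b.length);
if (len == 0) {
  return 0;
}
{code}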

> Support Microsoft Azure Data Lake - as a file system in Hadoop
> --
>
> Key: HADOOP-12666
> URL: https://issues.apache.org/jira/browse/HADOOP-12666
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, tools
>Reporter: Vishwajeet Dusane
>Assignee: Vishwajeet Dusane
> Attachments: HADOOP-12666-002.patch, HADOOP-12666-003.patch, 
> HADOOP-12666-004.patch, HADOOP-12666-005.patch, HADOOP-12666-1.patch
>
>   Original Estimate: 336h
>  Time Spent: 336h
>  Remaining Estimate: 0h
>
> h2. Description
> This JIRA describes a new file system implementation for accessing Microsoft 
> Azure Data Lake Store (ADL) from within Hadoop. This would enable existing 
> Hadoop applications such as MR, Hive, HBase, etc., to use the ADL store as 
> input or output.
>  
> ADL is an ultra-high-capacity store, optimized for massive throughput, with 
> rich management and security features. More details are available at 
> https://azure.microsoft.com/en-us/services/data-lake-store/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-02-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12782:
-
Status: Patch Available  (was: Open)

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Attachments: HADOOP-12782.001.patch
>
>
> LDAP group name resolution works well in typical scenarios. However, we have 
> seen cases where a user is mapped to many groups (in an extreme case, a user 
> is mapped to more than 100 groups). The current implementation makes 
> resolving groups from ActiveDirectory very slow in this case.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user 
> DN is a member. If a user is mapped to many groups, the second query returns 
> all group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which holds the DNs of all group 
> objects the user belongs to. Assuming that an organization has no recursive 
> group relation (that is, no case where user A is a member of group G1, and 
> group G1 is in turn a member of group G2), we can use this property to avoid 
> the second query, which can potentially run very slow.
> I propose that we add a configuration to enable this feature only for users 
> who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)