[jira] [Updated] (HADOOP-14519) Client$Connection#waitForWork may suffer spurious wakeup

2017-06-09 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14519:

Attachment: HADOOP-14519.001.patch

Patch 001
* Convert the if block around wait() into a while loop (sketched below)
* Break out of the loop upon InterruptedException
* It is not a good idea to swallow InterruptedException; at least restore the 
interrupted status.
* I don't know how to write a unit test that catches the spurious wakeup
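
A minimal sketch of the guarded-wait pattern patch 001 applies, reusing the 
{{Client$Connection}} fields shown in the quoted code (the actual patch may 
differ in details):

{code:java}
synchronized (this) {
  // Re-check the condition after every wakeup: wait() may return spuriously.
  while (calls.isEmpty() && !shouldCloseConnection.get() && running.get()) {
    long timeout = maxIdleTime - (Time.now() - lastActivity.get());
    if (timeout <= 0) {
      break;  // idle long enough; the caller can close the connection
    }
    try {
      wait(timeout);
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();  // restore the interrupted status
      break;                               // and stop waiting
    }
  }
}
{code}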


> Client$Connection#waitForWork may suffer spurious wakeup
> 
>
> Key: HADOOP-14519
> URL: https://issues.apache.org/jira/browse/HADOOP-14519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-14519.001.patch
>
>
> {{Client$Connection#waitForWork}} may suffer spurious wakeup because the 
> {{wait}} is not surrounded by a loop. See 
> [https://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#wait()].
> {code:title=Client$Connection#waitForWork}
>   if (calls.isEmpty() && !shouldCloseConnection.get() && running.get())  {
> long timeout = maxIdleTime-
>   (Time.now()-lastActivity.get());
> if (timeout>0) {
>   try {
> wait(timeout);  << spurious wakeup
>   } catch (InterruptedException e) {}
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14519) Client$Connection#waitForWork may suffer spurious wakeup

2017-06-09 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14519:

Status: Patch Available  (was: Open)

> Client$Connection#waitForWork may suffer spurious wakeup
> 
>
> Key: HADOOP-14519
> URL: https://issues.apache.org/jira/browse/HADOOP-14519
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.0
>Reporter: John Zhuge
>Assignee: John Zhuge
>Priority: Critical
> Attachments: HADOOP-14519.001.patch
>
>
> {{Client$Connection#waitForWork}} may suffer spurious wakeup because the 
> {{wait}} is not surrounded by a loop. See 
> [https://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#wait()].
> {code:title=Client$Connection#waitForWork}
>   if (calls.isEmpty() && !shouldCloseConnection.get() && running.get())  {
> long timeout = maxIdleTime-
>   (Time.now()-lastActivity.get());
> if (timeout>0) {
>   try {
> wait(timeout);  << spurious wakeup
>   } catch (InterruptedException e) {}
> }
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-06-09 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045402#comment-16045402
 ] 

Sean Busbey commented on HADOOP-14284:
--

One gap that I'm not certain has a JIRA is that there isn't an IT that runs an 
MR job on the YARN minicluster provided by the shaded version of the 
minicluster jar.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be using the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.
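
For illustration, a hedged sketch of the kind of maven-shade-plugin relocation 
such shading relies on (the shaded-package prefix and plugin placement are 
assumptions, not the actual build change):

{code:xml}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <relocations>
      <relocation>
        <!-- Rewrite Guava packages so a downstream Guava version cannot clash -->
        <pattern>com.google.common</pattern>
        <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
      </relocation>
    </relocations>
  </configuration>
</plugin>
{code}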



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-06-09 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045401#comment-16045401
 ] 

Sean Busbey commented on HADOOP-14284:
--

We already have a shaded client for YARN/MR. There are some open issues against 
it, but fundamentally it works.

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be using the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14518) Customize User-Agent header sent in HTTP/HTTPS requests by WASB.

2017-06-09 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14518:
-
Description: 
WASB passes a User-Agent header to the Azure back-end. Right now, it uses the 
default value set by the Azure Client SDK, so Hadoop traffic doesn't appear any 
different from general Blob traffic. If we customize the User-Agent header, 
then it will enable better troubleshooting and analysis by the Azure service.

The following configuration
  <property>
    <name>fs.azure.user.agent.id</name>
    <value>MSFT</value>
  </property>

sets the User-Agent header to 
 User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 (JavaJRE 
1.8.0_131; WindowsServer2012R2 6.3)


Test Results :
Tests run: 703, Failures: 0, Errors: 0, Skipped: 119
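
A hypothetical sketch of how the configured id could be folded into the 
User-Agent value (only the key fs.azure.user.agent.id comes from the 
description; the composition shown is illustrative, and the Azure SDK appends 
its own Azure-Storage/JRE/OS tokens):

{code:java}
// Sketch, not the actual patch: compose the "WASB/<version> (<id>)" prefix.
Configuration conf = new Configuration();   // org.apache.hadoop.conf.Configuration
String agentId = conf.get("fs.azure.user.agent.id", "unknown");
String userAgentPrefix = String.format("WASB/%s (%s)",
    VersionInfo.getVersion(),               // org.apache.hadoop.util.VersionInfo
    agentId);
{code}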

  was:
WASB passes a User-Agent header to the Azure back-end. Right now, it uses the 
default value set by the Azure Client SDK, so Hadoop traffic doesn't appear any 
different from general Blob traffic. If we customize the User-Agent header, 
then it will enable better troubleshooting and analysis by the Azure service.

The following configuration
  <property>
    <name>fs.azure.user.agent.id</name>
    <value>MSFT</value>
  </property>

sets the User-Agent header to 
 User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 (JavaJRE 
1.8.0_131; WindowsServer2012R2 6.3)


> Customize User-Agent header sent in HTTP/HTTPS requests by WASB.
> 
>
> Key: HADOOP-14518
> URL: https://issues.apache.org/jira/browse/HADOOP-14518
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Priority: Minor
> Attachments: HADOOP-14518-01.patch, HADOOP-14518-01-test.txt
>
>
> WASB passes a User-Agent header to the Azure back-end. Right now, it uses the 
> default value set by the Azure Client SDK, so Hadoop traffic doesn't appear 
> any different from general Blob traffic. If we customize the User-Agent 
> header, then it will enable better troubleshooting and analysis by the Azure 
> service.
> The following configuration
>   <property>
>     <name>fs.azure.user.agent.id</name>
>     <value>MSFT</value>
>   </property>
> sets the User-Agent header to 
>  User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 
> (JavaJRE 1.8.0_131; WindowsServer2012R2 6.3)
> Test Results :
> Tests run: 703, Failures: 0, Errors: 0, Skipped: 119



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14429) FTPFileSystem#getFsAction always returns FsAction.NONE

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045393#comment-16045393
 ] 

Hadoop QA commented on HADOOP-14429:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
22s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} hadoop-common-project/hadoop-common generated 0 new 
+ 18 unchanged - 1 fixed = 18 total (was 19) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14429 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872403/HADOOP-14429-004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9900f6a5c90e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a2121cb |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12512/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12512/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12512/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> FTPFileSystem#getFsAction  always returns FsAction.NONE
> ---
>
> Key: HADOOP-14429
> URL: https://issues.apache.org/jira/browse/HADOOP-14429
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Attachments: HADOOP-14429-001.patch, HADOOP-14429-002.patch, 
> HADOOP-14429-003.patch, HADOOP-14429-004.patch
>
>
>   

[jira] [Updated] (HADOOP-14518) Customize User-Agent header sent in HTTP/HTTPS requests by WASB.

2017-06-09 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14518:
-
Attachment: HADOOP-14518-01-test.txt
HADOOP-14518-01.patch

> Customize User-Agent header sent in HTTP/HTTPS requests by WASB.
> 
>
> Key: HADOOP-14518
> URL: https://issues.apache.org/jira/browse/HADOOP-14518
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Priority: Minor
> Attachments: HADOOP-14518-01.patch, HADOOP-14518-01-test.txt
>
>
> WASB passes a User-Agent header to the Azure back-end. Right now, it uses the 
> default value set by the Azure Client SDK, so Hadoop traffic doesn't appear 
> any different from general Blob traffic. If we customize the User-Agent 
> header, then it will enable better troubleshooting and analysis by the Azure 
> service.
> The following configuration
>   <property>
>     <name>fs.azure.user.agent.id</name>
>     <value>MSFT</value>
>   </property>
> sets the User-Agent header to 
>  User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 
> (JavaJRE 1.8.0_131; WindowsServer2012R2 6.3)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14518) Customize User-Agent header sent in HTTP/HTTPS requests by WASB.

2017-06-09 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14518:
-
Description: 
WASB passes a User-Agent header to the Azure back-end. Right now, it uses the 
default value set by the Azure Client SDK, so Hadoop traffic doesn't appear any 
different from general Blob traffic. If we customize the User-Agent header, 
then it will enable better troubleshooting and analysis by the Azure service.

The following configuration
  <property>
    <name>fs.azure.user.agent.id</name>
    <value>MSFT</value>
  </property>

sets the User-Agent header to 
 User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 (JavaJRE 
1.8.0_131; WindowsServer2012R2 6.3)

  was:WASB passes a User-Agent header to the Azure back-end. Right now, it uses 
the default value set by the Azure Client SDK, so Hadoop traffic doesn't appear 
any different from general Blob traffic. If we customize the User-Agent header, 
then it will enable better troubleshooting and analysis by the Azure service.


> Customize User-Agent header sent in HTTP/HTTPS requests by WASB.
> 
>
> Key: HADOOP-14518
> URL: https://issues.apache.org/jira/browse/HADOOP-14518
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>Priority: Minor
>
> WASB passes a User-Agent header to the Azure back-end. Right now, it uses the 
> default value set by the Azure Client SDK, so Hadoop traffic doesn't appear 
> any different from general Blob traffic. If we customize the User-Agent 
> header, then it will enable better troubleshooting and analysis by the Azure 
> service.
> The following configuration
>   <property>
>     <name>fs.azure.user.agent.id</name>
>     <value>MSFT</value>
>   </property>
> sets the User-Agent header to 
>  User-Agent: WASB/3.0.0-alpha4-SNAPSHOT (MSFT) Azure-Storage/4.2.0 
> (JavaJRE 1.8.0_131; WindowsServer2012R2 6.3)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14469) FTPFileSystem#listStatus get currentPath and parentPath, which will cause recursively list stuck

2017-06-09 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14469:
-
Environment: 
ftp build by windows7 + Serv-U_64 12.1.0.8 
code runs any os

  was:
ftp build by windows7 + Serv-U_6412.1.0.8 
code runs any os


> FTPFileSystem#listStatus get currentPath and parentPath, which will cause 
> recursively list stuck
> 
>
> Key: HADOOP-14469
> URL: https://issues.apache.org/jira/browse/HADOOP-14469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha3
> Environment: ftp build by windows7 + Serv-U_64 12.1.0.8 
> code runs any os
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14469-001.patch, HADOOP-14469-002.patch, 
> HADOOP-14469-003.patch
>
>
> For some FTP servers (for example, Serv-U), the listStatus method will return 
> new Path(".") and new Path(".."), thus causing the list operation to loop.
> We can see the logic in the code below:
> {code}
>   private FileStatus[] listStatus(FTPClient client, Path file)
>   throws IOException {
> ……
> FileStatus[] fileStats = new FileStatus[ftpFiles.length];
> for (int i = 0; i < ftpFiles.length; i++) {
>   fileStats[i] = getFileStatus(ftpFiles[i], absolute);
> }
> return fileStats;
>   }
> {code}
> {code}
> public void test() throws Exception{
> FTPFileSystem ftpFileSystem = new FTPFileSystem();
> ftpFileSystem.initialize(new 
> Path("ftp://test:123456@192.168.44.1/;).toUri(),
> new Configuration());
> FileStatus[] fileStatus  = ftpFileSystem.listStatus(new Path("/new"));
> for(FileStatus fileStatus1 : fileStatus)
>   System.out.println(fileStatus1);
> }
> {code}
> Using the test code above, the results are listed below:
> {code}
> FileStatus{path=ftp://test:123456@192.168.44.1/new; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14431-002.patch; 
> isDirectory=false; length=2036; replication=1; blocksize=4096; 
> modification_time=149579778; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14486-001.patch; 
> isDirectory=false; length=1322; replication=1; blocksize=4096; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop-main; 
> isDirectory=true; modification_time=149579712; access_time=0; owner=user; 
> group=group; permission=-; isSymlink=false}
> {code}
> In the results above, {{FileStatus{path=ftp://test:123456@192.168.44.1/new; ……}} 
> is obviously the current path, and 
> {{FileStatus{path=ftp://test:123456@192.168.44.1/;…… }} is obviously the 
> parent path.
> So, if we walk the directory recursively, the walk will get stuck.
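
A minimal sketch of the kind of filtering that avoids the loop, in the 
{{FTPFileSystem#listStatus}} context quoted above (names such as {{pathName}} 
and {{absolute}} follow that context; the actual patch may differ):

{code:java}
FTPFile[] ftpFiles = client.listFiles(pathName);
// java.util.List / java.util.ArrayList
List<FileStatus> fileStats = new ArrayList<>();
for (FTPFile ftpFile : ftpFiles) {
  String name = ftpFile.getName();
  if (".".equals(name) || "..".equals(name)) {
    continue;  // skip the current/parent directory entries some servers return
  }
  fileStats.add(getFileStatus(ftpFile, absolute));
}
return fileStats.toArray(new FileStatus[0]);
{code}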



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure using openJDk 1.8.0

2017-06-09 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045378#comment-16045378
 ] 

Hongyuan Li edited comment on HADOOP-14486 at 6/10/17 4:23 AM:
---

[~ste...@apache.org] I have submitted the patch and the test environment in a 
nearby comment. Could you find time to review it?


was (Author: hongyuan li):
Steve Loughran I have submitted the patch and the test environment in a nearby 
comment. Could you find time to review it?

> TestSFTPFileSystem#testGetAccessTime test failure using openJDk 1.8.0  
> ---
>
> Key: HADOOP-14486
> URL: https://issues.apache.org/jira/browse/HADOOP-14486
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
> Environment: Ubuntu 14.04 
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>Assignee: Hongyuan Li
> Attachments: HADOOP-14486-001.patch
>
>
> The TestSFTPFileSystem#testGetAccessTime test fails consistently with the 
> error below:
> {code}
> java.lang.AssertionError: expected:<1496496040072> but was:<149649604>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319)
> {code}
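
The expected value is millisecond-precise while the actual value looks 
second-granular, so a plausible adjustment, assuming the SFTP channel only 
preserves second precision for access times (a sketch with illustrative 
variable names, not necessarily the actual patch), is to truncate before 
comparing:

{code:java}
// JUnit 4 context, as in TestSFTPFileSystem. Compare access times at
// second granularity, since SFTP may drop the millisecond component.
long expectedAccessTime = (accessTime1 / 1000) * 1000;  // truncate millis
assertEquals(expectedAccessTime, accessTime2);
{code}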



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14429) FTPFileSystem#getFsAction always returns FsAction.NONE

2017-06-09 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14429:
-
Summary: FTPFileSystem#getFsAction  always returns FsAction.NONE  (was: 
FTPFileSystem#getFsAction  always returned FsAction.NONE)

> FTPFileSystem#getFsAction  always returns FsAction.NONE
> ---
>
> Key: HADOOP-14429
> URL: https://issues.apache.org/jira/browse/HADOOP-14429
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Attachments: HADOOP-14429-001.patch, HADOOP-14429-002.patch, 
> HADOOP-14429-003.patch, HADOOP-14429-004.patch
>
>
>   
> {code}
> private FsAction getFsAction(int accessGroup, FTPFile ftpFile) {
>   FsAction action = FsAction.NONE;
>   if (ftpFile.hasPermission(accessGroup, FTPFile.READ_PERMISSION)) {
>   action.or(FsAction.READ);
>   }
> if (ftpFile.hasPermission(accessGroup, FTPFile.WRITE_PERMISSION)) {
>   action.or(FsAction.WRITE);
> }
> if (ftpFile.hasPermission(accessGroup, FTPFile.EXECUTE_PERMISSION)) {
>   action.or(FsAction.EXECUTE);
> }
> return action;
>   }
> {code}
> From the code above, we can see that the getFsAction method never modifies the 
> action initialized by {{FsAction action = FsAction.NONE}}: {{FsAction#or}} 
> returns a new value rather than mutating its receiver, and the result is 
> discarded, so the method returns {{FsAction.NONE}} every time.
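
A minimal sketch of the fix the description implies: assign the result of 
{{or}} back to {{action}} (this mirrors the quoted method; the actual patch 
may differ in details):

{code:java}
private FsAction getFsAction(int accessGroup, FTPFile ftpFile) {
  FsAction action = FsAction.NONE;
  // FsAction#or returns a new enum value, so it must be reassigned.
  if (ftpFile.hasPermission(accessGroup, FTPFile.READ_PERMISSION)) {
    action = action.or(FsAction.READ);
  }
  if (ftpFile.hasPermission(accessGroup, FTPFile.WRITE_PERMISSION)) {
    action = action.or(FsAction.WRITE);
  }
  if (ftpFile.hasPermission(accessGroup, FTPFile.EXECUTE_PERMISSION)) {
    action = action.or(FsAction.EXECUTE);
  }
  return action;
}
{code}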



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14429) FTPFileSystem#getFsAction always returned FsAction.NONE

2017-06-09 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14429:
-
Summary: FTPFileSystem#getFsAction  always returned FsAction.NONE  (was: 
getFsAction method of FTPFileSystem  always returned FsAction.NONE)

> FTPFileSystem#getFsAction  always returned FsAction.NONE
> 
>
> Key: HADOOP-14429
> URL: https://issues.apache.org/jira/browse/HADOOP-14429
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Attachments: HADOOP-14429-001.patch, HADOOP-14429-002.patch, 
> HADOOP-14429-003.patch, HADOOP-14429-004.patch
>
>
>   
> {code}
> private FsAction getFsAction(int accessGroup, FTPFile ftpFile) {
>   FsAction action = FsAction.NONE;
>   if (ftpFile.hasPermission(accessGroup, FTPFile.READ_PERMISSION)) {
>   action.or(FsAction.READ);
>   }
> if (ftpFile.hasPermission(accessGroup, FTPFile.WRITE_PERMISSION)) {
>   action.or(FsAction.WRITE);
> }
> if (ftpFile.hasPermission(accessGroup, FTPFile.EXECUTE_PERMISSION)) {
>   action.or(FsAction.EXECUTE);
> }
> return action;
>   }
> {code}
> From the code above, we can see that the getFsAction method never modifies the 
> action initialized by {{FsAction action = FsAction.NONE}}: {{FsAction#or}} 
> returns a new value rather than mutating its receiver, and the result is 
> discarded, so the method returns {{FsAction.NONE}} every time.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14455) ViewFileSystem#rename should be supported within the same nameservice with different mountpoints

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045379#comment-16045379
 ] 

Hadoop QA commented on HADOOP-14455:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
30s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
42s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
47s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 10m 47s{color} 
| {color:red} root generated 2 new + 787 unchanged - 0 fixed = 789 total (was 
787) {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  1s{color} | {color:orange} root: The patch generated 2 new + 263 unchanged 
- 3 fixed = 265 total (was 266) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
2s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 63m 54s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
43s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}155m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestDFSUpgradeWithHA |
|   | hadoop.hdfs.server.namenode.ha.TestInitializeSharedEdits |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14455 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872396/HADOOP-14455-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c193e91c69dc 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / a2121cb |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12511/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| javac | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12511/artifact/patchprocess/diff-compile-javac-root.txt
 |
| checkstyle | 

[jira] [Commented] (HADOOP-14486) TestSFTPFileSystem#testGetAccessTime test failure using openJDk 1.8.0

2017-06-09 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14486?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045378#comment-16045378
 ] 

Hongyuan Li commented on HADOOP-14486:
--

Steve Loughran i have submit the patch and the test environment in near 
comment. Any time to review it ?

> TestSFTPFileSystem#testGetAccessTime test failure using openJDk 1.8.0  
> ---
>
> Key: HADOOP-14486
> URL: https://issues.apache.org/jira/browse/HADOOP-14486
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0-alpha4
> Environment: Ubuntu 14.04 
> x86, ppc64le
> $ java -version
> openjdk version "1.8.0_111"
> OpenJDK Runtime Environment (build 1.8.0_111-8u111-b14-3~14.04.1-b14)
> OpenJDK 64-Bit Server VM (build 25.111-b14, mixed mode)
>Reporter: Sonia Garudi
>Assignee: Hongyuan Li
> Attachments: HADOOP-14486-001.patch
>
>
> The TestSFTPFileSystem#testGetAccessTime test fails consistently with the 
> error below:
> {code}
> java.lang.AssertionError: expected:<1496496040072> but was:<149649604>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at 
> org.apache.hadoop.fs.sftp.TestSFTPFileSystem.testGetAccessTime(TestSFTPFileSystem.java:319)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14470) CommandWithDestination#create used redundant ternary operator

2017-06-09 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14470:
-
Summary: CommandWithDestination#create used redundant ternary operator
(was: redundant ternary operator  in create method of class 
CommandWithDestination)

> CommandWithDestination#create used redundant ternary operator  
> ---
>
> Key: HADOOP-14470
> URL: https://issues.apache.org/jira/browse/HADOOP-14470
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Attachments: HADOOP-14470-001.patch
>
>
> Inside the if statement, lazyPersist is always true, so the ternary operator 
> is redundant: with {{lazyPersist == true}}, {{lazyPersist ? 1 : 
> getDefaultReplication(item.path)}} always evaluates to 1.
>   The related code is below, from 
> {{org.apache.hadoop.fs.shell.CommandWithDestination}}, line 504:
> {code:java}
>FSDataOutputStream create(PathData item, boolean lazyPersist,
> boolean direct)
> throws IOException {
>   try {
> if (lazyPersist) { // in this if statement, lazyPersist is always true
>   ……
>   return create(item.path,
> FsPermission.getFileDefault().applyUMask(
> FsPermission.getUMask(getConf())),
> createFlags,
> getConf().getInt(IO_FILE_BUFFER_SIZE_KEY,
> IO_FILE_BUFFER_SIZE_DEFAULT),
> lazyPersist ? 1 : getDefaultReplication(item.path), 
> // *this is redundant*
> getDefaultBlockSize(),
> null,
> null);
> } else {
>   return create(item.path, true);
> }
>   } finally { // might have been created but stream was interrupted
> if (!direct) {
>   deleteOnExit(item.path);
> }
>   }
> }
> {code}
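
The simplification implied by the title is then mechanical; a sketch of the 
same call with the dead ternary removed (context as in the quoted method, so 
this fragment is not standalone):

{code:java}
return create(item.path,
    FsPermission.getFileDefault().applyUMask(
        FsPermission.getUMask(getConf())),
    createFlags,
    getConf().getInt(IO_FILE_BUFFER_SIZE_KEY,
        IO_FILE_BUFFER_SIZE_DEFAULT),
    (short) 1,  // lazyPersist is always true in this branch
    getDefaultBlockSize(),
    null,
    null);
{code}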



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14429) getFsAction method of FTPFileSystem always returned FsAction.NONE

2017-06-09 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045374#comment-16045374
 ] 

Hongyuan Li commented on HADOOP-14429:
--

[~yzhangal] I resubmitted the patch as you suggested.

> getFsAction method of FTPFileSystem  always returned FsAction.NONE
> --
>
> Key: HADOOP-14429
> URL: https://issues.apache.org/jira/browse/HADOOP-14429
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Attachments: HADOOP-14429-001.patch, HADOOP-14429-002.patch, 
> HADOOP-14429-003.patch, HADOOP-14429-004.patch
>
>
>   
> {code}
> private FsAction getFsAction(int accessGroup, FTPFile ftpFile) {
>   FsAction action = FsAction.NONE;
>   if (ftpFile.hasPermission(accessGroup, FTPFile.READ_PERMISSION)) {
>   action.or(FsAction.READ);
>   }
> if (ftpFile.hasPermission(accessGroup, FTPFile.WRITE_PERMISSION)) {
>   action.or(FsAction.WRITE);
> }
> if (ftpFile.hasPermission(accessGroup, FTPFile.EXECUTE_PERMISSION)) {
>   action.or(FsAction.EXECUTE);
> }
> return action;
>   }
> {code}
> From the code above, we can see that the getFsAction method never modifies the 
> action initialized by {{FsAction action = FsAction.NONE}}: {{FsAction#or}} 
> returns a new value rather than mutating its receiver, and the result is 
> discarded, so the method returns {{FsAction.NONE}} every time.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14429) getFsAction method of FTPFileSystem always returned FsAction.NONE

2017-06-09 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14429?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14429:
-
Attachment: HADOOP-14429-004.patch

> getFsAction method of FTPFileSystem  always returned FsAction.NONE
> --
>
> Key: HADOOP-14429
> URL: https://issues.apache.org/jira/browse/HADOOP-14429
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha2
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
>Priority: Trivial
> Attachments: HADOOP-14429-001.patch, HADOOP-14429-002.patch, 
> HADOOP-14429-003.patch, HADOOP-14429-004.patch
>
>
>   
> {code}
> private FsAction getFsAction(int accessGroup, FTPFile ftpFile) {
>   FsAction action = FsAction.NONE;
>   if (ftpFile.hasPermission(accessGroup, FTPFile.READ_PERMISSION)) {
>   action.or(FsAction.READ);
>   }
> if (ftpFile.hasPermission(accessGroup, FTPFile.WRITE_PERMISSION)) {
>   action.or(FsAction.WRITE);
> }
> if (ftpFile.hasPermission(accessGroup, FTPFile.EXECUTE_PERMISSION)) {
>   action.or(FsAction.EXECUTE);
> }
> return action;
>   }
> {code}
> From the code above, we can see that the getFsAction method never modifies the 
> action initialized by {{FsAction action = FsAction.NONE}}: {{FsAction#or}} 
> returns a new value rather than mutating its receiver, and the result is 
> discarded, so the method returns {{FsAction.NONE}} every time.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14469) FTPFileSystem#listStatus get currentPath and parentPath, which will cause recursively list stuck

2017-06-09 Thread Hongyuan Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045372#comment-16045372
 ] 

Hongyuan Li commented on HADOOP-14469:
--

[~ste...@apache.org] added test code and environment.

> FTPFileSystem#listStatus get currentPath and parentPath, which will cause 
> recursively list stuck
> 
>
> Key: HADOOP-14469
> URL: https://issues.apache.org/jira/browse/HADOOP-14469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha3
> Environment: ftp build by windows7 + Serv-U_6412.1.0.8 
> code runs any os
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14469-001.patch, HADOOP-14469-002.patch, 
> HADOOP-14469-003.patch
>
>
> For some FTP servers (for example, Serv-U), the listStatus method will return 
> new Path(".") and new Path(".."), thus causing the list operation to loop.
> We can see the logic in the code below:
> {code}
>   private FileStatus[] listStatus(FTPClient client, Path file)
>   throws IOException {
> ……
> FileStatus[] fileStats = new FileStatus[ftpFiles.length];
> for (int i = 0; i < ftpFiles.length; i++) {
>   fileStats[i] = getFileStatus(ftpFiles[i], absolute);
> }
> return fileStats;
>   }
> {code}
> {code}
> public void test() throws Exception{
> FTPFileSystem ftpFileSystem = new FTPFileSystem();
> ftpFileSystem.initialize(new 
> Path("ftp://test:123456@192.168.44.1/;).toUri(),
> new Configuration());
> FileStatus[] fileStatus  = ftpFileSystem.listStatus(new Path("/new"));
> for(FileStatus fileStatus1 : fileStatus)
>   System.out.println(fileStatus1);
> }
> {code}
> Using the test code above, the results are listed below:
> {code}
> FileStatus{path=ftp://test:123456@192.168.44.1/new; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14431-002.patch; 
> isDirectory=false; length=2036; replication=1; blocksize=4096; 
> modification_time=149579778; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14486-001.patch; 
> isDirectory=false; length=1322; replication=1; blocksize=4096; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> permission=-; isSymlink=false}
> FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop-main; 
> isDirectory=true; modification_time=149579712; access_time=0; owner=user; 
> group=group; permission=-; isSymlink=false}
> {code}
> In the results above, {{FileStatus{path=ftp://test:123456@192.168.44.1/new; ……}} 
> is obviously the current path, and 
> {{FileStatus{path=ftp://test:123456@192.168.44.1/;…… }} is obviously the 
> parent path.
> So, if we walk the directory recursively, the walk will get stuck.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14469) FTPFileSystem#listStatus get currentPath and parentPath, which will cause recursively list stuck

2017-06-09 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14469:
-
Description: 
For some FTP servers (for example, Serv-U), the listStatus method will return 
new Path(".") and new Path(".."), thus causing the list operation to loop.
We can see the logic in the code below:

{code}
  private FileStatus[] listStatus(FTPClient client, Path file)
  throws IOException {
……
FileStatus[] fileStats = new FileStatus[ftpFiles.length];
for (int i = 0; i < ftpFiles.length; i++) {
  fileStats[i] = getFileStatus(ftpFiles[i], absolute);
}
return fileStats;
  }
{code}


{code}
public void test() throws Exception{
FTPFileSystem ftpFileSystem = new FTPFileSystem();
ftpFileSystem.initialize(new 
Path("ftp://test:123456@192.168.44.1/;).toUri(),
new Configuration());
FileStatus[] fileStatus  = ftpFileSystem.listStatus(new Path("/new"));
for(FileStatus fileStatus1 : fileStatus)
  System.out.println(fileStatus1);
}
{code}
Using the test code above, the results are listed below:
{code}
FileStatus{path=ftp://test:123456@192.168.44.1/new; isDirectory=true; 
modification_time=149671698; access_time=0; owner=user; group=group; 
permission=-; isSymlink=false}
FileStatus{path=ftp://test:123456@192.168.44.1/; isDirectory=true; 
modification_time=149671698; access_time=0; owner=user; group=group; 
permission=-; isSymlink=false}
FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop; isDirectory=true; 
modification_time=149671698; access_time=0; owner=user; group=group; 
permission=-; isSymlink=false}
FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14431-002.patch; 
isDirectory=false; length=2036; replication=1; blocksize=4096; 
modification_time=149579778; access_time=0; owner=user; group=group; 
permission=-; isSymlink=false}
FileStatus{path=ftp://test:123456@192.168.44.1/new/HADOOP-14486-001.patch; 
isDirectory=false; length=1322; replication=1; blocksize=4096; 
modification_time=149671698; access_time=0; owner=user; group=group; 
permission=-; isSymlink=false}
FileStatus{path=ftp://test:123456@192.168.44.1/new/hadoop-main; 
isDirectory=true; modification_time=149579712; access_time=0; owner=user; 
group=group; permission=-; isSymlink=false}
{code}
In the results above, {{FileStatus{path=ftp://test:123456@192.168.44.1/new; ……}} is 
obviously the current path, and 
{{FileStatus{path=ftp://test:123456@192.168.44.1/;…… }} is obviously the parent 
path.
So, if we walk the directory recursively, the walk will get stuck.

  was:
For some FTP servers (for example, Serv-U), the listStatus method will return 
new Path(".") and new Path(".."), thus causing the list operation to loop.
We can see the logic in the code below:

{code}
  private FileStatus[] listStatus(FTPClient client, Path file)
  throws IOException {
……
FileStatus[] fileStats = new FileStatus[ftpFiles.length];
for (int i = 0; i < ftpFiles.length; i++) {
  fileStats[i] = getFileStatus(ftpFiles[i], absolute);
}
return fileStats;
  }
{code}


> FTPFileSystem#listStatus get currentPath and parentPath, which will cause 
> recursively list stuck
> 
>
> Key: HADOOP-14469
> URL: https://issues.apache.org/jira/browse/HADOOP-14469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha3
> Environment: ftp build by windows7 + Serv-U_6412.1.0.8 
> code runs any os
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14469-001.patch, HADOOP-14469-002.patch, 
> HADOOP-14469-003.patch
>
>
> For some FTP servers (for example, Serv-U), the listStatus method will return 
> new Path(".") and new Path(".."), thus causing the list operation to loop.
> We can see the logic in the code below:
> {code}
>   private FileStatus[] listStatus(FTPClient client, Path file)
>   throws IOException {
> ……
> FileStatus[] fileStats = new FileStatus[ftpFiles.length];
> for (int i = 0; i < ftpFiles.length; i++) {
>   fileStats[i] = getFileStatus(ftpFiles[i], absolute);
> }
> return fileStats;
>   }
> {code}
> {code}
> public void test() throws Exception{
> FTPFileSystem ftpFileSystem = new FTPFileSystem();
> ftpFileSystem.initialize(new 
> Path("ftp://test:123456@192.168.44.1/;).toUri(),
> new Configuration());
> FileStatus[] fileStatus  = ftpFileSystem.listStatus(new Path("/new"));
> for(FileStatus fileStatus1 : fileStatus)
>   System.out.println(fileStatus1);
> }
> {code}
> Using the test code above, the results are listed below:
> {code}
> FileStatus{path=ftp://test:123456@192.168.44.1/new; isDirectory=true; 
> modification_time=149671698; access_time=0; owner=user; group=group; 
> 

[jira] [Updated] (HADOOP-14469) FTPFileSystem#listStatus get currentPath and parentPath, which will cause recursively list stuck

2017-06-09 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14469:
-
Summary: FTPFileSystem#listStatus get currentPath and parentPath, which 
will cause recursively list stuck  (was: FTPFileSystem#listStatus get 
currentPath and parentPath, which is wrong)

> FTPFileSystem#listStatus get currentPath and parentPath, which will cause 
> recursively list stuck
> 
>
> Key: HADOOP-14469
> URL: https://issues.apache.org/jira/browse/HADOOP-14469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha3
> Environment: ftp build by windows7 + Serv-U_6412.1.0.8 
> code runs any os
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14469-001.patch, HADOOP-14469-002.patch, 
> HADOOP-14469-003.patch
>
>
> For some FTP servers (for example, Serv-U), the listStatus method will return 
> new Path(".") and new Path(".."), thus causing the list operation to loop.
> We can see the logic in the code below:
> {code}
>   private FileStatus[] listStatus(FTPClient client, Path file)
>   throws IOException {
> ……
> FileStatus[] fileStats = new FileStatus[ftpFiles.length];
> for (int i = 0; i < ftpFiles.length; i++) {
>   fileStats[i] = getFileStatus(ftpFiles[i], absolute);
> }
> return fileStats;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14469) FTPFileSystem#listStatus get currentPath and parentPath, which is wrong

2017-06-09 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14469:
-
Environment: 
ftp build by windows7 + Serv-U_6412.1.0.8 
code runs any os

> FTPFileSystem#listStatus get currentPath and parentPath, which is wrong
> ---
>
> Key: HADOOP-14469
> URL: https://issues.apache.org/jira/browse/HADOOP-14469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha3
> Environment: ftp build by windows7 + Serv-U_6412.1.0.8 
> code runs any os
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14469-001.patch, HADOOP-14469-002.patch, 
> HADOOP-14469-003.patch
>
>
> For some FTP servers (for example, Serv-U), the listStatus method will return 
> new Path(".") and new Path(".."), thus causing the list operation to loop.
> We can see the logic in the code below:
> {code}
>   private FileStatus[] listStatus(FTPClient client, Path file)
>   throws IOException {
> ……
> FileStatus[] fileStats = new FileStatus[ftpFiles.length];
> for (int i = 0; i < ftpFiles.length; i++) {
>   fileStats[i] = getFileStatus(ftpFiles[i], absolute);
> }
> return fileStats;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14469) FTPFileSystem#listStatus get currentPath and parentPath, which is wrong

2017-06-09 Thread Hongyuan Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hongyuan Li updated HADOOP-14469:
-
Summary: FTPFileSystem#listStatus get currentPath and parentPath, which is 
wrong  (was: the listStatus method of FTPFileSystem should filter the path "."  
and "..")

> FTPFileSystem#listStatus get currentPath and parentPath, which is wrong
> ---
>
> Key: HADOOP-14469
> URL: https://issues.apache.org/jira/browse/HADOOP-14469
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Hongyuan Li
>Assignee: Hongyuan Li
> Attachments: HADOOP-14469-001.patch, HADOOP-14469-002.patch, 
> HADOOP-14469-003.patch
>
>
> For some FTP servers (for example, Serv-U), the listStatus method will return 
> new Path(".") and new Path(".."), thus causing the list operation to loop.
> We can see the logic in the code below:
> {code}
>   private FileStatus[] listStatus(FTPClient client, Path file)
>   throws IOException {
> ……
> FileStatus[] fileStats = new FileStatus[ftpFiles.length];
> for (int i = 0; i < ftpFiles.length; i++) {
>   fileStats[i] = getFileStatus(ftpFiles[i], absolute);
> }
> return fileStats;
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14497) Logs for KMS delegation token lifecycle

2017-06-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen resolved HADOOP-14497.

   Resolution: Done
Fix Version/s: 3.0.0-alpha4
   2.9.0

All sub-tasks are done and the items mentioned in the description are complete. 
Specifically:
#1 is improved by subtask 4
#4 is added by subtask 2
#2 and #3 already exist.

So I'm closing this JIRA as done. Thank you for reporting, [~yzhangal], and 
feel free to reopen or comment if you think there's anything else we should do 
here!

> Logs for KMS delegation token lifecycle
> ---
>
> Key: HADOOP-14497
> URL: https://issues.apache.org/jira/browse/HADOOP-14497
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yongjun Zhang
>Assignee: Xiao Chen
> Fix For: 2.9.0, 3.0.0-alpha4
>
>
> We run into quite a few customer cases about authentication failures related 
> to KMS delegation tokens. It would be nice to see a log for each stage of the 
> token:
> 1. creation
> 2. renewal
> 3. removal upon cancel
> 4. remove upon expiration
> So that when we correlate the logs for the same DT, we can have a good 
> picture about what's going on, and what could have caused the authentication 
> failure.
> The same is applicable to other delegation tokens.
> NOTE: When logging info about delegation tokens, we don't want to leak the 
> user's secret info.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14519) Client$Connection#waitForWork may suffer spurious wakeup

2017-06-09 Thread John Zhuge (JIRA)
John Zhuge created HADOOP-14519:
---

 Summary: Client$Connection#waitForWork may suffer spurious wakeup
 Key: HADOOP-14519
 URL: https://issues.apache.org/jira/browse/HADOOP-14519
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.8.0
Reporter: John Zhuge
Assignee: John Zhuge
Priority: Critical


{{Client$Connection#waitForWork}} may suffer spurious wakeup because the 
{{wait}} is not surrounded by a loop. See 
[https://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#wait()].

{code:title=Client$Connection#waitForWork}
  if (calls.isEmpty() && !shouldCloseConnection.get() && running.get())  {
long timeout = maxIdleTime-
  (Time.now()-lastActivity.get());
if (timeout>0) {
  try {
wait(timeout);  << spurious wakeup
  } catch (InterruptedException e) {}
}
  }
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14513) A little performance improvement of HarFileSystem

2017-06-09 Thread hu xiaodong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16045358#comment-16045358
 ] 

hu xiaodong commented on HADOOP-14513:
--

Hi [~raviprak]!
Sorry, I have no benchmarks / profiles, and I don't know if the JVM already 
optimizes this. If the JVM already optimizes it, I will close the issue.
Thanks.

> A little performance improvement of HarFileSystem
> -
>
> Key: HADOOP-14513
> URL: https://issues.apache.org/jira/browse/HADOOP-14513
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Trivial
> Attachments: HADOOP-14513.001.patch
>
>
> In the Java source of HarFileSystem.java:
> {code:title=HarFileSystem.java|borderStyle=solid}
> ...
> ...
> private Path archivePath(Path p) {
> Path retPath = null;
> Path tmp = p;
> 
> // I think p.depth() need not be re-evaluated on every loop iteration; 
> depth() is a complex calculation
> for (int i=0; i< p.depth(); i++) {
>   if (tmp.toString().endsWith(".har")) {
> retPath = tmp;
> break;
>   }
>   tmp = tmp.getParent();
> }
> return retPath;
>   }
> ...
> ...
> {code}
>  
> I think the following is more suitable:
> {code:title=HarFileSystem.java|borderStyle=solid}
> ...
> ...
> private Path archivePath(Path p) {
> Path retPath = null;
> Path tmp = p;
> 
> // depth() is computed only once
> for (int i=0,depth=p.depth(); i< depth; i++) {
>   if (tmp.toString().endsWith(".har")) {
> retPath = tmp;
> break;
>   }
>   tmp = tmp.getParent();
> }
> return retPath;
>   }
> ...
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14518) Customize User-Agent header sent in HTTP/HTTPS requests by WASB.

2017-06-09 Thread Georgi Chalakov (JIRA)
Georgi Chalakov created HADOOP-14518:


 Summary: Customize User-Agent header sent in HTTP/HTTPS requests 
by WASB.
 Key: HADOOP-14518
 URL: https://issues.apache.org/jira/browse/HADOOP-14518
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 3.0.0-alpha3
Reporter: Georgi Chalakov
Priority: Minor


WASB passes a User-Agent header to the Azure back-end. Right now, it uses the 
default value set by the Azure Client SDK, so Hadoop traffic doesn't appear any 
different from general Blob traffic. If we customize the User-Agent header, 
then it will enable better troubleshooting and analysis by the Azure service.
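
A minimal sketch of how a custom product token could be attached through the 
SDK's {{OperationContext}} is below. The helper and header value are 
illustrative, and whether the SDK honors a User-Agent supplied via user headers 
(rather than appending its own) is an assumption to verify against the SDK 
version in use.

{code:title=Illustrative sketch (not the proposed patch)}
import java.util.HashMap;

import com.microsoft.azure.storage.OperationContext;

public final class WasbUserAgentExample {
  // Hypothetical helper: build an OperationContext carrying a WASB-specific
  // User-Agent so Azure-side logs can distinguish Hadoop traffic.
  public static OperationContext withHadoopUserAgent(String hadoopVersion) {
    OperationContext ctx = new OperationContext();
    HashMap<String, String> headers = new HashMap<>();
    headers.put("User-Agent", "WASB/" + hadoopVersion + " (Hadoop)");
    ctx.setUserHeaders(headers);
    return ctx;
  }
}
{code}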



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14394) Provide Builder pattern for DistributedFileSystem.create

2017-06-09 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045343#comment-16045343
 ] 

Lei (Eddy) Xu commented on HADOOP-14394:


{{TestDFSStripedOutputStreamWithFailureWithRandomECPolicy}} fails on trunk, as 
reported in HDFS-11964.

The other failures are unrelated or flaky, and passed on my laptop.

> Provide Builder pattern for DistributedFileSystem.create
> 
>
> Key: HADOOP-14394
> URL: https://issues.apache.org/jira/browse/HADOOP-14394
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14394.00.patch, HADOOP-14394.01.patch, 
> HADOOP-14394.02.patch, HADOOP-14394.03.patch, HADOOP-14394.04.patch, 
> HADOOP-14394.05.patch
>
>
> This JIRA continues to refine the {{FSOutputStreamBuilder}} interface 
> introduced in HDFS-11170. 
> It should also provide a spec for the Builder API.
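
For context, a sketch of how the fluent create() path reads once the builder is 
in place; the method names follow this patch series and should be verified 
against the committed API:

{code:title=Illustrative builder usage}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

static void writeDemo(FileSystem fs) throws IOException {
  try (FSDataOutputStream out = fs.createFile(new Path("/tmp/demo"))
      .overwrite(true)
      .replication((short) 3)
      .blockSize(128 << 20)     // 128 MB
      .recursive()              // create missing parent directories
      .build()) {
    out.writeUTF("hello");
  }
}
{code}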



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14517) Fix TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure failure

2017-06-09 Thread Lei (Eddy) Xu (JIRA)
Lei (Eddy) Xu created HADOOP-14517:
--

 Summary: Fix 
TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure failure
 Key: HADOOP-14517
 URL: https://issues.apache.org/jira/browse/HADOOP-14517
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0-alpha3
Reporter: Lei (Eddy) Xu


TestDFSStripedInputStreamWithRandomECPolicy#testPreadWithDNFailure fails on 
trunk:


{code}
Running org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
Tests run: 5, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 10.99 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy
testPreadWithDNFailure(org.apache.hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy)
  Time elapsed: 1.265 sec  <<< FAILURE!
org.junit.internal.ArrayComparisonFailure: arrays first differed at element 
[327680]; expected:<-36> but was:<2>
at 
org.junit.internal.ComparisonCriteria.arrayEquals(ComparisonCriteria.java:50)
at org.junit.Assert.internalArrayEquals(Assert.java:473)
at org.junit.Assert.assertArrayEquals(Assert.java:294)
at org.junit.Assert.assertArrayEquals(Assert.java:305)
at 
org.apache.hadoop.hdfs.TestDFSStripedInputStream.testPreadWithDNFailure(TestDFSStripedInputStream.java:306)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14455) ViewFileSystem#rename should be supported within the same nameservice with different mountpoints

2017-06-09 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-14455:
--
Attachment: HADOOP-14455-002.patch

Uploaded the patch to address the test failures and checkstyle issues.

> ViewFileSystem#rename should be supported within the same nameservice with 
> different mountpoints
> 
>
> Key: HADOOP-14455
> URL: https://issues.apache.org/jira/browse/HADOOP-14455
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: viewfs
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Attachments: HADOOP-14455-002.patch, HADOOP-14455.patch
>
>
> *Scenario:* 
> || Mount Point || NameService || Value ||
> |/tmp|hacluster|/tmp|
> |/user|hacluster|/user|
> Move file from {{/tmp}} to {{/user}}
> It will fail by throwing the following error
> {noformat}
> Caused by: java.io.IOException: Renames across Mount points not supported
> at 
> org.apache.hadoop.fs.viewfs.ViewFileSystem.rename(ViewFileSystem.java:500)
> at org.apache.hadoop.hive.ql.metadata.Hive.moveFile(Hive.java:2692)
> ... 22 more
> {noformat}
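
For context, the mount table in the scenario above could be declared as in the 
sketch below; {{ClusterX}} is a placeholder mount-table name:

{code:title=Illustrative mount-table setup}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Two mount points backed by the same nameservice; rename between them
// currently fails with the error quoted above.
conf.set("fs.viewfs.mounttable.ClusterX.link./tmp", "hdfs://hacluster/tmp");
conf.set("fs.viewfs.mounttable.ClusterX.link./user", "hdfs://hacluster/user");
conf.set("fs.defaultFS", "viewfs://ClusterX/");
{code}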



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13454) S3Guard: Provide custom FileSystem Statistics.

2017-06-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13454:
--

Assignee: (was: Mingliang Liu)

> S3Guard: Provide custom FileSystem Statistics.
> --
>
> Key: HADOOP-13454
> URL: https://issues.apache.org/jira/browse/HADOOP-13454
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Chris Nauroth
>
> Provide custom {{FileSystem}} {{Statistics}} with information about the 
> internal operational details of S3Guard.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13761) S3Guard: implement retries

2017-06-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-13761:
--

Assignee: (was: Mingliang Liu)

> S3Guard: implement retries 
> ---
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Aaron Fabbri
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually, than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.
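
A minimal sketch of the bounded-retry behavior described above; all names are 
hypothetical and this is not the S3Guard code:

{code:title=Illustrative bounded retry}
import java.io.IOException;
import java.util.concurrent.Callable;

public final class BoundedRetry {
  /** Retry op until it succeeds or maxDurationMs elapses. */
  public static <T> T retry(Callable<T> op, long maxDurationMs, long intervalMs)
      throws Exception {
    final long deadline = System.currentTimeMillis() + maxDurationMs;
    while (true) {
      try {
        return op.call();
      } catch (IOException e) {
        if (System.currentTimeMillis() + intervalMs > deadline) {
          throw e;  // max retry duration reached: surface the error
        }
        Thread.sleep(intervalMs);  // fixed interval; real policies may back off
      }
    }
  }
}
{code}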



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14490) Upgrade azure-storage sdk version >5.2.0

2017-06-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045246#comment-16045246
 ] 

Mingliang Liu commented on HADOOP-14490:


From [HADOOP-14516]:
{quote}
Azure Storage Clients changes between 4.2 and 5.2:

- Fixed Exists() calls on Shares and Directories to now populate metadata. This 
was already being done for Files.
- Changed blob constants to support up to 256 MB on put blob for block blobs. 
The default value for put blob threshold has also been updated to half of the 
maximum, or 128 MB currently.
- Fixed a bug that prevented setting content MD5 to true when creating a new 
file.
- Fixed a bug where access conditions, options, and operation context were not 
being passed when calling openWriteExisting() on a page blob or a file.
- Fixed a bug where an exception was being thrown on a range get of a blob or 
file when the options disableContentMD5Validation is set to false and 
useTransactionalContentMD5 is set to true and there is no overall MD5.
- Fixed a bug where retries were happening immediately if a socket exception 
was thrown.
- In CloudFileShareProperties, setShareQuota() no longer asserts in bounds. 
This check has been moved to create() and uploadProperties() in CloudFileShare.
- Prefix support for listing files and directories.
- Added support for setting public access when creating a blob container
- The public access setting on a blob container is now a container property 
returned from downloadProperties.
- Add Message now modifies the PopReceipt, Id, NextVisibleTime, InsertionTime, 
and ExpirationTime properties of its CloudQueueMessage parameter.
- Populate content MD5 for range gets on Blobs and Files.
- Added support in Page Blob for incremental copy.
- Added large BlockBlob upload support. Blocks can now support sizes up to 100 
MB.
- Added a new, memory-optimized upload strategy for the upload* APIs. This 
algorithm only applies for blocks greater than 4MB and when storeBlobContentMD5 
and Client-Side Encryption are disabled.
- getQualifiedUri() has been deprecated for Blobs. Please use 
getSnapshotQualifiedUri() instead. This new function will return the blob 
including the snapshot (if present) and no SAS token.
- getQualifiedStorageUri() has been deprecated for Blobs. Please use 
getSnapshotQualifiedStorageUri() instead. This new function will return the 
blob including the snapshot (if present) and no SAS token.
- Fixed a bug where copying from a blob that included a SAS token and a 
snapshot omitted the SAS token.
- Fixed a bug in client-side encryption for tables that was preventing the Java 
client from decrypting entities encrypted with the .NET client, and vice versa.
- Added support for server-side encryption.
- Added support for getBlobReferenceFromServer methods on CloudBlobContainer to 
support retrieving a blob without knowing its type.
- Fixed a bug in the retry policies where 300 status codes were being retried 
when they shouldn't be.
{quote}

> Upgrade azure-storage sdk version >5.2.0
> 
>
> Key: HADOOP-14490
> URL: https://issues.apache.org/jira/browse/HADOOP-14490
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Reporter: Mingliang Liu
>Assignee: Rajesh Balamohan
>
> As required by [HADOOP-14478], we're expecting the {{BlobInputStream}} to 
> support advanced {{readFully()}} by taking mark hints. This can only be 
> done by means of an SDK version bump.
> cc: [~rajesh.balamohan].



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-3) Output directories are not cleaned up before the reduces run

2017-06-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045248#comment-16045248
 ] 

Hudson commented on HADOOP-3:
-

FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3166 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3166/])
HBASE-18033 Fix license check for hadoop-3.x (busbey: rev 
a6216db16f7fa3342f9ab16a52b46270aea5b4ae)
* (edit) hbase-resource-bundle/src/main/resources/META-INF/LICENSE.vm
* (edit) hbase-resource-bundle/src/main/resources/supplemental-models.xml


> Output directories are not cleaned up before the reduces run
> 
>
> Key: HADOOP-3
> URL: https://issues.apache.org/jira/browse/HADOOP-3
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Minor
> Fix For: 0.1.0
>
> Attachments: clean-out-dir.patch, noclobber.patch
>
>
> The output directory for the reduces is not cleaned up, and therefore you can 
> see leftovers from previous runs if they had more reduces. For example, if you 
> run the application once with reduces=10 and then rerun with reduces=8, your 
> output directory will have frag0 to frag9, with the first 8 fragments from the 
> second run and the last 2 fragments from the first run.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14503) Make RollingAverages a mutable metric

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045244#comment-16045244
 ] 

Hadoop QA commented on HADOOP-14503:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
25s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
9s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  8s{color} | {color:orange} root: The patch generated 4 new + 35 unchanged - 
1 fixed = 39 total (was 36) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m  
0s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  1m  0s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 94m  0s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14503 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872362/HADOOP-14503.004.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ae47c0f0dce6 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5578af8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12510/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12510/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12510/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12510/testReport/ |
| modules | C: hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12510/console |
| Powered by | Apache 

[jira] [Closed] (HADOOP-14516) Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft Azure Storage Clients

2017-06-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu closed HADOOP-14516.
--

> Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft 
> Azure Storage Clients
> ---
>
> Key: HADOOP-14516
> URL: https://issues.apache.org/jira/browse/HADOOP-14516
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>
> Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft 
> Azure Storage Clients. We are currently using version 4.2.0 of the SDK.
> Azure Storage Clients changes between 4.2 and 5.2:
>  * Fixed Exists() calls on Shares and Directories to now populate metadata. 
> This was already being done for Files.
>  * Changed blob constants to support up to 256 MB on put blob for block 
> blobs. The default value for put blob threshold has also been updated to half 
> of the maximum, or 128 MB currently.
>  * Fixed a bug that prevented setting content MD5 to true when creating a new 
> file.
>  * Fixed a bug where access conditions, options, and operation context were 
> not being passed when calling openWriteExisting() on a page blob or a file.
>  * Fixed a bug where an exception was being thrown on a range get of a blob 
> or file when the options disableContentMD5Validation is set to false and 
> useTransactionalContentMD5 is set to true and there is no overall MD5.
>  * Fixed a bug where retries were happening immediately if a socket exception 
> was thrown.
>  * In CloudFileShareProperties, setShareQuota() no longer asserts in bounds. 
> This check has been moved to create() and uploadProperties() in 
> CloudFileShare.
>  * Prefix support for listing files and directories.
>  * Added support for setting public access when creating a blob container
>  * The public access setting on a blob container is now a container property 
> returned from downloadProperties.
>  * Add Message now modifies the PopReceipt, Id, NextVisibleTime, 
> InsertionTime, and ExpirationTime properties of its CloudQueueMessage 
> parameter.
>  * Populate content MD5 for range gets on Blobs and Files.
>  * Added support in Page Blob for incremental copy.
>  * Added large BlockBlob upload support. Blocks can now support sizes up to 
> 100 MB.
>  * Added a new, memory-optimized upload strategy for the upload* APIs. This 
> algorithm only applies for blocks greater than 4MB and when 
> storeBlobContentMD5 and Client-Side Encryption are disabled.
>  * getQualifiedUri() has been deprecated for Blobs. Please use 
> getSnapshotQualifiedUri() instead. This new function will return the blob 
> including the snapshot (if present) and no SAS token.
>  * getQualifiedStorageUri() has been deprecated for Blobs. Please use 
> getSnapshotQualifiedStorageUri() instead. This new function will return the 
> blob including the snapshot (if present) and no SAS token.
>  * Fixed a bug where copying from a blob that included a SAS token and a 
> snapshot omitted the SAS token.
>  * Fixed a bug in client-side encryption for tables that was preventing the 
> Java client from decrypting entities encrypted with the .NET client, and vice 
> versa.
>  * Added support for server-side encryption.
>  * Added support for getBlobReferenceFromServer methods on CloudBlobContainer 
> to support retrieving a blob without knowing its type.
>  * Fixed a bug in the retry policies where 300 status codes were being 
> retried when they shouldn't be.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14516) Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft Azure Storage Clients

2017-06-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu resolved HADOOP-14516.

Resolution: Duplicate

Closing as a duplicate. Please see [HADOOP-14490] and comment there. Thanks.

> Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft 
> Azure Storage Clients
> ---
>
> Key: HADOOP-14516
> URL: https://issues.apache.org/jira/browse/HADOOP-14516
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>
> Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft 
> Azure Storage Clients. We are currently using version 4.2.0 of the SDK.
> Azure Storage Clients changes between 4.2 and 5.2:
>  * Fixed Exists() calls on Shares and Directories to now populate metadata. 
> This was already being done for Files.
>  * Changed blob constants to support up to 256 MB on put blob for block 
> blobs. The default value for put blob threshold has also been updated to half 
> of the maximum, or 128 MB currently.
>  * Fixed a bug that prevented setting content MD5 to true when creating a new 
> file.
>  * Fixed a bug where access conditions, options, and operation context were 
> not being passed when calling openWriteExisting() on a page blob or a file.
>  * Fixed a bug where an exception was being thrown on a range get of a blob 
> or file when the options disableContentMD5Validation is set to false and 
> useTransactionalContentMD5 is set to true and there is no overall MD5.
>  * Fixed a bug where retries were happening immediately if a socket exception 
> was thrown.
>  * In CloudFileShareProperties, setShareQuota() no longer asserts in bounds. 
> This check has been moved to create() and uploadProperties() in 
> CloudFileShare.
>  * Prefix support for listing files and directories.
>  * Added support for setting public access when creating a blob container
>  * The public access setting on a blob container is now a container property 
> returned from downloadProperties.
>  * Add Message now modifies the PopReceipt, Id, NextVisibleTime, 
> InsertionTime, and ExpirationTime properties of its CloudQueueMessage 
> parameter.
>  * Populate content MD5 for range gets on Blobs and Files.
>  * Added support in Page Blob for incremental copy.
>  * Added large BlockBlob upload support. Blocks can now support sizes up to 
> 100 MB.
>  * Added a new, memory-optimized upload strategy for the upload* APIs. This 
> algorithm only applies for blocks greater than 4MB and when 
> storeBlobContentMD5 and Client-Side Encryption are disabled.
>  * getQualifiedUri() has been deprecated for Blobs. Please use 
> getSnapshotQualifiedUri() instead. This new function will return the blob 
> including the snapshot (if present) and no SAS token.
>  * getQualifiedStorageUri() has been deprecated for Blobs. Please use 
> getSnapshotQualifiedStorageUri() instead. This new function will return the 
> blob including the snapshot (if present) and no SAS token.
>  * Fixed a bug where copying from a blob that included a SAS token and a 
> snapshot omitted the SAS token.
>  * Fixed a bug in client-side encryption for tables that was preventing the 
> Java client from decrypting entities encrypted with the .NET client, and vice 
> versa.
>  * Added support for server-side encryption.
>  * Added support for getBlobReferenceFromServer methods on CloudBlobContainer 
> to support retrieving a blob without knowing its type.
>  * Fixed a bug in the retry policies where 300 status codes were being 
> retried when they shouldn't be.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14516) Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft Azure Storage Clients

2017-06-09 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14516:
-
Status: Open  (was: Patch Available)

> Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft 
> Azure Storage Clients
> ---
>
> Key: HADOOP-14516
> URL: https://issues.apache.org/jira/browse/HADOOP-14516
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>
> Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft 
> Azure Storage Clients. We are currently using version 4.2.0 of the SDK.
> Azure Storage Clients changes between 4.2 and 5.2:
>  * Fixed Exists() calls on Shares and Directories to now populate metadata. 
> This was already being done for Files.
>  * Changed blob constants to support up to 256 MB on put blob for block 
> blobs. The default value for put blob threshold has also been updated to half 
> of the maximum, or 128 MB currently.
>  * Fixed a bug that prevented setting content MD5 to true when creating a new 
> file.
>  * Fixed a bug where access conditions, options, and operation context were 
> not being passed when calling openWriteExisting() on a page blob or a file.
>  * Fixed a bug where an exception was being thrown on a range get of a blob 
> or file when the options disableContentMD5Validation is set to false and 
> useTransactionalContentMD5 is set to true and there is no overall MD5.
>  * Fixed a bug where retries were happening immediately if a socket exception 
> was thrown.
>  * In CloudFileShareProperties, setShareQuota() no longer asserts in bounds. 
> This check has been moved to create() and uploadProperties() in 
> CloudFileShare.
>  * Prefix support for listing files and directories.
>  * Added support for setting public access when creating a blob container
>  * The public access setting on a blob container is now a container property 
> returned from downloadProperties.
>  * Add Message now modifies the PopReceipt, Id, NextVisibleTime, 
> InsertionTime, and ExpirationTime properties of its CloudQueueMessage 
> parameter.
>  * Populate content MD5 for range gets on Blobs and Files.
>  * Added support in Page Blob for incremental copy.
>  * Added large BlockBlob upload support. Blocks can now support sizes up to 
> 100 MB.
>  * Added a new, memory-optimized upload strategy for the upload* APIs. This 
> algorithm only applies for blocks greater than 4MB and when 
> storeBlobContentMD5 and Client-Side Encryption are disabled.
>  * getQualifiedUri() has been deprecated for Blobs. Please use 
> getSnapshotQualifiedUri() instead. This new function will return the blob 
> including the snapshot (if present) and no SAS token.
>  * getQualifiedStorageUri() has been deprecated for Blobs. Please use 
> getSnapshotQualifiedStorageUri() instead. This new function will return the 
> blob including the snapshot (if present) and no SAS token.
>  * Fixed a bug where copying from a blob that included a SAS token and a 
> snapshot omitted the SAS token.
>  * Fixed a bug in client-side encryption for tables that was preventing the 
> Java client from decrypting entities encrypted with the .NET client, and vice 
> versa.
>  * Added support for server-side encryption.
>  * Added support for getBlobReferenceFromServer methods on CloudBlobContainer 
> to support retrieving a blob without knowing its type.
>  * Fixed a bug in the retry policies where 300 status codes were being 
> retried when they shouldn't be.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14516) Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft Azure Storage Clients

2017-06-09 Thread Georgi Chalakov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Georgi Chalakov updated HADOOP-14516:
-
Status: Patch Available  (was: Open)

> Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft 
> Azure Storage Clients
> ---
>
> Key: HADOOP-14516
> URL: https://issues.apache.org/jira/browse/HADOOP-14516
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 3.0.0-alpha3
>Reporter: Georgi Chalakov
>
> Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft 
> Azure Storage Clients. We are currently using version 4.2.0 of the SDK.
> Azure Storage Clients changes between 4.2 and 5.2:
>  * Fixed Exists() calls on Shares and Directories to now populate metadata. 
> This was already being done for Files.
>  * Changed blob constants to support up to 256 MB on put blob for block 
> blobs. The default value for put blob threshold has also been updated to half 
> of the maximum, or 128 MB currently.
>  * Fixed a bug that prevented setting content MD5 to true when creating a new 
> file.
>  * Fixed a bug where access conditions, options, and operation context were 
> not being passed when calling openWriteExisting() on a page blob or a file.
>  * Fixed a bug where an exception was being thrown on a range get of a blob 
> or file when the options disableContentMD5Validation is set to false and 
> useTransactionalContentMD5 is set to true and there is no overall MD5.
>  * Fixed a bug where retries were happening immediately if a socket exception 
> was thrown.
>  * In CloudFileShareProperties, setShareQuota() no longer asserts in bounds. 
> This check has been moved to create() and uploadProperties() in 
> CloudFileShare.
>  * Prefix support for listing files and directories.
>  * Added support for setting public access when creating a blob container
>  * The public access setting on a blob container is now a container property 
> returned from downloadProperties.
>  * Add Message now modifies the PopReceipt, Id, NextVisibleTime, 
> InsertionTime, and ExpirationTime properties of its CloudQueueMessage 
> parameter.
>  * Populate content MD5 for range gets on Blobs and Files.
>  * Added support in Page Blob for incremental copy.
>  * Added large BlockBlob upload support. Blocks can now support sizes up to 
> 100 MB.
>  * Added a new, memory-optimized upload strategy for the upload* APIs. This 
> algorithm only applies for blocks greater than 4MB and when 
> storeBlobContentMD5 and Client-Side Encryption are disabled.
>  * getQualifiedUri() has been deprecated for Blobs. Please use 
> getSnapshotQualifiedUri() instead. This new function will return the blob 
> including the snapshot (if present) and no SAS token.
>  * getQualifiedStorageUri() has been deprecated for Blobs. Please use 
> getSnapshotQualifiedStorageUri() instead. This new function will return the 
> blob including the snapshot (if present) and no SAS token.
>  * Fixed a bug where copying from a blob that included a SAS token and a 
> snapshot omitted the SAS token.
>  * Fixed a bug in client-side encryption for tables that was preventing the 
> Java client from decrypting entities encrypted with the .NET client, and vice 
> versa.
>  * Added support for server-side encryption.
>  * Added support for getBlobReferenceFromServer methods on CloudBlobContainer 
> to support retrieving a blob without knowing its type.
>  * Fixed a bug in the retry policies where 300 status codes were being 
> retried when they shouldn't be.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14516) Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft Azure Storage Clients

2017-06-09 Thread Georgi Chalakov (JIRA)
Georgi Chalakov created HADOOP-14516:


 Summary: Update WASB driver to use the latest version (5.2.0) of 
SDK for Microsoft Azure Storage Clients
 Key: HADOOP-14516
 URL: https://issues.apache.org/jira/browse/HADOOP-14516
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/azure
Affects Versions: 3.0.0-alpha3
Reporter: Georgi Chalakov


Update WASB driver to use the latest version (5.2.0) of SDK for Microsoft Azure 
Storage Clients. We are currently using version 4.2.0 of the SDK.

Azure Storage Clients changes between 4.2 and 5.2:

 * Fixed Exists() calls on Shares and Directories to now populate metadata. 
This was already being done for Files.
 * Changed blob constants to support up to 256 MB on put blob for block blobs. 
The default value for put blob threshold has also been updated to half of the 
maximum, or 128 MB currently.
 * Fixed a bug that prevented setting content MD5 to true when creating a new 
file.
 * Fixed a bug where access conditions, options, and operation context were not 
being passed when calling openWriteExisting() on a page blob or a file.
 * Fixed a bug where an exception was being thrown on a range get of a blob or 
file when the options disableContentMD5Validation is set to false and 
useTransactionalContentMD5 is set to true and there is no overall MD5.
 * Fixed a bug where retries were happening immediately if a socket exception 
was thrown.
 * In CloudFileShareProperties, setShareQuota() no longer asserts in bounds. 
This check has been moved to create() and uploadProperties() in CloudFileShare.
 * Prefix support for listing files and directories.
 * Added support for setting public access when creating a blob container
 * The public access setting on a blob container is now a container property 
returned from downloadProperties.
 * Add Message now modifies the PopReceipt, Id, NextVisibleTime, InsertionTime, 
and ExpirationTime properties of its CloudQueueMessage parameter.
 * Populate content MD5 for range gets on Blobs and Files.
 * Added support in Page Blob for incremental copy.
 * Added large BlockBlob upload support. Blocks can now support sizes up to 100 
MB.
 * Added a new, memory-optimized upload strategy for the upload* APIs. This 
algorithm only applies for blocks greater than 4MB and when storeBlobContentMD5 
and Client-Side Encryption are disabled.
 * getQualifiedUri() has been deprecated for Blobs. Please use 
getSnapshotQualifiedUri() instead. This new function will return the blob 
including the snapshot (if present) and no SAS token.
 * getQualifiedStorageUri() has been deprecated for Blobs. Please use 
getSnapshotQualifiedStorageUri() instead. This new function will return the 
blob including the snapshot (if present) and no SAS token.
 * Fixed a bug where copying from a blob that included a SAS token and a 
snapshot omitted the SAS token.
 * Fixed a bug in client-side encryption for tables that was preventing the 
Java client from decrypting entities encrypted with the .NET client, and vice 
versa.
 * Added support for server-side encryption.
 * Added support for getBlobReferenceFromServer methods on CloudBlobContainer 
to support retrieving a blob without knowing its type.
 * Fixed a bug in the retry policies where 300 status codes were being retried 
when they shouldn't be.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14465) LdapGroupsMapping - support user and group search base

2017-06-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045236#comment-16045236
 ] 

Hudson commented on HADOOP-14465:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11854 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11854/])
HADOOP-14465. LdapGroupsMapping - support user and group search base. (liuml07: 
rev a2121cb0d907be439d19cd1165a0371b37a5fe68)
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/GroupsMapping.md
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMapping.java


> LdapGroupsMapping - support user and group search base
> --
>
> Key: HADOOP-14465
> URL: https://issues.apache.org/jira/browse/HADOOP-14465
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: 
> 0001-HADOOP-14465-LdapGroupsMapping-support-user-and-grou.patch, 
> HADOOP-14465-v2.patch, HADOOP-14465-v4.patch, HADOOP-14465-v5.patch
>
>
> org.apache.hadoop.security.LdapGroupsMapping currently supports 
> hadoop.security.group.mapping.ldap.base as search base for both user and 
> group searches. However, this doesn't work when user and group search bases 
> are different, e.g. ou=Users,dc=xxx,dc=com and ou=Groups,dc=xxx,dc=com. 
> Expose separate configs for the user and group search bases, defaulting to 
> the existing hadoop.security.group.mapping.ldap.base config.
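
A sketch of the resulting configuration; the new key names are taken from this 
patch and should be confirmed against the committed core-default.xml:

{code:title=Illustrative configuration}
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
// Shared default, as before:
conf.set("hadoop.security.group.mapping.ldap.base", "dc=xxx,dc=com");
// Separate subtrees for users and groups (key names assumed from this patch):
conf.set("hadoop.security.group.mapping.ldap.userbase",
    "ou=Users,dc=xxx,dc=com");
conf.set("hadoop.security.group.mapping.ldap.groupbase",
    "ou=Groups,dc=xxx,dc=com");
{code}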



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14503) Make RollingAverages a mutable metric

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045235#comment-16045235
 ] 

Hadoop QA commented on HADOOP-14503:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
46s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
38s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
13s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
34s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 57s{color} | {color:orange} root: The patch generated 4 new + 35 unchanged - 
1 fixed = 39 total (was 36) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
28s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 21s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}187m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
|   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | 
hadoop.hdfs.server.datanode.metrics.TestDataNodeOutlierDetectionViaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14503 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872338/HADOOP-14503.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fd64c1368660 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 325163f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12505/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12505/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Commented] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045227#comment-16045227
 ] 

Hadoop QA commented on HADOOP-14395:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
21s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
14s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
43s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m  
4s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 2 new + 257 unchanged 
- 0 fixed = 259 total (was 257) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
20s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 39s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
27s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 64m 39s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}170m 12s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.TestFilterFileSystem |
|   | hadoop.fs.TestHarFileSystem |
|   | hadoop.fs.sftp.TestSFTPFileSystem |
|   | hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14395 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872343/HADOOP-14395.02-trunk.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 00f9b0bb7c69 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 325163f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12507/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 

[jira] [Updated] (HADOOP-14465) LdapGroupsMapping - support user and group search base

2017-06-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14465:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha4
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}} and {{branch-2}} branches. Thanks [~shwethags] for your 
contribution. Thanks [~jnp] and [~ste...@apache.org] for review and discussion.

> LdapGroupsMapping - support user and group search base
> --
>
> Key: HADOOP-14465
> URL: https://issues.apache.org/jira/browse/HADOOP-14465
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Fix For: 2.9.0, 3.0.0-alpha4
>
> Attachments: 
> 0001-HADOOP-14465-LdapGroupsMapping-support-user-and-grou.patch, 
> HADOOP-14465-v2.patch, HADOOP-14465-v4.patch, HADOOP-14465-v5.patch
>
>
> org.apache.hadoop.security.LdapGroupsMapping currently supports 
> hadoop.security.group.mapping.ldap.base as search base for both user and 
> group searches. However, this doesn't work when user and group search bases 
> are different, e.g. ou=Users,dc=xxx,dc=com and ou=Groups,dc=xxx,dc=com. 
> Expose separate configs for the user and group search bases, defaulting to 
> the existing hadoop.security.group.mapping.ldap.base config.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-09 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045212#comment-16045212
 ] 

Aaron Fabbri commented on HADOOP-14457:
---

Thank you for updating your patch [~mackrorysd].  The v9 code looks good.

I would +1 this except for one concern, which I would have mentioned earlier 
had it occurred to me: this will likely have a negative performance impact for 
S3Guard w/ Dynamo.  Correct me if I am wrong, but the main purpose of this code 
is to fix the fact that S3A's "broken but fast" create() implementation breaks 
authoritative (fully-cached) directory listings for the MetadataStore (since 
the S3A client is not reporting directory creations which impact said 
authoritative listings of ancestors).

In terms of performance with the DynamoDBMetadataStore, however, this code is 
bad for two reasons:
1. DynamoDBMetadataStore doesn't implement authoritative listings.
2. DynamoDBMetadataStore already populates ancestors due to internal 
implementation details.

I do think authoritative listing is valuable though.  Not only for future 
performance gains we can get by short-circuiting S3 list, but for the extra 
testing and logic checks we get from having the LocalMetadataStore and 
associated contract tests around it.

I wonder if this is the time to introduce a capabilities query interface on 
MetadataStore.  Then we could rename the function to 
{{S3Guard#addAncestorsIfAuthoritative(..)}} and have it look like this:

{code}
  /** Add ancestors of qualifiedPath to MetadataStore iff it supports
   *  authoritative listings. */
  public static void addAncestorsIfAuthoritative(MetadataStore metadataStore,
      Path qualifiedPath, String username) throws IOException {
    if (!metadataStore.getOption(SUPPORTS_AUTHORITATIVE_DIRS)) {
      return;
    }
    ...
{code}

I also like the capabilities query idea because it allows us to write stricter 
MetadataStore contract tests.

Thanks for your patience on the back and forth on this.  I'd be happy to +1 
this and follow up quickly with a patch that adds the capability stuff + the 
new if() condition above, if you and [~ste...@apache.org] or [~liuml07] agree 
with my comments here.





> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch, 
> HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch, 
> HADOOP-14457-HADOOP-13345.004.patch, HADOOP-14457-HADOOP-13345.005.patch, 
> HADOOP-14457-HADOOP-13345.006.patch, HADOOP-14457-HADOOP-13345.007.patch, 
> HADOOP-14457-HADOOP-13345.008.patch, HADOOP-14457-HADOOP-13345.009.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import 

[jira] [Commented] (HADOOP-14465) LdapGroupsMapping - support user and group search base

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045178#comment-16045178
 ] 

Hadoop QA commented on HADOOP-14465:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
23s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
56s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 58m 39s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14465 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872354/HADOOP-14465-v5.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 3eb0bc8fd42b 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5578af8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12509/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12509/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12509/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LdapGroupsMapping - support user and group search base
> --
>
> Key: HADOOP-14465
> URL: https://issues.apache.org/jira/browse/HADOOP-14465
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Attachments: 
> 0001-HADOOP-14465-LdapGroupsMapping-support-user-and-grou.patch, 
> 

[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-06-09 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045169#comment-16045169
 ] 

Junping Du commented on HADOOP-14284:
-

bq. If a user wants Guava out of the classpath for the client side, why not 
just use the shaded client?
I think we are working towards this direction. My understanding is that the 
YARN/MR client hasn't finished the shading work yet, has it?

bq. In either of those approaches folks are still out of luck on the server 
side. I thought that was the point of this jira?
There shouldn't be too much concern on the server side, as each application 
has a separate classloader, so different versions of guava get loaded in 
different places. IMO, this JIRA's approach is hard to accept, not only 
because of its own complexity but also because it sets a painful precedent for 
other dependencies being upgraded with incompatible APIs.

bq. If we're only concerned about client side impact, then we should just 
upgrade guava and close this jira. 
We cannot simply upgrade guava before the shaded yarn/mr client work is done; 
otherwise, downstream projects will get stuck. Am I missing anything here?

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14501) Switch from aalto-xml to woodstox to handle odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045127#comment-16045127
 ] 

Jonathan Eagles commented on HADOOP-14501:
--

[~ste...@apache.org], I also tried the JDK stax implementation and it was 
much, much slower than either woodstox or aalto-xml. 
https://dzone.com/articles/xml-unmarshalling-benchmark is outdated but the 
numbers are representative of what I was seeing.
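
For context on how the implementation gets picked: stax resolves the factory 
via the service loader, and you can pin one explicitly for a benchmark run. A 
small self-contained sketch (the woodstox factory class name is assumed here):

{code}
import javax.xml.stream.XMLInputFactory;

public class StaxPick {
  public static void main(String[] args) throws Exception {
    // Pin the stax implementation explicitly instead of letting the
    // service loader choose (woodstox factory name assumed).
    System.setProperty("javax.xml.stream.XMLInputFactory",
        "com.ctc.wstx.stax.WstxInputFactory");
    XMLInputFactory factory = XMLInputFactory.newFactory();
    // Prints the concrete factory class actually in use.
    System.out.println(factory.getClass().getName());
  }
}
{code}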

> Switch from aalto-xml to woodstox to handle odd XML features
> 
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, HADOOP-14501.5.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml stax 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14465) LdapGroupsMapping - support user and group search base

2017-06-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14465:
---
Component/s: security

> LdapGroupsMapping - support user and group search base
> --
>
> Key: HADOOP-14465
> URL: https://issues.apache.org/jira/browse/HADOOP-14465
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common, security
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Attachments: 
> 0001-HADOOP-14465-LdapGroupsMapping-support-user-and-grou.patch, 
> HADOOP-14465-v2.patch, HADOOP-14465-v4.patch, HADOOP-14465-v5.patch
>
>
> org.apache.hadoop.security.LdapGroupsMapping currently supports 
> hadoop.security.group.mapping.ldap.base as search base for both user and 
> group searches. However, this doesn't work when user and group search bases 
> are different like ou=Users,dc=xxx,dc=com and ou=Groups,dc=xxx,dc=com. 
> Expose different configs for user and group search base which default to 
> the existing hadoop.security.group.mapping.ldap.base config.
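
For illustration, the split might look like this in core-site.xml (the 
{{userbase}}/{{groupbase}} property names are assumptions for this sketch, not 
confirmed from the patch):

{code}
<!-- Sketch only: assumed property names for separate search bases. -->
<property>
  <name>hadoop.security.group.mapping.ldap.userbase</name>
  <value>ou=Users,dc=xxx,dc=com</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.groupbase</name>
  <value>ou=Groups,dc=xxx,dc=com</value>
</property>
<!-- Both would fall back to hadoop.security.group.mapping.ldap.base
     when unset. -->
{code}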



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric

2017-06-09 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14503:

Attachment: HADOOP-14503.004.patch

Thanks for the review, [~arpitagarwal]. Updated patch v04 to address the 
comments and fix the failing unit tests.

> Make RollingAverages a mutable metric
> -
>
> Key: HADOOP-14503
> URL: https://issues.apache.org/jira/browse/HADOOP-14503
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, 
> HADOOP-14503.003.patch, HADOOP-14503.004.patch
>
>
> RollingAverages metric extends on MutableRatesWithAggregation metric and 
> maintains a group of rolling average metrics. This class should be allowed to 
> register as a metric with the MetricSystem.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14501) Switch from aalto-xml to woodstox to handle odd XML features

2017-06-09 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045114#comment-16045114
 ] 

Andrew Wang commented on HADOOP-14501:
--

This LGTM, +1. Thanks for working on this, [~jeagles]! Could you also respond 
to Steve's question about why we aren't using the JDK stax implementation? I'm 
guessing the answer is performance.

> Switch from aalto-xml to woodstox to handle odd XML features
> 
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, HADOOP-14501.5.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml stax 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14501) Switch from aalto-xml to woodstox to handle odd XML features

2017-06-09 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14501:
-
Summary: Switch from aalto-xml to woodstox to handle odd XML features  
(was: aalto-xml cannot handle some odd XML features)

> Switch from aalto-xml to woodstox to handle odd XML features
> 
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, HADOOP-14501.5.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml stax 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14465) LdapGroupsMapping - support user and group search base

2017-06-09 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045099#comment-16045099
 ] 

Jitendra Nath Pandey commented on HADOOP-14465:
---

+1 for the latest patch.

> LdapGroupsMapping - support user and group search base
> --
>
> Key: HADOOP-14465
> URL: https://issues.apache.org/jira/browse/HADOOP-14465
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Attachments: 
> 0001-HADOOP-14465-LdapGroupsMapping-support-user-and-grou.patch, 
> HADOOP-14465-v2.patch, HADOOP-14465-v4.patch, HADOOP-14465-v5.patch
>
>
> org.apache.hadoop.security.LdapGroupsMapping currently supports 
> hadoop.security.group.mapping.ldap.base as search base for both user and 
> group searches. However, this doesn't work when user and group search bases 
> are different like ou=Users,dc=xxx,dc=com and ou=Groups,dc=xxx,dc=com. 
> Expose different configs for user and group search base which default to 
> the existing hadoop.security.group.mapping.ldap.base config.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14465) LdapGroupsMapping - support user and group search base

2017-06-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045095#comment-16045095
 ] 

Mingliang Liu commented on HADOOP-14465:


I uploaded the v5 patch addressing all my own comments above. I'm still +1 on 
the patch, but it's good to have a second opinion before commit. [~jnp] do you 
have time to review? Thanks,

> LdapGroupsMapping - support user and group search base
> --
>
> Key: HADOOP-14465
> URL: https://issues.apache.org/jira/browse/HADOOP-14465
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Attachments: 
> 0001-HADOOP-14465-LdapGroupsMapping-support-user-and-grou.patch, 
> HADOOP-14465-v2.patch, HADOOP-14465-v4.patch, HADOOP-14465-v5.patch
>
>
> org.apache.hadoop.security.LdapGroupsMapping currently supports 
> hadoop.security.group.mapping.ldap.base as search base for both user and 
> group searches. However, this doesn't work when user and group search bases 
> are different like ou=Users,dc=xxx,dc=com and ou=Groups,dc=xxx,dc=com. 
> Expose different configs for user and group search base which default to 
> the existing hadoop.security.group.mapping.ldap.base config.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045090#comment-16045090
 ] 

Hadoop QA commented on HADOOP-14501:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
38s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
19s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
15s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  9m 
54s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 54s{color} | {color:orange} root: The patch generated 1 new + 263 unchanged 
- 1 fixed = 264 total (was 264) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
25s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
29s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
42s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 81m 22s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14501 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872337/HADOOP-14501.5.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux e9424aa9317a 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 325163f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12504/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 

[jira] [Updated] (HADOOP-14465) LdapGroupsMapping - support user and group search base

2017-06-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14465:
---
Attachment: HADOOP-14465-v5.patch

> LdapGroupsMapping - support user and group search base
> --
>
> Key: HADOOP-14465
> URL: https://issues.apache.org/jira/browse/HADOOP-14465
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Attachments: 
> 0001-HADOOP-14465-LdapGroupsMapping-support-user-and-grou.patch, 
> HADOOP-14465-v2.patch, HADOOP-14465-v4.patch, HADOOP-14465-v5.patch
>
>
> org.apache.hadoop.security.LdapGroupsMapping currently supports 
> hadoop.security.group.mapping.ldap.base as search base for both user and 
> group searches. However, this doesn't work when user and group search bases 
> are different like ou=Users,dc=xxx,dc=com and ou=Groups,dc=xxx,dc=com. 
> Expose different configs for user and group search base which default to 
> the existing hadoop.security.group.mapping.ldap.base config.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14515) Specifically configure zookeeper-related log levels in KMS log4j

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045076#comment-16045076
 ] 

Hadoop QA commented on HADOOP-14515:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14515 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872350/HADOOP-14515.01.patch 
|
| Optional Tests |  asflicense  unit  |
| uname | Linux 79f03f92c43a 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5578af8 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12508/testReport/ |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12508/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Specifically configure zookeeper-related log levels in KMS log4j
> 
>
> Key: HADOOP-14515
> URL: https://issues.apache.org/jira/browse/HADOOP-14515
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14515.01.patch
>
>
> When investigating a case, we tried to turn on KMS DEBUG by setting the root 
> logger in the log4j to DEBUG. This ends up making 
> {{org.apache.zookeeper.ClientCnxn}} generate 199.2M out of a 200M log 
> file, which made the kms.log rotate very quickly.
> We should keep zookeeper's log unaffected by the root logger, and only turn 
> it on when interested.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-3) Output directories are not cleaned up before the reduces run

2017-06-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045075#comment-16045075
 ] 

Hudson commented on HADOOP-3:
-

FAILURE: Integrated in Jenkins build HBase-2.0 #17 (See 
[https://builds.apache.org/job/HBase-2.0/17/])
HBASE-18033 Fix license check for hadoop-3.x (busbey: rev 
eb5c5a9bc8aa3e39b7da87d0a4130da480ba3a95)
* (edit) hbase-resource-bundle/src/main/resources/META-INF/LICENSE.vm
* (edit) hbase-resource-bundle/src/main/resources/supplemental-models.xml


> Output directories are not cleaned up before the reduces run
> 
>
> Key: HADOOP-3
> URL: https://issues.apache.org/jira/browse/HADOOP-3
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 0.1.0
>Reporter: Owen O'Malley
>Assignee: Owen O'Malley
>Priority: Minor
> Fix For: 0.1.0
>
> Attachments: clean-out-dir.patch, noclobber.patch
>
>
> The output directory for the reduces is not cleaned up, and therefore you 
> can see leftovers from previous runs if they had more reduces. For example, 
> if you run the application once with reduces=10 and then rerun with 
> reduces=8, your output directory will have frag0 to frag9 with the 
> first 8 fragments from the second run and the last 2 fragments from the first 
> run.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14394) Provide Builder pattern for DistributedFileSystem.create

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045067#comment-16045067
 ] 

Hadoop QA commented on HADOOP-14394:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
11s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
41s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
31s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 24s{color} | {color:orange} root: The patch generated 1 new + 257 unchanged 
- 0 fixed = 258 total (was 257) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
21s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 71m 26s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m  0s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure150 |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14394 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872312/HADOOP-14394.05.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9fed6e1660d 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 99634d1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12502/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| checkstyle | 

[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-06-09 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045066#comment-16045066
 ] 

Sean Busbey commented on HADOOP-14284:
--

I don't understand why we'd shade just the client modules for YARN. If a user 
wants Guava out of the classpath for the client side, why not just use the 
shaded client? In either of those approaches folks are still out of luck on the 
server side. I thought that was the point of this jira?

{quote}
Unfortunately for HDFS, there are a bunch of downstreams incorrectly including 
our server artifacts, so for HDFS, I think we need to shade those too.
{quote}

This seems like the wrong approach IMHO. There's no incentive for folks to ever 
start respecting the public/not public interface we put up. If we're only 
concerned about client side impact, then we should just upgrade guava and close 
this jira. That way, if folks run into a problem with the guava upgrade, we 
just tell them to use the correct client jars instead of server jars. We can 
even put it in the release note for the guava upgrade.
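
For anyone following along: "shading" here means relocating the guava packages 
inside our artifacts at build time, roughly like the maven-shade-plugin 
fragment below (the shaded package prefix is just an example, not the actual 
patch):

{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <relocation>
            <!-- Rewrite com.google.common.* references in our bytecode. -->
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.shaded.com.google.common</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
{code}

Downstream code then never sees our copy of guava on its classpath.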

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14515) Specifically configure zookeeper-related log levels in KMS log4j

2017-06-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14515:
---
Summary: Specifically configure zookeeper-related log levels in KMS log4j  
(was: Specifically configure org.apache.zookeeper.ClientCnxn in KMS log4j)

> Specifically configure zookeeper-related log levels in KMS log4j
> 
>
> Key: HADOOP-14515
> URL: https://issues.apache.org/jira/browse/HADOOP-14515
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14515.01.patch
>
>
> When investigating a case, we tried to turn on KMS DEBUG by setting the root 
> logger in the log4j to DEBUG. This ends up making 
> {{org.apache.zookeeper.ClientCnxn}} generate 199.2M out of a 200M log 
> file, which made the kms.log rotate very quickly.
> We should keep zookeeper's log unaffected by the root logger, and only turn 
> it on when interested.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14515) Specifically configure org.apache.zookeeper.ClientCnxn in KMS log4j

2017-06-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14515:
---
Status: Patch Available  (was: Open)

> Specifically configure org.apache.zookeeper.ClientCnxn in KMS log4j
> ---
>
> Key: HADOOP-14515
> URL: https://issues.apache.org/jira/browse/HADOOP-14515
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14515.01.patch
>
>
> When investigating a case, we tried to turn on KMS DEBUG by setting the root 
> logger in the log4j to DEBUG. This ends up making 
> {{org.apache.zookeeper.ClientCnxn}} generate 199.2M out of a 200M log 
> file, which made the kms.log rotate very quickly.
> We should keep zookeeper's log unaffected by the root logger, and only turn 
> it on when interested.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14515) Specifically configure org.apache.zookeeper.ClientCnxn in KMS log4j

2017-06-09 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14515:
---
Attachment: HADOOP-14515.01.patch

> Specifically configure org.apache.zookeeper.ClientCnxn in KMS log4j
> ---
>
> Key: HADOOP-14515
> URL: https://issues.apache.org/jira/browse/HADOOP-14515
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-14515.01.patch
>
>
> When investigating a case, we tried to turn on KMS DEBUG by setting the root 
> logger in the log4j to DEBUG. This ends up making 
> {{org.apache.zookeeper.ClientCnxn}} generate 199.2M out of a 200M log 
> file, which made the kms.log rotate very quickly.
> We should keep zookeeper's log unaffected by the root logger, and only turn 
> it on when interested.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14503) Make RollingAverages a mutable metric

2017-06-09 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16045038#comment-16045038
 ] 

Arpit Agarwal commented on HADOOP-14503:


Thanks for the updated patch [~hanishakoneru]. A couple of comments:
# init also needs to reset snapshot and averages here:
{code}
if (hasChanged) {
  if (scheduledTask != null) {
scheduledTask.cancel(true);
  }
  // Discard previously collected samples as windowSize and/or numWindows
  // has changed.
  innerMetrics = new MutableRatesWithAggregation();
{code}
# Unused method MutableRollingAverages#rename.

Also the failed unit tests from the v2 patch Jenkins run look related.
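
A minimal sketch of the reset I have in mind ({{currentSnapshot}} and 
{{averages}} are assumed field names for this sketch):

{code}
if (hasChanged) {
  if (scheduledTask != null) {
    scheduledTask.cancel(true);
  }
  // Discard previously collected samples as windowSize and/or numWindows
  // has changed.
  innerMetrics = new MutableRatesWithAggregation();
  // Suggested addition: also drop the stale snapshot and rolling averages
  // so old windows do not leak into the reconfigured metric
  // (field names assumed).
  currentSnapshot = null;
  averages.clear();
}
{code}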

> Make RollingAverages a mutable metric
> -
>
> Key: HADOOP-14503
> URL: https://issues.apache.org/jira/browse/HADOOP-14503
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, 
> HADOOP-14503.003.patch
>
>
> RollingAverages metric extends on MutableRatesWithAggregation metric and 
> maintains a group of rolling average metrics. This class should be allowed to 
> register as a metric with the MetricSystem.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14515) Specifically configure org.apache.zookeeper.ClientCnxn in KMS log4j

2017-06-09 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-14515:
--

 Summary: Specifically configure org.apache.zookeeper.ClientCnxn in 
KMS log4j
 Key: HADOOP-14515
 URL: https://issues.apache.org/jira/browse/HADOOP-14515
 Project: Hadoop Common
  Issue Type: Improvement
  Components: kms
Affects Versions: 2.6.0
Reporter: Xiao Chen
Assignee: Xiao Chen


When investigating a case, we tried to turn on KMS DEBUG by setting the root 
logger in the log4j to DEBUG. This ends up making 
{{org.apache.zookeeper.ClientCnxn}} generate 199.2M out of a 200M log file, 
which made the kms.log rotate very quickly.

We should keep zookeeper's log unaffected by the root logger, and only turn it 
on when interested.
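
A minimal sketch of the kind of log4j override meant here (the exact logger 
list is still to be decided in the patch):

{code}
# Keep zookeeper-related loggers pinned so a DEBUG root logger does not
# flood kms.log; raise these explicitly when zookeeper debugging is wanted.
log4j.logger.org.apache.zookeeper=INFO
log4j.logger.org.apache.curator=INFO
{code}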



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append

2017-06-09 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14395:
---
Attachment: HADOOP-14395.02-trunk.patch

> Provide Builder pattern for DistributedFileSystem.append
> 
>
> Key: HADOOP-14395
> URL: https://issues.apache.org/jira/browse/HADOOP-14395
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14395.00.patch, HADOOP-14395.00-trunk.patch, 
> HADOOP-14395.01.patch, HADOOP-14395.01-trunk.patch, HADOOP-14395.02.patch, 
> HADOOP-14395.02-trunk.patch
>
>
> Following HADOOP-14394, it should also provide a {{Builder}} API for 
> {{DistributedFileSystem#append}}.
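
For illustration, usage might mirror the HADOOP-14394 create builder like this 
(the {{appendFile}} entry point and option names are assumptions for this 
sketch):

{code}
// Sketch only: assumed builder entry point and option names.
byte[] data = "more bytes".getBytes(StandardCharsets.UTF_8);
FSDataOutputStream out = dfs.appendFile(path)
    .bufferSize(4096)
    .build();
try {
  out.write(data);
} finally {
  out.close();
}
{code}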



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append

2017-06-09 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14395:
---
Attachment: (was: HADOOP-14395.02-trunk.patch)

> Provide Builder pattern for DistributedFileSystem.append
> 
>
> Key: HADOOP-14395
> URL: https://issues.apache.org/jira/browse/HADOOP-14395
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14395.00.patch, HADOOP-14395.00-trunk.patch, 
> HADOOP-14395.01.patch, HADOOP-14395.01-trunk.patch, HADOOP-14395.02.patch
>
>
> Following HADOOP-14394, it should also provide a {{Builder}} API for 
> {{DistributedFileSystem#append}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14284) Shade Guava everywhere

2017-06-09 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044995#comment-16044995
 ] 

Karthik Kambatla commented on HADOOP-14284:
---

I agree with [~vinodkv] on shading only the YARN/MR client modules.

For YARN, that is yarn-common, yarn-client, and yarn-api modules. For MR, that 
should be mapreduce-client-* modules. We probably don't need to shade 
hadoop-mapreduce-client-hs and hadoop-mapreduce-client-hs-plugins jars though, 
as they are for the HistoryServer and have no @Stable APIs. 

In cases where devs extend YARN classes like the SchedulingPolicy in 
FairScheduler, or implement their own scheduler, the dev will be responsible 
for ensuring they either don't use guava or use a version consistent with what 
Hadoop uses. I expect these devs to be sophisticated enough to figure this 
out. That said, we should probably still call out cases like these in the 
compatibility guide. /cc [~templedf]

> Shade Guava everywhere
> --
>
> Key: HADOOP-14284
> URL: https://issues.apache.org/jira/browse/HADOOP-14284
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
> Attachments: HADOOP-14238.pre001.patch, HADOOP-14284.002.patch, 
> HADOOP-14284.004.patch, HADOOP-14284.007.patch, HADOOP-14284.010.patch, 
> HADOOP-14284.012.patch
>
>
> HADOOP-10101 upgraded the guava version for 3.x to 21.
> Guava is broadly used by Java projects that consume our artifacts. 
> Unfortunately, these projects also consume our private artifacts like 
> {{hadoop-hdfs}}. They are also unlikely to be on the new shaded client 
> introduced by HADOOP-11804, currently only available in 3.0.0-alpha2.
> We should shade Guava everywhere to proactively avoid breaking downstreams. 
> This isn't a requirement for all dependency upgrades, but it's necessary for 
> known-bad dependencies like Guava.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044994#comment-16044994
 ] 

Hadoop QA commented on HADOOP-14395:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
1s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m 21s{color} 
| {color:red} HADOOP-14395 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14395 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872342/HADOOP-14395.02-trunk.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12506/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Provide Builder pattern for DistributedFileSystem.append
> 
>
> Key: HADOOP-14395
> URL: https://issues.apache.org/jira/browse/HADOOP-14395
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14395.00.patch, HADOOP-14395.00-trunk.patch, 
> HADOOP-14395.01.patch, HADOOP-14395.01-trunk.patch, HADOOP-14395.02.patch, 
> HADOOP-14395.02-trunk.patch
>
>
> Following HADOOP-14394, it should also provide a {{Builder}} API for 
> {{DistributedFileSystem#append}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append

2017-06-09 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14395:
---
Attachment: HADOOP-14395.02-trunk.patch

> Provide Builder pattern for DistributedFileSystem.append
> 
>
> Key: HADOOP-14395
> URL: https://issues.apache.org/jira/browse/HADOOP-14395
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14395.00.patch, HADOOP-14395.00-trunk.patch, 
> HADOOP-14395.01.patch, HADOOP-14395.01-trunk.patch, HADOOP-14395.02.patch, 
> HADOOP-14395.02-trunk.patch
>
>
> Following HADOOP-14394, it should also provide a {{Builder}} API for 
> {{DistributedFileSystem#append}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14395) Provide Builder pattern for DistributedFileSystem.append

2017-06-09 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14395:
---
Attachment: HADOOP-14395.02.patch

> Provide Builder pattern for DistributedFileSystem.append
> 
>
> Key: HADOOP-14395
> URL: https://issues.apache.org/jira/browse/HADOOP-14395
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14395.00.patch, HADOOP-14395.00-trunk.patch, 
> HADOOP-14395.01.patch, HADOOP-14395.01-trunk.patch, HADOOP-14395.02.patch, 
> HADOOP-14395.02-trunk.patch
>
>
> Following HADOOP-14394, it should also provide a {{Builder}} API for 
> {{DistributedFileSystem#append}}.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14512) WASB atomic rename should not throw exception if the file is neither in src nor in dst when doing the rename

2017-06-09 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044981#comment-16044981
 ] 

Hudson commented on HADOOP-14512:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11851 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11851/])
HADOOP-14512. WASB atomic rename should not throw exception if the file 
(liuml07: rev 325163f23f727e82379d4a385b73aa3a04a510f6)
* (edit) 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java


> WASB atomic rename should not throw exception if the file is neither in src 
> nor in dst when doing the rename
> 
>
> Key: HADOOP-14512
> URL: https://issues.apache.org/jira/browse/HADOOP-14512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Duo Xu
>Assignee: Duo Xu
> Fix For: 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14512.001.patch, HADOOP-14512.002.patch
>
>
> During an atomic rename operation, WASB creates a rename-pending JSON file 
> to record which files need to be renamed and their destination. WASB then 
> reads this file and renames all the files one by one.
> A recent customer incident in HBase revealed a potential bug in the atomic 
> rename implementation.
> For example, below is a rename-pending JSON file,
> {code}
> {
>   FormatVersion: "1.0",
>   OperationUTCTime: "2017-04-29 06:08:57.465",
>   OldFolderName: "hbase\/data\/default\/abc",
>   NewFolderName: "hbase\/.tmp\/data\/default\/abc",
>   FileList: [
> ".tabledesc",
> ".tabledesc\/.tableinfo.01",
> ".tmp",
> "08e698e0b7d4132c0456b16dcf3772af",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0",
>  "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid"
>   ]
> }
> {code}  
> While the HBase regionserver process (which uses the WASB driver underneath) 
> was renaming "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo", the 
> regionserver process crashed or the VM got rebooted due to system 
> maintenance. When the regionserver process started running again, it found 
> the rename-pending JSON file and tried to redo the rename operation.
> However, when it read the first file ".tabledesc" in the file list, it could 
> find the file neither in the src folder nor in the destination folder. It 
> was not in the src folder because it had already been renamed/moved to the 
> destination folder. It was not in the destination folder because HBase 
> cleans up all the files under /hbase/.tmp on startup.
> The current implementation throws an exception saying
> {code}
> else {
> throw new IOException(
> "Attempting to complete rename of file " + srcKey + "/" + fileName
> + " during folder rename redo, and file was not found in source "
> + "or destination.");
>   }
> {code}
> This causes HBase HMaster initialization to fail, and restarting HMaster 
> does not help because the same exception is thrown again.
> My proposal is that if, during the redo, WASB finds a file in neither src 
> nor dst, it should just skip that file and process the next one rather than 
> throw the error and force the user to fix it manually. The reasons are:
> 1. Since the rename-pending JSON file contains file A, if file A is not in 
> src, it must already have been renamed.
> 2. If file A is in neither src nor dst, the upper-layer service must have 
> removed it. Note that during the atomic rename the folder is locked, so the 
> only situation in which a file gets deleted is when the VM reboots or the 
> service process crashes. When the service process restarts, some operations 
> may happen before the atomic rename redo, as in the HBase example above.
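
To make the proposal above concrete, here is a rough sketch of a redo loop 
with the skip behavior. This is a hedged illustration only: {{Store}}, 
{{redoFolderRename}}, and the logging are illustrative stand-ins, not the 
actual {{NativeAzureFileSystem}} code.

{code}
import java.io.IOException;
import java.util.List;

class RenameRedoSketch {
  // Minimal stand-in for the blob-store operations the redo needs.
  interface Store {
    boolean exists(String key) throws IOException;
    void rename(String src, String dst) throws IOException;
  }

  static void redoFolderRename(Store store, String srcKey, String dstKey,
      List<String> fileList) throws IOException {
    for (String fileName : fileList) {
      String src = srcKey + "/" + fileName;
      String dst = dstKey + "/" + fileName;
      if (store.exists(src)) {
        // Still pending: redo the rename.
        store.rename(src, dst);
      } else if (!store.exists(dst)) {
        // In neither src nor dst: it was renamed and then deleted by the
        // upper layer (e.g. HBase cleaning /hbase/.tmp). Proposal: skip it
        // and continue instead of throwing an IOException.
        System.err.println("Skipping " + fileName
            + ": not found in source or destination during rename redo.");
      }
      // else: already in dst; nothing to do.
    }
  }
}
{code}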



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14503) Make RollingAverages a mutable metric

2017-06-09 Thread Hanisha Koneru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hanisha Koneru updated HADOOP-14503:

Attachment: HADOOP-14503.003.patch

Updated patch v03 with the following change:
Discard any existing samples if the RollingAverages parameters (window size or 
number of windows) are changed.
Thanks [~arpitagarwal] for pointing it out.
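
Roughly, the reconfiguration handling looks like the sketch below. This is 
illustrative only (hypothetical class and field names, not the actual patch 
code): samples collected under the old window settings are not comparable 
with new ones, so they are discarded.

{code}
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only, not the actual RollingAverages code.
class RollingWindowState {
  private int numWindows;
  private long windowSizeMs;
  private final Map<String, Double> averages = new HashMap<>();

  synchronized void setWindowParams(int numWindows, long windowSizeMs) {
    if (numWindows != this.numWindows || windowSizeMs != this.windowSizeMs) {
      this.numWindows = numWindows;
      this.windowSizeMs = windowSizeMs;
      // Discard existing samples measured under the old parameters.
      averages.clear();
    }
  }
}
{code}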

> Make RollingAverages a mutable metric
> -
>
> Key: HADOOP-14503
> URL: https://issues.apache.org/jira/browse/HADOOP-14503
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Hanisha Koneru
>Assignee: Hanisha Koneru
> Attachments: HADOOP-14503.001.patch, HADOOP-14503.002.patch, 
> HADOOP-14503.003.patch
>
>
> The RollingAverages metric extends the MutableRatesWithAggregation metric and 
> maintains a group of rolling-average metrics. This class should be allowed to 
> register as a metric with the MetricsSystem.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14501:
-
Attachment: HADOOP-14501.5.patch

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, HADOOP-14501.5.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing Solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml StAX 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}
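
For reference, a hedged sketch of a minimal repro of the general-entity gap 
(assumed, not taken from the Solr tests; it presumes aalto-xml is the JVM's 
default StAX implementation, as after HADOOP-14216):

{code}
import java.io.StringReader;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamReader;

class AaltoEntitySketch {
  public static void main(String[] args) throws Exception {
    String xml = "<?xml version=\"1.0\"?>\n"
        + "<!DOCTYPE conf [<!ENTITY sample \"v\">]>\n"
        + "<conf><p>&sample;</p></conf>";
    // Entity replacement is on by default, i.e. "entity expanding mode".
    XMLInputFactory f = XMLInputFactory.newInstance();
    XMLStreamReader r = f.createXMLStreamReader(new StringReader(xml));
    while (r.hasNext()) {
      r.next();  // expected with aalto-xml: "General entity reference ..."
    }
  }
}
{code}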



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14512) WASB atomic rename should not throw exception if the file is neither in src nor in dst when doing the rename

2017-06-09 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14512:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.2
   3.0.0-alpha4
   Status: Resolved  (was: Patch Available)

Committed to {{trunk}}, {{branch-2}} and {{branch-2.8}} branches. Thanks for 
your contribution [~onpduo]. Thanks for your review [~nitin.ve...@gmail.com] 
and [~shanem].

> WASB atomic rename should not throw exception if the file is neither in src 
> nor in dst when doing the rename
> 
>
> Key: HADOOP-14512
> URL: https://issues.apache.org/jira/browse/HADOOP-14512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Duo Xu
>Assignee: Duo Xu
> Fix For: 3.0.0-alpha4, 2.8.2
>
> Attachments: HADOOP-14512.001.patch, HADOOP-14512.002.patch
>
>
> During an atomic rename operation, WASB creates a rename-pending JSON file 
> to record which files need to be renamed and their destination. WASB then 
> reads this file and renames all the files one by one.
> A recent customer incident in HBase revealed a potential bug in the atomic 
> rename implementation.
> For example, below is a rename-pending JSON file,
> {code}
> {
>   FormatVersion: "1.0",
>   OperationUTCTime: "2017-04-29 06:08:57.465",
>   OldFolderName: "hbase\/data\/default\/abc",
>   NewFolderName: "hbase\/.tmp\/data\/default\/abc",
>   FileList: [
> ".tabledesc",
> ".tabledesc\/.tableinfo.01",
> ".tmp",
> "08e698e0b7d4132c0456b16dcf3772af",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0",
>  "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid"
>   ]
> }
> {code}  
> While the HBase regionserver process (which uses the WASB driver underneath) 
> was renaming "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo", the 
> regionserver process crashed or the VM got rebooted due to system 
> maintenance. When the regionserver process started running again, it found 
> the rename-pending JSON file and tried to redo the rename operation.
> However, when it read the first file ".tabledesc" in the file list, it could 
> find the file neither in the src folder nor in the destination folder. It 
> was not in the src folder because it had already been renamed/moved to the 
> destination folder. It was not in the destination folder because HBase 
> cleans up all the files under /hbase/.tmp on startup.
> The current implementation throws an exception saying
> {code}
> else {
> throw new IOException(
> "Attempting to complete rename of file " + srcKey + "/" + fileName
> + " during folder rename redo, and file was not found in source "
> + "or destination.");
>   }
> {code}
> This causes HBase HMaster initialization to fail, and restarting HMaster 
> does not help because the same exception is thrown again.
> My proposal is that if, during the redo, WASB finds a file in neither src 
> nor dst, it should just skip that file and process the next one rather than 
> throw the error and force the user to fix it manually. The reasons are:
> 1. Since the rename-pending JSON file contains file A, if file A is not in 
> src, it must already have been renamed.
> 2. If file A is in neither src nor dst, the upper-layer service must have 
> removed it. Note that during the atomic rename the folder is locked, so the 
> only situation in which a file gets deleted is when the VM reboots or the 
> service process crashes. When the service process restarts, some operations 
> may happen before the atomic rename redo, as in the HBase example above.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044936#comment-16044936
 ] 

Hadoop QA commented on HADOOP-14501:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
32s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
37s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
27s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
47s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 47s{color} 
| {color:red} root in the patch failed. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  4s{color} | {color:orange} root: The patch generated 1 new + 263 unchanged 
- 1 fixed = 264 total (was 264) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
26s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
11s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 27s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 66m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14501 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872305/HADOOP-14501.4.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux ea3678f2f27e 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 99634d1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12503/artifact/patchprocess/branch-findbugs-hadoop-common-project_hadoop-common-warnings.html
 |
| mvninstall | 

[jira] [Resolved] (HADOOP-14509) InconsistentAmazonS3Client adds extra paths to listStatus() after delete.

2017-06-09 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri resolved HADOOP-14509.
---
Resolution: Duplicate

Resolving as duplicate, thank you [~mackrorysd].

> InconsistentAmazonS3Client adds extra paths to listStatus() after delete.
> -
>
> Key: HADOOP-14509
> URL: https://issues.apache.org/jira/browse/HADOOP-14509
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Aaron Fabbri
>Assignee: Sean Mackrory
>
> While code-reviewing HADOOP-13760, I identified a potential issue in the 
> code that simulates list-after-delete inconsistency. It appeared to work for 
> the existing test cases, but now that we are using the inconsistency 
> injection code for general testing (e.g. HADOOP-14488) we need to make sure 
> it is correct.
> The deliverable is to verify that 
> {{InconsistentAmazonS3Client#restoreListObjects()}} is correct.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14512) WASB atomic rename should not throw exception if the file is neither in src nor in dst when doing the rename

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044849#comment-16044849
 ] 

Hadoop QA commented on HADOOP-14512:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
18s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 51s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14512 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872306/HADOOP-14512.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux a486d27c94b9 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 99634d1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12501/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12501/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> WASB atomic rename should not throw exception if the file is neither in src 
> nor in dst when doing the rename
> 
>
> Key: HADOOP-14512
> URL: https://issues.apache.org/jira/browse/HADOOP-14512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-14512.001.patch, HADOOP-14512.002.patch
>
>
> During an atomic rename operation, WASB creates a rename-pending JSON file to 
> record which files need to be renamed and their destination. Then WASB will 
> read this file and rename 

[jira] [Updated] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14501:
-
Status: Patch Available  (was: Open)

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing Solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml StAX 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14502) Confusion/name conflict between NameNodeActivity#BlockReportNumOps and RpcDetailedActivity#BlockReportNumOps

2017-06-09 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044841#comment-16044841
 ] 

Chen Liang commented on HADOOP-14502:
-

+1 for v002 patch.

> Confusion/name conflict between NameNodeActivity#BlockReportNumOps and 
> RpcDetailedActivity#BlockReportNumOps
> 
>
> Key: HADOOP-14502
> URL: https://issues.apache.org/jira/browse/HADOOP-14502
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Erik Krogen
>Assignee: Erik Krogen
>Priority: Minor
>  Labels: Incompatible
> Attachments: HADOOP-14502.000.patch, HADOOP-14502.001.patch, 
> HADOOP-14502.002.patch
>
>
> Currently the {{BlockReport(NumOps|AvgTime)}} metrics emitted under the 
> {{RpcDetailedActivity}} context and those emitted under the 
> {{NameNodeActivity}} context are actually reporting different things despite 
> having the same name. {{NameNodeActivity}} reports the count/time of _per 
> storage_ block reports, whereas {{RpcDetailedActivity}} reports the 
> count/time of _per datanode_ block reports. This makes for a confusing 
> experience with two metrics having the same name reporting different values. 
> We already have the {{StorageBlockReportsOps}} metric under 
> {{NameNodeActivity}}. Can we make {{StorageBlockReport}} a {{MutableRate}} 
> metric and remove the {{NameNodeActivity#BlockReport}} metric? Open to other 
> suggestions about how to address this as well. The 3.0 release seems like a 
> good time to make this incompatible change.
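
For illustration, a {{MutableRate}}-based metric would emit both 
{{StorageBlockReportNumOps}} and {{StorageBlockReportAvgTime}} from a single 
registration. A hedged sketch (illustrative names, not the actual 
NameNodeActivity code):

{code}
import org.apache.hadoop.metrics2.lib.MetricsRegistry;
import org.apache.hadoop.metrics2.lib.MutableRate;

class StorageBlockReportMetricSketch {
  private final MetricsRegistry registry =
      new MetricsRegistry("NameNodeActivity");
  private final MutableRate storageBlockReport =
      registry.newRate("StorageBlockReport", "Storage block report", false);

  void onStorageBlockReport(long elapsedTimeMs) {
    // One sample per storage block report; NumOps and AvgTime are derived.
    storageBlockReport.add(elapsedTimeMs);
  }
}
{code}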



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14394) Provide Builder pattern for DistributedFileSystem.create

2017-06-09 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14394?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-14394:
---
Attachment: HADOOP-14394.05.patch

Fix test failure and checkstyle warnings.

The failure is due to {{NameNodeConnector#checkAndMarkRunning}} not calling 
{{recursive()}}, while we changed the create behavior to no longer create 
parent directories (i.e., {{recursive}}) by default.
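
For reference, a minimal usage sketch of the builder-based create, assuming 
the {{FileSystem#createFile()}} builder introduced in HDFS-11170 (exact 
builder methods may differ between patch revisions):

{code}
import java.io.IOException;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class CreateBuilderSketch {
  static void write(FileSystem fs, Path path, byte[] data) throws IOException {
    // Parent directories are no longer created by default, so callers such
    // as NameNodeConnector#checkAndMarkRunning must opt in via recursive().
    try (FSDataOutputStream out = fs.createFile(path)
        .overwrite(true)
        .recursive()   // create missing parent directories
        .build()) {
      out.write(data);
    }
  }
}
{code}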

> Provide Builder pattern for DistributedFileSystem.create
> 
>
> Key: HADOOP-14394
> URL: https://issues.apache.org/jira/browse/HADOOP-14394
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
> Attachments: HADOOP-14394.00.patch, HADOOP-14394.01.patch, 
> HADOOP-14394.02.patch, HADOOP-14394.03.patch, HADOOP-14394.04.patch, 
> HADOOP-14394.05.patch
>
>
> This JIRA continues to refine the {{FSOutputStreamBuilder}} interface 
> introduced in HDFS-11170. 
> It should also provide a spec for the Builder API.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14465) LdapGroupsMapping - support user and group search base

2017-06-09 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044800#comment-16044800
 ] 

Mingliang Liu commented on HADOOP-14465:


+1

Minor comments:
# By
{quote}
use getTrimmed() to strip off whitespace.
{quote}
I think Steve meant that
{code}
userbaseDN = conf.get(USER_BASE_DN_KEY, baseDN).trim();
{code}
should be replaced with
{code}
userbaseDN = conf.getTrimmed(USER_BASE_DN_KEY, baseDN);
{code}
The first will work, though, I think.
# The two tests _may_ be able to share code via a helper method: {{private void 
testGetGroupsWithBaseDN(Configuration conf, String userBaseDN, String 
groupBaseDN)}}.
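
For illustration, such a helper might look like the fragment below. This is a 
hedged sketch with hypothetical fixture names, assuming the patch's 
{{USER_BASE_DN_KEY}}/{{GROUP_BASE_DN_KEY}} constants; the real test setup in 
{{TestLdapGroupsMapping}} will differ (e.g. it mocks the LDAP context).

{code}
// Hypothetical test helper; both base-DN tests would delegate to it.
private void testGetGroupsWithBaseDN(Configuration conf, String userBaseDN,
    String groupBaseDN) throws Exception {
  conf.set(LdapGroupsMapping.USER_BASE_DN_KEY, userBaseDN);
  conf.set(LdapGroupsMapping.GROUP_BASE_DN_KEY, groupBaseDN);
  LdapGroupsMapping mapping = new LdapGroupsMapping();
  mapping.setConf(conf);
  List<String> groups = mapping.getGroups("some-user");
  assertEquals(Arrays.asList("group1", "group2"), groups);
}
{code}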

> LdapGroupsMapping - support user and group search base
> --
>
> Key: HADOOP-14465
> URL: https://issues.apache.org/jira/browse/HADOOP-14465
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Reporter: Shwetha G S
>Assignee: Shwetha G S
> Attachments: 
> 0001-HADOOP-14465-LdapGroupsMapping-support-user-and-grou.patch, 
> HADOOP-14465-v2.patch, HADOOP-14465-v4.patch
>
>
> org.apache.hadoop.security.LdapGroupsMapping currently supports 
> hadoop.security.group.mapping.ldap.base as the search base for both user and 
> group searches. However, this doesn't work when the user and group search 
> bases are different, e.g. ou=Users,dc=xxx,dc=com and ou=Groups,dc=xxx,dc=com. 
> Expose separate configs for the user and group search bases, each defaulting 
> to the existing hadoop.security.group.mapping.ldap.base config.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14512) WASB atomic rename should not throw exception if the file is neither in src nor in dst when doing the rename

2017-06-09 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-14512:

Status: Patch Available  (was: In Progress)

> WASB atomic rename should not throw exception if the file is neither in src 
> nor in dst when doing the rename
> 
>
> Key: HADOOP-14512
> URL: https://issues.apache.org/jira/browse/HADOOP-14512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-14512.001.patch, HADOOP-14512.002.patch
>
>
> During an atomic rename operation, WASB creates a rename-pending JSON file 
> to record which files need to be renamed and their destination. WASB then 
> reads this file and renames all the files one by one.
> A recent customer incident in HBase revealed a potential bug in the atomic 
> rename implementation.
> For example, below is a rename-pending JSON file,
> {code}
> {
>   FormatVersion: "1.0",
>   OperationUTCTime: "2017-04-29 06:08:57.465",
>   OldFolderName: "hbase\/data\/default\/abc",
>   NewFolderName: "hbase\/.tmp\/data\/default\/abc",
>   FileList: [
> ".tabledesc",
> ".tabledesc\/.tableinfo.01",
> ".tmp",
> "08e698e0b7d4132c0456b16dcf3772af",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0",
>  "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid"
>   ]
> }
> {code}  
> While the HBase regionserver process (which uses the WASB driver underneath) 
> was renaming "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo", the 
> regionserver process crashed or the VM got rebooted due to system 
> maintenance. When the regionserver process started running again, it found 
> the rename-pending JSON file and tried to redo the rename operation.
> However, when it read the first file ".tabledesc" in the file list, it could 
> find the file neither in the src folder nor in the destination folder. It 
> was not in the src folder because it had already been renamed/moved to the 
> destination folder. It was not in the destination folder because HBase 
> cleans up all the files under /hbase/.tmp on startup.
> The current implementation throws an exception saying
> {code}
> else {
> throw new IOException(
> "Attempting to complete rename of file " + srcKey + "/" + fileName
> + " during folder rename redo, and file was not found in source "
> + "or destination.");
>   }
> {code}
> This causes HBase HMaster initialization to fail, and restarting HMaster 
> does not help because the same exception is thrown again.
> My proposal is that if, during the redo, WASB finds a file in neither src 
> nor dst, it should just skip that file and process the next one rather than 
> throw the error and force the user to fix it manually. The reasons are:
> 1. Since the rename-pending JSON file contains file A, if file A is not in 
> src, it must already have been renamed.
> 2. If file A is in neither src nor dst, the upper-layer service must have 
> removed it. Note that during the atomic rename the folder is locked, so the 
> only situation in which a file gets deleted is when the VM reboots or the 
> service process crashes. When the service process restarts, some operations 
> may happen before the atomic rename redo, as in the HBase example above.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14512) WASB atomic rename should not throw exception if the file is neither in src nor in dst when doing the rename

2017-06-09 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-14512:

Attachment: HADOOP-14512.002.patch

Addressed all the comments.

> WASB atomic rename should not throw exception if the file is neither in src 
> nor in dst when doing the rename
> 
>
> Key: HADOOP-14512
> URL: https://issues.apache.org/jira/browse/HADOOP-14512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-14512.001.patch, HADOOP-14512.002.patch
>
>
> During an atomic rename operation, WASB creates a rename-pending JSON file 
> to record which files need to be renamed and their destination. WASB then 
> reads this file and renames all the files one by one.
> A recent customer incident in HBase revealed a potential bug in the atomic 
> rename implementation.
> For example, below is a rename-pending JSON file,
> {code}
> {
>   FormatVersion: "1.0",
>   OperationUTCTime: "2017-04-29 06:08:57.465",
>   OldFolderName: "hbase\/data\/default\/abc",
>   NewFolderName: "hbase\/.tmp\/data\/default\/abc",
>   FileList: [
> ".tabledesc",
> ".tabledesc\/.tableinfo.01",
> ".tmp",
> "08e698e0b7d4132c0456b16dcf3772af",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0",
>  "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid"
>   ]
> }
> {code}  
> While the HBase regionserver process (which uses the WASB driver underneath) 
> was renaming "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo", the 
> regionserver process crashed or the VM got rebooted due to system 
> maintenance. When the regionserver process started running again, it found 
> the rename-pending JSON file and tried to redo the rename operation.
> However, when it read the first file ".tabledesc" in the file list, it could 
> find the file neither in the src folder nor in the destination folder. It 
> was not in the src folder because it had already been renamed/moved to the 
> destination folder. It was not in the destination folder because HBase 
> cleans up all the files under /hbase/.tmp on startup.
> The current implementation throws an exception saying
> {code}
> else {
> throw new IOException(
> "Attempting to complete rename of file " + srcKey + "/" + fileName
> + " during folder rename redo, and file was not found in source "
> + "or destination.");
>   }
> {code}
> This causes HBase HMaster initialization to fail, and restarting HMaster 
> does not help because the same exception is thrown again.
> My proposal is that if, during the redo, WASB finds a file in neither src 
> nor dst, it should just skip that file and process the next one rather than 
> throw the error and force the user to fix it manually. The reasons are:
> 1. Since the rename-pending JSON file contains file A, if file A is not in 
> src, it must already have been renamed.
> 2. If file A is in neither src nor dst, the upper-layer service must have 
> removed it. Note that during the atomic rename the folder is locked, so the 
> only situation in which a file gets deleted is when the VM reboots or the 
> service process crashes. When the service process restarts, some operations 
> may happen before the atomic rename redo, as in the HBase example above.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14512) WASB atomic rename should not throw exception if the file is neither in src nor in dst when doing the rename

2017-06-09 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-14512:

Status: In Progress  (was: Patch Available)

> WASB atomic rename should not throw exception if the file is neither in src 
> nor in dst when doing the rename
> 
>
> Key: HADOOP-14512
> URL: https://issues.apache.org/jira/browse/HADOOP-14512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-14512.001.patch, HADOOP-14512.002.patch
>
>
> During an atomic rename operation, WASB creates a rename-pending JSON file 
> to record which files need to be renamed and their destination. WASB then 
> reads this file and renames all the files one by one.
> A recent customer incident in HBase revealed a potential bug in the atomic 
> rename implementation.
> For example, below is a rename-pending JSON file,
> {code}
> {
>   FormatVersion: "1.0",
>   OperationUTCTime: "2017-04-29 06:08:57.465",
>   OldFolderName: "hbase\/data\/default\/abc",
>   NewFolderName: "hbase\/.tmp\/data\/default\/abc",
>   FileList: [
> ".tabledesc",
> ".tabledesc\/.tableinfo.01",
> ".tmp",
> "08e698e0b7d4132c0456b16dcf3772af",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0",
>  "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid"
>   ]
> }
> {code}  
> While the HBase regionserver process (which uses the WASB driver underneath) 
> was renaming "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo", the 
> regionserver process crashed or the VM got rebooted due to system 
> maintenance. When the regionserver process started running again, it found 
> the rename-pending JSON file and tried to redo the rename operation.
> However, when it read the first file ".tabledesc" in the file list, it could 
> find the file neither in the src folder nor in the destination folder. It 
> was not in the src folder because it had already been renamed/moved to the 
> destination folder. It was not in the destination folder because HBase 
> cleans up all the files under /hbase/.tmp on startup.
> The current implementation throws an exception saying
> {code}
> else {
> throw new IOException(
> "Attempting to complete rename of file " + srcKey + "/" + fileName
> + " during folder rename redo, and file was not found in source "
> + "or destination.");
>   }
> {code}
> This causes HBase HMaster initialization to fail, and restarting HMaster 
> does not help because the same exception is thrown again.
> My proposal is that if, during the redo, WASB finds a file in neither src 
> nor dst, it should just skip that file and process the next one rather than 
> throw the error and force the user to fix it manually. The reasons are:
> 1. Since the rename-pending JSON file contains file A, if file A is not in 
> src, it must already have been renamed.
> 2. If file A is in neither src nor dst, the upper-layer service must have 
> removed it. Note that during the atomic rename the folder is locked, so the 
> only situation in which a file gets deleted is when the VM reboots or the 
> service process crashes. When the service process restarts, some operations 
> may happen before the atomic rename redo, as in the HBase example above.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14501:
-
Attachment: HADOOP-14501.4.patch

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing Solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml StAX 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14501:
-
Attachment: (was: HADOOP-14501.4.patch)

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing Solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml StAX 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14501:
-
Status: Open  (was: Patch Available)

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing Solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml StAX 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14501:
-
Attachment: HADOOP-14501.4.patch

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing Solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml StAX 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14501:
-
Attachment: HADOOP-14501.4-branch-2.patch
HADOOP-14501.4.patch

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing Solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml StAX 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14501:
-
Attachment: (was: HADOOP-14501.4.patch)

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501.4-branch-2.patch, HADOOP-14501.4.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing Solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml stax 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14514) Successfully closed file can stay under-replicated.

2017-06-09 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-14514:
---

 Summary: Successfully closed file can stay under-replicated.
 Key: HADOOP-14514
 URL: https://issues.apache.org/jira/browse/HADOOP-14514
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee
Priority: Critical


If a certain set of conditions holds at the time of a file creation, a block of 
the file can stay under-replicated. This is because the block is mistakenly 
taken out of the under-replicated block queue and never gets re-evaluated.

Re-evaluation can be triggered if
- a node containing a replica dies,
- setrep is called, or
- the NN replication queues are reinitialized (NN failover or restart).

If none of these happens, the block stays under-replicated. 
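In practice, re-evaluation can be forced manually via the setrep trigger above, 
e.g. (the path and replication factor below are only illustrative):
{noformat}
hdfs dfs -setrep -w 3 /path/to/affected/file
{noformat}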

Here is how it happens.
1) A replica is finalized, but the ACK does not reach the upstream node in 
time. The IBR is also delayed.
2) A close recovery happens, which updates the gen stamp of the "healthy" 
replicas.
3) The file is closed with the healthy replicas. It is added to the replication 
queue.
4) A replication is scheduled, so the block is added to the pending replication 
list. The replication target picked happens to be the failed node from 1).
5) The old IBR is finally received for the failed/excluded node. In the 
meantime, the replication fails, because there is already a finalized replica 
(with an older gen stamp) on the node.
6) The IBR processing removes the block from the pending list, adds it to the 
corrupt replicas list, and then issues an invalidation. Since the block is in 
neither the replication queue nor the pending list, it stays under-replicated.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14513) A little performance improvement of HarFileSystem

2017-06-09 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044742#comment-16044742
 ] 

Ravi Prakash commented on HADOOP-14513:
---

Hi Hu!

Thanks for your contribution. Do you have any benchmarks or profiles that show 
the improvement? Do you know whether the JVM already performs this optimization 
itself? See https://en.wikipedia.org/wiki/Loop-invariant_code_motion
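
For what it's worth, a JMH micro-benchmark along these lines could answer that 
empirically. This is only a sketch: it assumes JMH is on the classpath, and the 
class and method names are made up for illustration.

{code:title=ArchivePathBenchmark.java (illustrative)|borderStyle=solid}
import org.apache.hadoop.fs.Path;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;

@BenchmarkMode(Mode.Throughput)
@State(Scope.Benchmark)
public class ArchivePathBenchmark {
  private final Path p = new Path("/a/b/c/d/e/archive.har/inner/file");

  @Benchmark
  public Path depthInLoopCondition() {
    Path retPath = null;
    Path tmp = p;
    // depth() is re-evaluated on every iteration of the loop
    for (int i = 0; i < p.depth(); i++) {
      if (tmp.toString().endsWith(".har")) {
        retPath = tmp;
        break;
      }
      tmp = tmp.getParent();
    }
    return retPath;
  }

  @Benchmark
  public Path depthHoisted() {
    Path retPath = null;
    Path tmp = p;
    // depth() is evaluated exactly once, before the loop starts
    for (int i = 0, depth = p.depth(); i < depth; i++) {
      if (tmp.toString().endsWith(".har")) {
        retPath = tmp;
        break;
      }
      tmp = tmp.getParent();
    }
    return retPath;
  }
}
{code}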


> A little performance improvement of HarFileSystem
> -
>
> Key: HADOOP-14513
> URL: https://issues.apache.org/jira/browse/HADOOP-14513
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha3
>Reporter: hu xiaodong
>Assignee: hu xiaodong
>Priority: Trivial
> Attachments: HADOOP-14513.001.patch
>
>
> In the Java source of HarFileSystem.java:
> {code:title=HarFileSystem.java|borderStyle=solid}
> ...
> ...
> private Path archivePath(Path p) {
> Path retPath = null;
> Path tmp = p;
> 
> // I think p.depth() need not be re-evaluated on every loop iteration;
> // depth() is an expensive calculation
> for (int i = 0; i < p.depth(); i++) {
>   if (tmp.toString().endsWith(".har")) {
> retPath = tmp;
> break;
>   }
>   tmp = tmp.getParent();
> }
> return retPath;
>   }
> ...
> ...
> {code}
>  
> I think the following is more suitable:
> {code:title=HarFileSystem.java|borderStyle=solid}
> ...
> ...
> private Path archivePath(Path p) {
> Path retPath = null;
> Path tmp = p;
> 
> // evaluate depth() only once, before the loop
> for (int i = 0, depth = p.depth(); i < depth; i++) {
>   if (tmp.toString().endsWith(".har")) {
> retPath = tmp;
> break;
>   }
>   tmp = tmp.getParent();
> }
> return retPath;
>   }
> ...
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044693#comment-16044693
 ] 

Hadoop QA commented on HADOOP-14501:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 
26s{color} | {color:red} hadoop-common-project/hadoop-common in trunk has 19 
extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
41s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m  1s{color} | {color:orange} root: The patch generated 1 new + 263 unchanged 
- 1 fixed = 264 total (was 264) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 35s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.sftp.TestSFTPFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14501 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872276/HADOOP-14501.3.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 355ca545973e 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 99634d1 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 

[jira] [Commented] (HADOOP-14512) WASB atomic rename should not throw exception if the file is neither in src nor in dst when doing the rename

2017-06-09 Thread Shane Mainali (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044569#comment-16044569
 ] 

Shane Mainali commented on HADOOP-14512:


Thanks Duo, the change looks good to me. I'd suggest tweaking the warning 
message to add something like "This must mean the rename already completed."
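
To make the suggestion concrete, a rough sketch (illustrative only; it assumes 
the surrounding redo loop and a {{LOG}} field from the existing code, and is 
not the actual patch):

{code}
} else {
  // Neither source nor destination contains the file: this must mean the
  // rename of this entry already completed, so warn and skip it instead of
  // failing the whole redo.
  LOG.warn("Attempting to complete rename of file " + srcKey + "/" + fileName
      + " during folder rename redo, and file was not found in source"
      + " or destination. This must mean the rename already completed."
      + " Skipping this file.");
}
{code}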

> WASB atomic rename should not throw exception if the file is neither in src 
> nor in dst when doing the rename
> 
>
> Key: HADOOP-14512
> URL: https://issues.apache.org/jira/browse/HADOOP-14512
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-14512.001.patch
>
>
> During atomic rename operation, WASB creates a rename pending json file to 
> document which files need to be renamed and the destination. Then WASB will 
> read this file and rename all the files one by one.
> There is a recent customer incident in HBase showing a potential bug in the 
> atomic rename implementation,
> For example, below is a rename pending json file,
> {code}
> {
>   FormatVersion: "1.0",
>   OperationUTCTime: "2017-04-29 06:08:57.465",
>   OldFolderName: "hbase\/data\/default\/abc",
>   NewFolderName: "hbase\/.tmp\/data\/default\/abc",
>   FileList: [
> ".tabledesc",
> ".tabledesc\/.tableinfo.01",
> ".tmp",
> "08e698e0b7d4132c0456b16dcf3772af",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid",
> "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo",
> "08e698e0b7d4132c0456b16dcf3772af\/0",
>  "08e698e0b7d4132c0456b16dcf3772af\/0\/617294e0737e4d37920e1609cf539a83",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits",
> "08e698e0b7d4132c0456b16dcf3772af\/recovered.edits\/185.seqid"
>   ]
> }
> {code}  
> When the HBase regionserver process (which uses the WASB driver underneath) 
> was renaming "08e698e0b7d4132c0456b16dcf3772af\/.regioninfo", the regionserver 
> process crashed or the VM got rebooted due to system maintenance. When the 
> regionserver process started running again, it found the rename pending json 
> file and tried to redo the rename operation. 
> However, when it read the first file ".tabledesc" in the file list, it could 
> not find this file in the src folder and it also could not find the file in 
> the destination folder. It could not find it in the src folder because the 
> file had already been renamed/moved to the destination folder. It could not 
> find it in the destination folder because when HBase starts, it cleans up all 
> the files under /hbase/.tmp.
> The current implementation will throw exceptions saying
> {code}
> else {
> throw new IOException(
> "Attempting to complete rename of file " + srcKey + "/" + fileName
> + " during folder rename redo, and file was not found in source "
> + "or destination.");
>   }
> {code}
> This causes HBase HMaster initialization to fail, and restarting the HMaster 
> will not help because the same exception is thrown again.
> My proposal is that if, during the redo, WASB finds a file in neither src nor 
> dst, WASB should just skip this file and process the next one rather than 
> throw the error and make the user fix it manually. The reasons are:
> 1. Since the rename pending json file contains file A, if file A is not in 
> src, it must have already been renamed.
> 2. If file A is in neither src nor dst, the upper-layer service must have 
> removed it. One thing to note is that during the atomic rename the folder is 
> locked, so the only way the file can get deleted is when the VM reboots or 
> the service process crashes. When the service process restarts, some 
> operations may happen before the atomic rename redo, as in the HBase example 
> above.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14501) aalto-xml cannot handle some odd XML features

2017-06-09 Thread Jonathan Eagles (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles updated HADOOP-14501:
-
Attachment: HADOOP-14501.3.patch
HADOOP-14501.3-branch-2.patch

> aalto-xml cannot handle some odd XML features
> -
>
> Key: HADOOP-14501
> URL: https://issues.apache.org/jira/browse/HADOOP-14501
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Andrew Wang
>Assignee: Jonathan Eagles
>Priority: Blocker
> Attachments: HADOOP-14501.1.patch, HADOOP-14501.2.patch, 
> HADOOP-14501.3-branch-2.patch, HADOOP-14501.3.patch, 
> HADOOP-14501-branch-2.1.patch
>
>
> [~hgadre] tried testing Solr with a Hadoop 3 client. He saw various test case 
> failures due to what look like functionality gaps in the new aalto-xml stax 
> implementation pulled in by HADOOP-14216:
> {noformat}
>[junit4]> Throwable #1: com.fasterxml.aalto.WFCException: Illegal XML 
> character ('ü' (code 252))
> 
>[junit4]> Caused by: com.fasterxml.aalto.WFCException: General entity 
> reference () encountered in entity expanding mode: operation not (yet) 
> implemented
> ...
>[junit4]> Throwable #1: org.apache.solr.common.SolrException: General 
> entity reference () encountered in entity expanding mode: operation 
> not (yet) implemented
> {noformat}
> These were from the following test case executions:
> {noformat}
> NOTE: reproduce with: ant test  -Dtestcase=DocumentAnalysisRequestHandlerTest 
> -Dtests.method=testCharsetOutsideDocument -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=und -Dtests.timezone=Atlantic/Faeroe 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=MBeansHandlerTest 
> -Dtests.method=testXMLDiffWithExternalEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=en-US -Dtests.timezone=US/Aleutian 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testExternalEntities -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> NOTE: reproduce with: ant test  -Dtestcase=XmlUpdateRequestHandlerTest 
> -Dtests.method=testNamedEntity -Dtests.seed=2F739D88D9C723CA 
> -Dtests.slow=true -Dtests.locale=hr -Dtests.timezone=America/Barbados 
> -Dtests.asserts=true -Dtests.file.encoding=US-ASCII
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044522#comment-16044522
 ] 

Hadoop QA commented on HADOOP-14457:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
13s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
29s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 20m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14457 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872269/HADOOP-14457-HADOOP-13345.009.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4bc9573eb806 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 6a06ed8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12499/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12499/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12499/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch, 
> HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch, 
> 

[jira] [Updated] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-09 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14457:
---
Attachment: HADOOP-14457-HADOOP-13345.009.patch

> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Sean Mackrory
>Assignee: Sean Mackrory
> Attachments: HADOOP-14457-HADOOP-13345.001.patch, 
> HADOOP-14457-HADOOP-13345.002.patch, HADOOP-14457-HADOOP-13345.003.patch, 
> HADOOP-14457-HADOOP-13345.004.patch, HADOOP-14457-HADOOP-13345.005.patch, 
> HADOOP-14457-HADOOP-13345.006.patch, HADOOP-14457-HADOOP-13345.007.patch, 
> HADOOP-14457-HADOOP-13345.008.patch, HADOOP-14457-HADOOP-13345.009.patch
>
>
> Not a great test yet, but it at least reliably demonstrates the issue. 
> LocalMetadataStore will sometimes erroneously report that a directory is 
> empty with isAuthoritative = true when it *definitely* has children the 
> metadatastore should know about. It doesn't appear to happen if the children 
> are just directories. The fact that it's returning an empty listing is 
> concerning, but the fact that it says it's authoritative *might* be a second 
> bug.
> {code}
> diff --git 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
>  
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> index 78b3970..1821d19 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
> @@ -965,7 +965,7 @@ public boolean hasMetadataStore() {
>}
>  
>@VisibleForTesting
> -  MetadataStore getMetadataStore() {
> +  public MetadataStore getMetadataStore() {
>  return metadataStore;
>}
>  
> diff --git 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
>  
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> index 4339649..881bdc9 100644
> --- 
> a/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> +++ 
> b/hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractRename.java
> @@ -23,6 +23,11 @@
>  import org.apache.hadoop.fs.contract.AbstractFSContract;
>  import org.apache.hadoop.fs.FileSystem;
>  import org.apache.hadoop.fs.Path;
> +import org.apache.hadoop.fs.s3a.S3AFileSystem;
> +import org.apache.hadoop.fs.s3a.Tristate;
> +import org.apache.hadoop.fs.s3a.s3guard.DirListingMetadata;
> +import org.apache.hadoop.fs.s3a.s3guard.MetadataStore;
> +import org.junit.Test;
>  
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.dataset;
>  import static org.apache.hadoop.fs.contract.ContractTestUtils.writeDataset;
> @@ -72,4 +77,24 @@ public void testRenameDirIntoExistingDir() throws 
> Throwable {
>  boolean rename = fs.rename(srcDir, destDir);
>  assertFalse("s3a doesn't support rename to non-empty directory", rename);
>}
> +
> +  @Test
> +  public void testMkdirPopulatesFileAncestors() throws Exception {
> +final FileSystem fs = getFileSystem();
> +final MetadataStore ms = ((S3AFileSystem) fs).getMetadataStore();
> +final Path parent = path("testMkdirPopulatesFileAncestors/source");
> +try {
> +  fs.mkdirs(parent);
> +  final Path nestedFile = new Path(parent, "dir1/dir2/dir3/file4");
> +  byte[] srcDataset = dataset(256, 'a', 'z');
> +  writeDataset(fs, nestedFile, srcDataset, srcDataset.length,
> +  1024, false);
> +
> +  DirListingMetadata list = ms.listChildren(parent);
> +  assertTrue("MetadataStore falsely reports authoritative empty list",
> +  list.isEmpty() == Tristate.FALSE || !list.isAuthoritative());
> +} finally {
> +  fs.delete(parent, true);
> +}
> +  }
>  }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14457) create() does not notify metadataStore of parent directories or ensure they're not existing files

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044486#comment-16044486
 ] 

Hadoop QA commented on HADOOP-14457:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
24s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
25s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
36s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 14s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 2 
new + 7 unchanged - 0 fixed = 9 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14457 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872260/HADOOP-14457-HADOOP-13345.008.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 13996aa2862a 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 6a06ed8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12498/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12498/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12498/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12498/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> create() does not notify metadataStore of parent directories or ensure 
> they're not existing files
> -
>
> Key: HADOOP-14457
> URL: https://issues.apache.org/jira/browse/HADOOP-14457
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>

[jira] [Commented] (HADOOP-14488) s3guard listStatus fails after renaming file into directory

2017-06-09 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044485#comment-16044485
 ] 

Hadoop QA commented on HADOOP-14488:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
28s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
38s{color} | {color:red} hadoop-tools/hadoop-aws in HADOOP-13345 has 1 extant 
Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
17s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
46s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 22m 33s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14488 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12872261/HADOOP-14488-HADOOP-13345-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux aac053d60d2c 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / 6a06ed8 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12497/artifact/patchprocess/branch-findbugs-hadoop-tools_hadoop-aws-warnings.html
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12497/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12497/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> s3guard listStatus fails after renaming file into directory
> ---
>
> Key: HADOOP-14488
> URL: https://issues.apache.org/jira/browse/HADOOP-14488
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-14488-HADOOP-13345-001.patch, 
> HADOOP-14488-HADOOP-13345-002.patch, HADOOP-14488-HADOOP-13345-003.patch, 
> output.txt
>
>
> Running scala integration test with inconsistent s3 client & local DDB enabled
> {code}
> 

[jira] [Commented] (HADOOP-14424) Add CRC32C performance test.

2017-06-09 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16044468#comment-16044468
 ] 

Masatake Iwasaki commented on HADOOP-14424:
---

Thanks for working on this, [~GeLiXin].

It would be nice if the duplicate code in Crc32PerformanceTest and 
Crc32CPerformanceTest could be avoided. I think it is OK to just add test cases 
for CRC32C to the existing Crc32PerformanceTest. In that case, displaying diffs 
against all of the previous benchmark results could be noisy; the diff should 
either be omitted or be printed against just one of the results.
{noformat}
//compare result with previous
for(BenchResult p : previous) {
  final double diff = (result.mbps - p.mbps) / p.mbps * 100;
  printCell(String.format("%5.1f%%", diff), diffStr.length(), out);
}
previous.add(result);
  }
{noformat}
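
For example, something like this would print the diff against only the first 
recorded result (a sketch that reuses the identifiers from the snippet above 
and assumes {{previous}} is a {{List<BenchResult>}}):
{noformat}
// compare the result only with the first recorded benchmark result
if (!previous.isEmpty()) {
  final BenchResult base = previous.get(0);
  final double diff = (result.mbps - base.mbps) / base.mbps * 100;
  printCell(String.format("%5.1f%%", diff), diffStr.length(), out);
}
previous.add(result);
{noformat}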

> Add CRC32C performance test.
> 
>
> Key: HADOOP-14424
> URL: https://issues.apache.org/jira/browse/HADOOP-14424
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0-alpha2
>Reporter: LiXin Ge
>Assignee: LiXin Ge
>Priority: Minor
>  Labels: test
> Attachments: HADOOP-14424.patch
>
>
> The default checksum algorithm of Hadoop is CRC32C, so we should add a new 
> test to compare CRC32C chunked verification implementations.
> This test is based on Crc32PerformanceTest. What I have done in this test is:
> 1. Add a CRC32C performance test.
> 2. CRC32C is not supported by java.util.zip in the Java JDK, so it is removed 
> from this test.
> 3. The user can choose either a direct or a non-direct buffer when running 
> this test manually.
> 4. Use verifyChunkedSumsByteArray for the native path to support the 
> non-direct native test.
> The test result in my environment is:
> [root@master bin]# ./hadoop org.apache.hadoop.util.Crc32CPerformanceTest
>  java.version = 1.8.0_111
> java.runtime.name = Java(TM) SE Runtime Environment
>  java.runtime.version = 1.8.0_111-b14
>   java.vm.version = 25.111-b14
>java.vm.vendor = Oracle Corporation
>  java.vm.name = Java HotSpot(TM) 64-Bit Server VM
> java.vm.specification.version = 1.8
>java.specification.version = 1.8
>   os.arch = amd64
>   os.name = Linux
>os.version = 2.6.33.20
> Data Length = 64 MB
> Trials  = 5
> Direct Buffer Performance Table (bpc: byte-per-crc in MB/sec; #T: #Theads)
> |  bpc  | #T || PureJava ||   Native | % diff |
> |32 |  1 | 394.0 |4156.2 | 954.9% |
> |32 |  2 | 400.5 |3679.7 | 818.7% |
> |32 |  4 | 401.8 |2657.0 | 561.3% |
> |32 |  8 | 389.1 |1633.8 | 319.9% |
> |32 | 16 | 222.2 |1116.3 | 402.5% |
> |  bpc  | #T || PureJava ||   Native | % diff |
> |64 |  1 | 465.0 |5931.0 | 1175.5% |
> |64 |  2 | 468.8 |1839.2 | 292.3% |
> |64 |  4 | 460.4 |2968.3 | 544.7% |
> |64 |  8 | 452.4 |1925.7 | 325.6% |
> |64 | 16 | 246.9 |1291.8 | 423.3% |
> |  bpc  | #T || PureJava ||   Native | % diff |
> |   128 |  1 | 522.0 |6147.8 | 1077.6% |
> |   128 |  2 | 366.0 |4758.5 | 1200.2% |
> |   128 |  4 | 307.8 |3265.1 | 960.8% |
> |   128 |  8 | 283.6 |2092.2 | 637.6% |
> |   128 | 16 | 219.9 |1226.1 | 457.6% |
> |  bpc  | #T || PureJava ||   Native | % diff |
> |   256 |  1 | 550.7 |3177.6 | 477.0% |
> |   256 |  2 | 538.6 |1933.2 | 258.9% |
> |   256 |  4 | 427.2 |3278.1 | 667.3% |
> |   256 |  8 | 420.8 |2272.3 | 440.0% |
> |   256 | 16 | 294.0 |1311.2 | 346.0% |
> |  bpc  | #T || PureJava ||   Native | % diff |
> |   512 |  1 | 553.4 |3690.4 | 566.9% |
> |   512 |  2 | 455.6 |4974.1 | 991.7% |
> |   512 |  4 | 494.2 |3406.4 | 589.2% |
> |   512 |  8 | 431.4 |2257.0 | 423.2% |
> |   512 | 16 | 316.3 |1272.0 | 302.2% |
> |  bpc  | #T || PureJava ||   Native | % diff |
> |  1024 |  1 | 566.1 |3520.0 | 521.8% |
> |  1024 |  2 | 508.7 |4437.4 | 772.3% |
> |  1024 |  4 | 520.7 |3422.6 | 557.4% |
> |  1024 |  8 | 501.8 |2124.7 | 323.4% |
> |  1024 | 16 | 340.6 |1305.0 | 283.2% |
> |  bpc  | #T || PureJava ||   Native | % diff |
> |  2048 |  1 | 535.1 |5438.5 | 916.4% |
> |  2048 |  2 | 537.3 |4668.3 | 768.8% |
> |  2048 |  4 | 529.2 |2417.2 | 356.7% |
> |  2048 |  8 | 485.1 |2249.8 | 363.8% |
> |  2048 | 16 | 334.3 |1265.7 | 278.6% |
> |  bpc  | #T || PureJava ||   Native | % diff |
> |  4096 |  1 | 563.0 |7264.0 | 1190.1% |
> |  4096 |  2 | 538.8 |5681.4 | 954.4% |
> |  4096 |  4 | 528.9 |3107.6 | 487.5% |
> |  4096 |  8 | 521.8 |
