[jira] [Updated] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12457:
---
Assignee: Akira AJISAKA
Target Version/s: 2.8.0
  Status: Patch Available  (was: Reopened)

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12509) org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs failing

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12509:

Attachment: Hadoop-common-trunk-Java8 #594 test - testKeyACLs [Jenkins].pdf

> org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs failing
> ---
>
> Key: HADOOP-12509
> URL: https://issues.apache.org/jira/browse/HADOOP-12509
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12509-001.patch, Hadoop-common-trunk-Java8 #594 
> test - testKeyACLs [Jenkins].pdf
>
>
> Failure of Jenkins in trunk, test 
> {{org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12513) Dockerfile lacks initial `apt-get update`

2015-10-26 Thread Akihiro Suda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akihiro Suda updated HADOOP-12513:
--
Status: Patch Available  (was: Open)

> Dockerfile lacks initial `apt-get update`
> -
>
> Key: HADOOP-12513
> URL: https://issues.apache.org/jira/browse/HADOOP-12513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akihiro Suda
>Priority: Trivial
> Attachments: HADOOP-12513.patch
>
>
> [Dockerfile|https://github.com/apache/hadoop/blob/1aa735c188a308ca608694546c595e3c51f38612/dev-support/docker/Dockerfile#l27]
>  executes {{apt-get install -y software-properties-common}} without an 
> initial {{apt-get update}}.
> This can fail depending on the local Docker build cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12513) Dockerfile lacks initial `apt-get update`

2015-10-26 Thread Akihiro Suda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akihiro Suda updated HADOOP-12513:
--
Attachment: HADOOP-12513.patch

> Dockerfile lacks initial `apt-get update`
> -
>
> Key: HADOOP-12513
> URL: https://issues.apache.org/jira/browse/HADOOP-12513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akihiro Suda
>Priority: Trivial
> Attachments: HADOOP-12513.patch
>
>
> [Dockerfile|https://github.com/apache/hadoop/blob/1aa735c188a308ca608694546c595e3c51f38612/dev-support/docker/Dockerfile#l27]
>  executes {{apt-get install -y software-properties-common}} without an 
> initial {{apt-get update}}.
> This can fail depending on the local Docker build cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973815#comment-14973815
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


Steve, thank you for taking a look.

{quote}
Why so many duplicate guice servlet context classes?
{quote}

They are used to share a com.google.inject.Injector object within the test 
classes. The test cases which extend JerseyTest are initialized in their 
constructor, which accepts a guice servlet context class.
{code}
  public TestAMWebServices() {
    super(new WebAppDescriptor.Builder(
        "org.apache.hadoop.mapreduce.v2.app.webapp")
        .contextListenerClass(GuiceServletConfig.class) // <-- servlet context class
        .filterClass(com.google.inject.servlet.GuiceFilter.class)
        .contextPath("jersey-guice-filter").servletPath("/").build());
  }
{code}

The guice servlet context classes are used to initialize JerseyTest and servlet 
containers with guice's DI: 
{code}
private Injector injector = Guice.createInjector(new ServletModule() {
  @Override
  protected void configureServlets() {
    appContext = new MockAppContext(0, 1, 1, 1);
    appContext.setBlacklistedNodes(Sets.newHashSet("badnode1", "badnode2"));
    bind(JAXBContextResolver.class);
    bind(AMWebServices.class);
    bind(GenericExceptionHandler.class);

    serve("/*").with(GuiceContainer.class);
  }
});

public class GuiceServletConfig extends GuiceServletContextListener {
  @Override
  protected Injector getInjector() {
    return injector;
  }
}
{code}

The latest patch fixes test failures caused by a change in the initialization 
sequence after upgrading jersey-test-framework-grizzly2 to 1.13 or later. The 
failures happen because grizzly2 started to use reflection in 2.2.16, and the 
logic that creates the ServletModule in WebappContext doesn't handle the 
[formal 
parameter|https://issues.apache.org/jira/browse/HADOOP-9613?focusedCommentId=14573457&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14573457]
 of an inner class.

I thought it would be better to make these inner classes static, but that 
causes test failures because some tests change the state of the servlets, and 
that state persists after the tests finish. That's why I changed 
GuiceServletConfig to a normal class (not an inner class), so that its module 
can be re-initialized for each test case as follows.
{code}
  static {
GuiceServletConfig.injector = Guice.createInjector(new WebServletModule());
  }

  @Before
  @Override
  public void setUp() throws Exception {
super.setUp();
GuiceServletConfig.injector = Guice.createInjector(new WebServletModule());
  }
{code}
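
For reference, a minimal sketch of the standalone class described above, 
assuming the injector is simply held in a mutable static field that each test 
overwrites in setUp() (names follow the snippets above):
{code}
import com.google.inject.Injector;
import com.google.inject.servlet.GuiceServletContextListener;

// Minimal sketch: the injector lives in a mutable static field so that each
// test case can replace it before the servlet container starts.
public class GuiceServletConfig extends GuiceServletContextListener {

  static Injector injector;

  @Override
  protected Injector getInjector() {
    return injector;
  }
}
{code}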

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14973988#comment-14973988
 ] 

Steve Loughran commented on HADOOP-12457:
-

this might be a locale issue. 

{code}
* by searching for 68–95–99.7 rule. We flag an RPC as slow RPC
{code}

Looking at the line, I think they are long hyphens "–" not minus signs "-".

If you supply a patch to change the symbols I'll +1 it
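
For illustration, such a patch could be as simple as replacing the non-ASCII 
dashes in the Server.java javadoc with plain hyphens (a sketch of the change, 
not the attached patch):
{code}
// Before: non-ASCII en-dashes fail javadoc when the build locale is ASCII:
 * by searching for 68–95–99.7 rule. We flag an RPC as slow RPC
// After: ASCII-only replacement:
 * by searching for the 68-95-99.7 rule. We flag an RPC as slow RPC
{code}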

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-12457:
-

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973992#comment-14973992
 ] 

Hadoop QA commented on HADOOP-12040:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 4 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 23s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
57s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 54s 
{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk cannot run 
convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 8s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 55s 
{color} | {color:red} Patch generated 4 new checkstyle issues in root (total 
was 118, now 119). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 10s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 42s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 53m 37s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 27s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 9s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 50m 20s 
{color} | {color:green} hadoop-hdfs in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-hdfs-client in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 56 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 169m 8s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | 
hadoop.hdfs.server.datanode.TestBlockRecovery |
|   | 

[jira] [Commented] (HADOOP-12509) org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs failing

2015-10-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973997#comment-14973997
 ] 

Steve Loughran commented on HADOOP-12509:
-

That's part of the problem: the stack trace was missing. Hence the patch.

The Jenkins run which saw this was on Saturday; it seems to have been deleted 
already. I still have the page in my browser, so I can get a PDF of it, which 
I'm about to attach.

As to fixing it: yes, I think getting this patch in would be a start.

> org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs failing
> ---
>
> Key: HADOOP-12509
> URL: https://issues.apache.org/jira/browse/HADOOP-12509
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12509-001.patch
>
>
> Failure of Jenkins in trunk, test 
> {{org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-12457:
---
Attachment: HADOOP-12457.00.patch

Thanks [~ozawa] for the report and thanks [~ste...@apache.org] for the comment. 
Attaching a patch.

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
> Attachments: HADOOP-12457.00.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12513) Dockerfile lacks initial `apt-get update`

2015-10-26 Thread Akihiro Suda (JIRA)
Akihiro Suda created HADOOP-12513:
-

 Summary: Dockerfile lacks initial `apt-get update`
 Key: HADOOP-12513
 URL: https://issues.apache.org/jira/browse/HADOOP-12513
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akihiro Suda
Priority: Trivial
 Attachments: HADOOP-12513.patch

[Dockerfile|https://github.com/apache/hadoop/blob/1aa735c188a308ca608694546c595e3c51f38612/dev-support/docker/Dockerfile#l27]
 executes {{apt-get install -y software-properties-common}} without an initial 
{{apt-get update}}.

This can fail depending on the local Docker build cache.
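
A sketch of the conventional fix (the attached patch may differ): run 
{{apt-get update}} in the same {{RUN}} layer as the install, so a cached layer 
cannot leave the package index missing or stale.
{code}
# Sketch only: refresh the package index in the same layer as the install.
RUN apt-get update && apt-get install -y software-properties-common
{code}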




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12040:
---
Attachment: HADOOP-12040-v2.patch

Rebased the patch on trunk.

> Adjust inputs order for the decode API in raw erasure coder
> ---
>
> Key: HADOOP-12040
> URL: https://issues.apache.org/jira/browse/HADOOP-12040
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: HDFS-7285
>
> Attachments: HADOOP-12040-HDFS-7285-v1.patch, HADOOP-12040-v2.patch
>
>
> Currently we use the parity units + data units order for the inputs, 
> erasedIndexes and outputs parameters in the decode call in the raw erasure 
> coder, which was inherited from HDFS-RAID due to constraints imposed by 
> {{GaloisField}}. As [~zhz] pointed out and [~hitliuyi] agreed, we'd better 
> change the order to make it natural for HDFS usage, where data blocks usually 
> come before parity blocks in a group. Doing this would avoid some tricky 
> reordering logic.
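
To illustrate the ordering change (the {{decode}} parameter names come from 
the description above; the RS(6,3) layout is an assumed example):
{code}
// Illustrative only, for an RS(6,3) schema with data units d0..d5 and
// parity units p0..p2:
// Old (HDFS-RAID style) input order:  [p0, p1, p2, d0, d1, d2, d3, d4, d5]
// New (natural HDFS)   input order:   [d0, d1, d2, d3, d4, d5, p0, p1, p2]
decoder.decode(inputs, erasedIndexes, outputs);
{code}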



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11400) GraphiteSink does not reconnect to Graphite after 'broken pipe'

2015-10-26 Thread Manish Malhotra (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973870#comment-14973870
 ] 

Manish Malhotra commented on HADOOP-11400:
--

Thanks Kamil for the patch!!

Though I have a few questions regarding the MetricsSystem in general (so maybe 
someone can also please comment), and for GraphiteSink:

1. Case: the target Graphite server is down or not reachable.

If putMetrics throws an exception, the client gets a "broken pipe" exception 
which propagates up to MetricsSinkAdapter. The Hadoop MetricsSystem has a 
notion of retries and wait-time, but once the retries are exhausted it simply 
stops using the failed sink. Even if the target server comes back up a few 
minutes later, it doesn't try to publish metrics again.

I tested this scenario with GraphiteSink.

My understanding of this flow might be wrong :), but if not, then I think the 
best place to fix this is in the MetricsSystem framework: it should re-check 
sink availability every few seconds/minutes, based on the config.


2. GraphiteSink uses a single TCP connection; should it use a TCP connection 
pool?
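
For question 1, a sketch of the reconnect-on-failure idea (illustrative only, 
not the committed patch; {{serverHost}}/{{serverPort}} are assumed 
configuration fields):
{code}
// Sketch: drop the broken connection on flush failure so the next call
// opens a fresh socket instead of writing into a dead stream.
private Writer writer;

private void ensureConnected() throws IOException {
  if (writer == null) {
    Socket socket = new Socket(serverHost, serverPort);
    writer = new OutputStreamWriter(socket.getOutputStream(),
        StandardCharsets.UTF_8);
  }
}

@Override
public void flush() {
  try {
    ensureConnected();
    writer.flush();
  } catch (IOException e) {
    if (writer != null) {
      try { writer.close(); } catch (IOException ignored) { }
      writer = null; // discard the broken connection; next call reconnects
    }
    throw new MetricsException("Error flushing metrics", e);
  }
}
{code}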

Regards,
Manish

> GraphiteSink does not reconnect to Graphite after 'broken pipe'
> ---
>
> Key: HADOOP-11400
> URL: https://issues.apache.org/jira/browse/HADOOP-11400
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Affects Versions: 2.5.1, 2.6.0
>Reporter: Kamil Gorlo
>Assignee: Kamil Gorlo
> Fix For: 2.7.0
>
> Attachments: HADOOP-11400.patch
>
>
> I see that after a network error GraphiteSink does not reconnect to the 
> Graphite server, and in effect metrics are not sent. 
> Here is the stack trace I see (this is from the nodemanager):
> 2014-12-11 16:39:21,655 ERROR 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Got sink exception, retry 
> in 4806ms
> org.apache.hadoop.metrics2.MetricsException: Error flushing metrics
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:120)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:129)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> Caused by: java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
> at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
> at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
> at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
> at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:118)
> ... 5 more
> 2014-12-11 16:39:26,463 ERROR 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter: Got sink exception and 
> over retry limit, suppressing further error messages
> org.apache.hadoop.metrics2.MetricsException: Error flushing metrics
> at 
> org.apache.hadoop.metrics2.sink.GraphiteSinkFixed.flush(GraphiteSinkFixed.java:120)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:184)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.consume(MetricsSinkAdapter.java:43)
> at 
> org.apache.hadoop.metrics2.impl.SinkQueue.consumeAll(SinkQueue.java:87)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter.publishMetricsFromQueue(MetricsSinkAdapter.java:129)
> at 
> org.apache.hadoop.metrics2.impl.MetricsSinkAdapter$1.run(MetricsSinkAdapter.java:88)
> Caused by: java.net.SocketException: Broken pipe
> at java.net.SocketOutputStream.socketWrite0(Native Method)
> at 
> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113)
> at java.net.SocketOutputStream.write(SocketOutputStream.java:159)
> at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
> at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
> at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
> at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
> at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
> at 
> 

[jira] [Commented] (HADOOP-12513) Dockerfile lacks initial `apt-get update`

2015-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14973800#comment-14973800
 ] 

Hadoop QA commented on HADOOP-12513:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
28s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 52s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-26 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768662/HADOOP-12513.patch |
| JIRA Issue | HADOOP-12513 |
| Optional Tests |  asflicense  shellcheck  |
| uname | Linux 382b5e3eddbd 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-b9c369f/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 1aa735c |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| shellcheck | v0.4.1 |
| Max memory used | 32MB |
| Powered by | Apache Yetus   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7931/console |


This message was automatically generated.



> Dockerfile lacks initial `apt-get update`
> -
>
> Key: HADOOP-12513
> URL: https://issues.apache.org/jira/browse/HADOOP-12513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akihiro Suda
>Priority: Trivial
> Attachments: HADOOP-12513.patch
>
>
> [Dockerfile|https://github.com/apache/hadoop/blob/1aa735c188a308ca608694546c595e3c51f38612/dev-support/docker/Dockerfile#l27]
>  executes {{apt-get install -y software-properties-common}} without an 
> initial {{apt-get update}}.
> This can fail depending on the local Docker build cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12040:
---
Fix Version/s: (was: HDFS-7285)

> Adjust inputs order for the decode API in raw erasure coder
> ---
>
> Key: HADOOP-12040
> URL: https://issues.apache.org/jira/browse/HADOOP-12040
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12040-HDFS-7285-v1.patch, HADOOP-12040-v2.patch
>
>
> Currently we use the parity units + data units order for the inputs, 
> erasedIndexes and outputs parameters in the decode call in the raw erasure 
> coder, which was inherited from HDFS-RAID due to constraints imposed by 
> {{GaloisField}}. As [~zhz] pointed out and [~hitliuyi] agreed, we'd better 
> change the order to make it natural for HDFS usage, where data blocks usually 
> come before parity blocks in a group. Doing this would avoid some tricky 
> reordering logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12512) hadoop fs -ls / fails with below error when we use Custom -Dhadoop.root.logger

2015-10-26 Thread Prabhu Joseph (JIRA)
Prabhu Joseph created HADOOP-12512:
--

 Summary: hadoop fs -ls / fails with below error when we use Custom 
-Dhadoop.root.logger 
 Key: HADOOP-12512
 URL: https://issues.apache.org/jira/browse/HADOOP-12512
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.7.0
Reporter: Prabhu Joseph



hadoop fs -ls / fails with the error below when a custom -Dhadoop.root.logger 
is used that creates a Configuration object and adds the default resource 
custom-conf.xml with quiet = false.
custom-conf.xml is an optional configuration file.

Exception in thread "main" java.lang.RuntimeException: custom-conf.xml not found
at
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2612)
at
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2531)
at
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2444)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1156)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1128)
at
org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1464)
at
org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
at
org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
at
org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at
org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)



ISSUE:
##

There is a logic issue in the Configuration class and its defaultResources 
list.

Configuration is shared by classes: it keeps a shared list of default 
resources added by those classes.

If class A wants resources x, y, z and marks them all optional via the quiet 
flag, Configuration loads each one if it is present, skips it otherwise, and 
adds them all to the list.

Now the shared list, i.e. defaultResources, contains x, y, z.

Now if class B wants resource x and marks it mandatory, loadResources scans 
the entire list and treats every entry as mandatory, so the scan fails on y.

Here A is the custom class and B is FsShell.

FsShell ends up treating custom-conf.xml as mandatory and fails.

1. The mandatory/optional flag has to be per-resource. [OR]
2. defaultResources should not be shared.

Both of these look complex; a simpler fix is the one below.

When loadResource initially skips a resource because it is not found, it 
should remove the entry from the defaultResources list as well. There is no 
use in keeping a resource that is not on the classpath in the list.
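
A minimal reproduction of this scenario might look like the following (the 
{{Configuration}} method names are real; the resource name is the reporter's 
example):
{code}
// Class A registers an optional resource that is not on the classpath.
Configuration.addDefaultResource("custom-conf.xml");

// Class B later creates a Configuration that treats resources as mandatory.
Configuration conf = new Configuration();
conf.setQuietMode(false);
conf.set("foo", "bar"); // triggers loadResources() ->
                        // RuntimeException: custom-conf.xml not found
{code}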


CODE CHANGE:  class org.apache.hadoop.conf.Configuration


private Resource loadResource(Properties properties, Resource wrapper,
    boolean quiet) {
  ...
  if (root == null) {
    if (doc == null) {
      if (quiet) {
        defaultResources.remove(resource);  // FIX: during the skip, remove the
                                            // resource from the shared list
        return null;
      }
      throw new RuntimeException(resource + " not found");
    }
    root = doc.getDocumentElement();
  }
  ...
}


Tested after the code fix; it ran successfully.







--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974855#comment-14974855
 ] 

Steve Loughran commented on HADOOP-12457:
-

+1

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11149) TestZKFailoverController times out

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11149:

Attachment: HADOOP-11149-002.patch

patch -002 adds the check for null in the {{stop()}} method
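
A sketch of the sort of guard involved (the guarded field name here is 
hypothetical; the real patch protects whatever may still be null when 
{{stop()}} runs after a failed setup):
{code}
// Sketch only -- 'zkfc' is a hypothetical field name for illustration.
public void stop() {
  if (zkfc != null) {
    zkfc.stop();
  }
}
{code}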

> TestZKFailoverController times out
> --
>
> Key: HADOOP-11149
> URL: https://issues.apache.org/jira/browse/HADOOP-11149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.0, 2.5.1, 3.0.0
> Environment: Jenkins
>Reporter: Rajat Jain
>Assignee: Steve Loughran
> Attachments: HADOOP-11149-001.patch, HADOOP-11149-002.patch
>
>
> {code}
> Running org.apache.hadoop.ha.TestZKFailoverController
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.875 sec 
> <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
> testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController)  Time 
> elapsed: 25.045 sec  <<< ERROR!
> java.lang.Exception: test timed out after 25000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1621)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
>   at 
> org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
>   at 
> org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover(TestZKFailoverController.java:448)
> Results :
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:448->Object.wait:-2 »  test 
> time...
> {code}
> Running on centos6.5



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12385) include nested stack trace in SaslRpcClient.getServerToken()

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12385:

Status: Open  (was: Patch Available)

> include nested stack trace in SaslRpcClient.getServerToken()
> 
>
> Key: HADOOP-12385
> URL: https://issues.apache.org/jira/browse/HADOOP-12385
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12385-001.patch, HADOOP-12385-002.patch
>
>
> The {{SaslRpcClient.getServerToken()}} method loses the stack trace when an 
> attempt to instantiate a {{TokenSelector}} fails. It should include it in the 
> generated exception.
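
An illustrative pattern for the fix (a sketch, not the attached patch; the 
instantiation site is assumed):
{code}
// Keep the original exception as the cause rather than discarding it.
try {
  tokenSelector = tokenInfo.value().newInstance();
} catch (InstantiationException | IllegalAccessException e) {
  throw new IOException(
      "Failed to instantiate TokenSelector " + tokenInfo.value() + ": " + e, e);
}
{code}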



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12501) Enable SwiftNativeFileSystem to preserve user, group, permission

2015-10-26 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975120#comment-14975120
 ] 

Chen He commented on HADOOP-12501:
--

Thank you for the suggestion, [~steve_l]

"though I think it's dangerous as people may thing those permissions may 
actually apply. "

Actually, I have another idea to enable swift driver to do permission check, 
then the blobstore looks more like a real filesystem. The idea about changing 
'distcp' is a great solution. IMHO, it could be more helpful if we find a way 
to let the '-p' option works for all filesystem implementations.  

> Enable SwiftNativeFileSystem to preserve user, group, permission
> 
>
> Key: HADOOP-12501
> URL: https://issues.apache.org/jira/browse/HADOOP-12501
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/swift
>Affects Versions: 2.7.1
>Reporter: Chen He
>Assignee: Chen He
>
> Currently, if a user copies a file/dir from localFS or HDFS to the swift 
> object store, u/g/p will be gone. There should be a way to preserve u/g/p. It 
> would benefit large numbers of files/dirs transferred between HDFS/localFS 
> and the Swift object store. We also need to be careful, since Hadoop prevents 
> general users from changing u/g/p, especially if Kerberos is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-26 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: Patch Available  (was: In Progress)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire a lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> 

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-26 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: In Progress  (was: Patch Available)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire a lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> 

[jira] [Commented] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975457#comment-14975457
 ] 

Kai Zheng commented on HADOOP-12040:


[~hitliuyi] and [~zhz], would you help review this one? It is heavily relied 
on by the ISA-L and new Java coders. Thanks.

> Adjust inputs order for the decode API in raw erasure coder
> ---
>
> Key: HADOOP-12040
> URL: https://issues.apache.org/jira/browse/HADOOP-12040
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12040-HDFS-7285-v1.patch, HADOOP-12040-v2.patch, 
> HADOOP-12040-v3.patch
>
>
> Currently we use the parity units + data units order for the inputs, 
> erasedIndexes and outputs parameters in the decode call in the raw erasure 
> coder, which was inherited from HDFS-RAID due to constraints imposed by 
> {{GaloisField}}. As [~zhz] pointed out and [~hitliuyi] agreed, we'd better 
> change the order to make it natural for HDFS usage, where data blocks usually 
> come before parity blocks in a group. Doing this would avoid some tricky 
> reordering logic.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12513) Dockerfile lacks initial 'apt-get update'

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14974845#comment-14974845
 ] 

Hudson commented on HADOOP-12513:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #539 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/539/])
HADOOP-12513. Dockerfile lacks initial 'apt-get update'. Contributed by (ozawa: 
rev 123b3db743a86aa18e46ec44a08f7b2e7c7f6350)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/docker/Dockerfile


> Dockerfile lacks initial 'apt-get update'
> -
>
> Key: HADOOP-12513
> URL: https://issues.apache.org/jira/browse/HADOOP-12513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akihiro Suda
>Assignee: Akihiro Suda
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12513.patch
>
>
> [Dockerfile|https://github.com/apache/hadoop/blob/1aa735c188a308ca608694546c595e3c51f38612/dev-support/docker/Dockerfile#l27]
>  executes {{apt-get install -y software-properties-common}} without an 
> initial {{apt-get update}}.
> This can fail depending on the local Docker build cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12472:

Status: Patch Available  (was: Open)

> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the asser
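
A minimal sketch of the hardened assertion along those lines, assuming JUnit's 
{{org.junit.Assert}}; the committed patch may differ in detail:

{code}
public static void assertExceptionContains(String expectedText, Throwable t) {
  Assert.assertNotNull("Null Throwable", t);
  // Use toString() rather than getMessage(), which may legally be null.
  String msg = t.toString();
  // Guard against a subclass overriding toString() to return null.
  Assert.assertNotNull("Null Throwable.toString() value", msg);
  if (!msg.contains(expectedText)) {
    throw new AssertionError("Expected to find '" + expectedText
        + "' but got unexpected exception: " + msg, t);
  }
}
{code}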



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12514) Make static fields in GenericTestUtils for assertExceptionContains() package-private and final

2015-10-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12514:
---
Summary: Make static fields in GenericTestUtils for 
assertExceptionContains() package-private and final  (was: Make )

> Make static fields in GenericTestUtils for assertExceptionContains() 
> package-private and final
> --
>
> Key: HADOOP-12514
> URL: https://issues.apache.org/jira/browse/HADOOP-12514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12514.000.patch
>
>
> This is a follow up of [HADOOP-12472].
> It makes sense to make the following static fields package-private instead of 
> protected, as they are for test purposes (the protected keyword makes more 
> sense for sub-classes).
> -  protected static String E_NULL_THROWABLE = "Null Throwable";
> -  protected static String E_NULL_THROWABLE_STRING = "Null 
> Throwable.toString() value";
> -  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
> exception";
> Meanwhile, we may need to make them final.
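
Concretely, the proposed declarations would look like the following sketch, 
assuming the fields stay in {{GenericTestUtils}} beside the tests that 
reference them:

{code}
// Package-private (no access modifier) and final: visible to tests in
// the same package, but no longer part of any protected API surface.
static final String E_NULL_THROWABLE = "Null Throwable";
static final String E_NULL_THROWABLE_STRING = "Null Throwable.toString() value";
static final String E_UNEXPECTED_EXCEPTION = "but got unexpected exception";
{code}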



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12514) Make

2015-10-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12514:
---
Status: Open  (was: Patch Available)

> Make 
> -
>
> Key: HADOOP-12514
> URL: https://issues.apache.org/jira/browse/HADOOP-12514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12514.000.patch
>
>
> This is a follow up of [HADOOP-12472].
> It makes sense to make the following static fields package-private instead of 
> protected, as they are for test purposes (the protected keyword makes more 
> sense for sub-classes).
> -  protected static String E_NULL_THROWABLE = "Null Throwable";
> -  protected static String E_NULL_THROWABLE_STRING = "Null 
> Throwable.toString() value";
> -  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
> exception";
> Meanwhile, we may need to make them final.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12514) Make

2015-10-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12514:
---
Attachment: HADOOP-12514.000.patch

> Make 
> -
>
> Key: HADOOP-12514
> URL: https://issues.apache.org/jira/browse/HADOOP-12514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12514.000.patch
>
>
> This is a follow up of [HADOOP-12472].
> It makes sense to make the following static fields package-private instead of 
> protected, as they are for test purposes (the protected keyword makes more 
> sense for sub-classes).
> -  protected static String E_NULL_THROWABLE = "Null Throwable";
> -  protected static String E_NULL_THROWABLE_STRING = "Null 
> Throwable.toString() value";
> -  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
> exception";
> Meanwhile, we may need to make them final.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12514) Make

2015-10-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12514:
---
Attachment: (was: HADOOP-12514.000.patch)

> Make 
> -
>
> Key: HADOOP-12514
> URL: https://issues.apache.org/jira/browse/HADOOP-12514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12514.000.patch
>
>
> This is a follow up of [HADOOP-12472].
> It makes sense to make the following static fields package-private instead of 
> protected, as they are for test purposes (the protected keyword makes more 
> sense for sub-classes).
> -  protected static String E_NULL_THROWABLE = "Null Throwable";
> -  protected static String E_NULL_THROWABLE_STRING = "Null 
> Throwable.toString() value";
> -  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
> exception";
> Meanwhile, we may need to make them final.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-10-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975003#comment-14975003
 ] 

Steve Loughran commented on HADOOP-12038:
-

assigning to [~airbots] as I have no time to work on this

> SwiftNativeOutputStream should check whether a file exists or not before 
> deleting
> -
>
> Key: HADOOP-12038
> URL: https://issues.apache.org/jira/browse/HADOOP-12038
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Chen He
>Assignee: Chen He
>Priority: Minor
> Attachments: HADOOP-12038.000.patch
>
>
> 15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
> /tmp/hadoop-root/output-3695386887711395289.tmp
> It should check whether the file exists or not before deleting. 
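
A minimal sketch of the suggested guard; the field and method names are 
assumptions, not the actual SwiftNativeOutputStream code:

{code}
// Assumed context: a backing temp File and a commons-logging Log field.
private void deleteBackupFile(File backupFile) {
  // Only attempt the delete when the file is actually present, so a
  // missing temp file no longer triggers the "Could not delete" warning.
  if (backupFile.exists() && !backupFile.delete()) {
    LOG.warn("Could not delete " + backupFile);
  }
}
{code}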



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975066#comment-14975066
 ] 

Jing Zhao commented on HADOOP-12472:


Oops, I had not seen this comment before committing the patch. Maybe we can do 
this as a follow-on?

> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the asser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8602) Passive mode support for FTPFileSystem

2015-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975116#comment-14975116
 ] 

Hadoop QA commented on HADOOP-8602:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 51s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 38s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 8s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 0s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 45s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 31s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 49m 46s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.ipc.TestDecayRpcScheduler |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-26 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12749855/HADOOP-8602.009.patch 
|
| JIRA Issue | HADOOP-8602 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  xml  |
| uname | Linux 47dd7181962a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Created] (HADOOP-12514) Make

2015-10-26 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-12514:
--

 Summary: Make 
 Key: HADOOP-12514
 URL: https://issues.apache.org/jira/browse/HADOOP-12514
 Project: Hadoop Common
  Issue Type: Task
Reporter: Mingliang Liu
Assignee: Mingliang Liu
Priority: Minor


This is a follow up of [HADOOP-12472].

It makes sense to make the following static fields package-private instead of 
protected, as they are for test purposes (the protected keyword makes more 
sense for sub-classes).

-  protected static String E_NULL_THROWABLE = "Null Throwable";
-  protected static String E_NULL_THROWABLE_STRING = "Null Throwable.toString() 
value";
-  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
exception";

Meanwhile, we may need to make them final.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12514) Make static fields in GenericTestUtils for assertExceptionContains() package-private and final

2015-10-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12514:
---
Status: Patch Available  (was: Open)

> Make static fields in GenericTestUtils for assertExceptionContains() 
> package-private and final
> --
>
> Key: HADOOP-12514
> URL: https://issues.apache.org/jira/browse/HADOOP-12514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12514.000.patch
>
>
> This is a follow up of [HADOOP-12472].
> It makes sense to make the following static fields package-private instead of 
> protected, as they are for test purposes (the protected keyword makes more 
> sense for sub-classes).
> -  protected static String E_NULL_THROWABLE = "Null Throwable";
> -  protected static String E_NULL_THROWABLE_STRING = "Null 
> Throwable.toString() value";
> -  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
> exception";
> Meanwhile, we may need to make them final.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975183#comment-14975183
 ] 

Andrew Wang commented on HADOOP-12472:
--

Yep sounds good, let's take it to the follow-on JIRA.

> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the asser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12507) Move retry policy to hadoop-common-client

2015-10-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974848#comment-14974848
 ] 

Jing Zhao commented on HADOOP-12507:


+1 on the patch.

We need to update the title and description before committing to the feature 
branch, since the patch also moves the Writable part. Also, only the retry 
policy interface is moved; the real retry policy implementation has not been 
moved yet. Since we're doing the work in the feature branch, we can continue 
the work in separate JIRAs.

> Move retry policy to hadoop-common-client
> -
>
> Key: HADOOP-12507
> URL: https://issues.apache.org/jira/browse/HADOOP-12507
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HADOOP-12507.HADOOP-12499.001.patch
>
>
> The retry policy is used by both HDFS and YARN clients to implement 
> client-side HA failover. This jira proposes to move them to the 
> hadoop-common-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12508) delete fails with exception when lease is held on blob

2015-10-26 Thread Gaurav Kanade (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974999#comment-14974999
 ] 

Gaurav Kanade commented on HADOOP-12508:


[~cnauroth]

Yes, breaking the lease explicitly could potentially be an improvement. As of 
now I have tried to follow the principle of minimum disruption to existing 
code. As can be seen, we never hit this scenario until this new HDP 2.3 test 
was written, though the scenario has been valid since the beginning, so it 
might be a pretty rare case.

In light of the recent weeks' experience, I believe we will take another look 
at the WASB driver code from the point of view of robustness to concurrent 
processes and test coverage; at that time we can revisit this issue.

Meanwhile, on the test front, I am planning to write a test that at least 
partially validates this patch - i.e. one that checks we are catching the 
exception appropriately - and to submit a new patch shortly. We still will not 
be able to test the handling part, as that will require extra infra.

Adding [~onpduo], [~pravinmittal], [~linchan] for their thoughts on your 
suggestion.

> delete fails with exception when lease is held on blob
> --
>
> Key: HADOOP-12508
> URL: https://issues.apache.org/jira/browse/HADOOP-12508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
>Priority: Blocker
> Attachments: HADOOP-12508.01.patch, HADOOP-12508.02.patch
>
>
> The delete function as implemented by AzureNativeFileSystemStore attempts the 
> delete without a lease. In most cases this works, but in the case of a 
> dangling lease - say, left by a killed process and lingering for a short 
> period - a delete attempted during this period simply crashes. This fix 
> addresses the situation by re-attempting the delete after acquiring the 
> lease in this case.
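
A sketch of the described retry follows; the helper names and the lease API 
used here are illustrative assumptions, not the exact WASB patch:

{code}
void deleteWithDanglingLeaseHandling(String key) throws IOException {
  try {
    store.delete(key);                 // first attempt, no lease held
  } catch (AzureException e) {
    if (!isLeaseConflict(e)) {         // hypothetical helper inspecting the
      throw e;                         // wrapped StorageException error code
    }
    // A dangling lease is still held on the blob: acquire the lease and
    // re-attempt the delete under it.
    SelfRenewingLease lease = store.acquireLease(key);
    try {
      store.delete(key, lease);
    } finally {
      lease.free();
    }
  }
}
{code}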



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12038:

Assignee: Chen He  (was: Steve Loughran)

> SwiftNativeOutputStream should check whether a file exists or not before 
> deleting
> -
>
> Key: HADOOP-12038
> URL: https://issues.apache.org/jira/browse/HADOOP-12038
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Chen He
>Assignee: Chen He
>Priority: Minor
> Attachments: HADOOP-12038.000.patch
>
>
> 15/05/27 15:27:03 WARN snative.SwiftNativeOutputStream: Could not delete 
> /tmp/hadoop-root/output-3695386887711395289.tmp
> It should check whether the file exists or not before deleting. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12487) DomainSocket.close() assumes incorrect Linux behaviour

2015-10-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975022#comment-14975022
 ] 

Colin Patrick McCabe commented on HADOOP-12487:
---

bq. The /dev/null & dup2 mechanism works on Solaris but it doesn't work on 
Linux unfortunately, due to the brokenness in the handling of close() on a 
socket that's being used in an accept().

Thanks for the link.  Very interesting discussion.  It's usually a good idea to 
put background info on jiras in case the hyperlink goes away in the future.  To 
summarize the Linux kernel discussion in the thread you linked, it sounds like 
{{close}} does not break out of {{accept}} on Linux, but {{shutdown}} does.  
You refer to that as "brokenness," but some other people (including Eric 
Dumazet and Al Viro) give reasons for the behavior and defend it.

bq. However having said that the /dev/null & dup2 mechanism works on Solaris I 
can't come up with a race scenario where it's actually needed. DomainSocket 
encapsulates the underlying FD and DomainSocket invalidates itself on close, so 
I can't see how the FD can actually be used for anything, even if it is reused 
by an open operation in a different thread. If you can come up with a scenario 
involving DomainSocket then I'll investigate, thanks.

We have multiple threads operating on the same {{DomainSocket}} at once.  If 
thread #1 closes the domain socket, releasing the file descriptor number, 
thread #2, in separate non-domain-socket code, opens a new descriptor that 
reuses that number, and thread #3 does something with the old file descriptor number, 
thread #3 may stomp on thread #2.  I thought you understood this race based on 
your comment here: 
https://issues.apache.org/jira/browse/HADOOP-12487?focusedCommentId=14964205=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14964205

> DomainSocket.close() assumes incorrect Linux behaviour
> --
>
> Key: HADOOP-12487
> URL: https://issues.apache.org/jira/browse/HADOOP-12487
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: 2.7.1
> Environment: Linux Solaris
>Reporter: Alan Burlison
>Assignee: Alan Burlison
> Attachments: shutdown.c
>
>
> I'm getting a test failure in TestDomainSocket.java, in the 
> testSocketAcceptAndClose test. That test creates a socket which one thread 
> waits on in DomainSocket.accept() whilst a second thread sleeps for a short 
> time before closing the same socket with DomainSocket.close().
> DomainSocket.close() first calls shutdown0() on the socket before calling 
> close0() - both of those are thin wrappers around the corresponding libc socket 
> calls. DomainSocket.close() contains the following comment, explaining the 
> logic involved:
> {code}
>   // Calling shutdown on the socket will interrupt blocking system
>   // calls like accept, write, and read that are going on in a
>   // different thread.
> {code}
> Unfortunately that relies on non-standards-compliant Linux behaviour. I've 
> written a simple C test case that replicates the scenario above:
> # ThreadA opens, binds, listens and accepts on a socket, waiting for 
> connections.
> # Some time later ThreadB calls shutdown on the socket ThreadA is waiting in 
> accept on.
> Here is what happens:
> On Linux, the shutdown call in ThreadB succeeds and the accept call in 
> ThreadA returns with EINVAL.
> On Solaris, the shutdown call in ThreadB fails and returns ENOTCONN. ThreadA 
> continues to wait in accept.
> Relevant POSIX manpages:
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/accept.html
> http://pubs.opengroup.org/onlinepubs/9699919799/functions/shutdown.html
> The POSIX shutdown manpage says:
> "The shutdown() function shall cause all or part of a full-duplex connection 
> on the socket associated with the file descriptor socket to be shut down."
> ...
> "\[ENOTCONN] The socket is not connected."
> Page 229 & 303 of "UNIX System V Network Programming" say:
> "shutdown can only be called on sockets that have been previously connected"
> "The socket \[passed to accept that] fd refers to does not participate in the 
> connection. It remains available to receive further connect indications"
> That is pretty clear: sockets being waited on with accept are not connected 
> by definition. Nor is the accept socket connected when a client connects 
> to it; it is the socket returned by accept that is connected to the client. 
> Therefore the Solaris behaviour of failing the shutdown call is correct.
> In order to get the required behaviour of ThreadB causing ThreadA to exit the 
> accept call with an error, the correct way is for ThreadB to call close on 
> the socket that ThreadA is waiting on in accept.
> On Solaris, calling close in 

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-26 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: Patch Available  (was: In Progress)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire a lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> 

[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-26 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Status: In Progress  (was: Patch Available)

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire a lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> 

[jira] [Commented] (HADOOP-12451) Setting HADOOP_HOME explicitly should be allowed

2015-10-26 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975130#comment-14975130
 ] 

Karthik Kambatla commented on HADOOP-12451:
---

We should do an addendum. I'll try to get to this this week. 

> Setting HADOOP_HOME explicitly should be allowed
> 
>
> Key: HADOOP-12451
> URL: https://issues.apache.org/jira/browse/HADOOP-12451
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 2.7.1
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
>Priority: Blocker
> Fix For: 2.7.2
>
> Attachments: HADOOP-12451-branch-2.1.patch
>
>
> HADOOP-11464 reinstates cygwin support. In the process, it sets HADOOP_HOME 
> explicitly in hadoop-config.sh without checking if it has already been set. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975150#comment-14975150
 ] 

Mingliang Liu commented on HADOOP-12472:


Thanks [~andrew.wang] and [~jingzhao] for pointing this out.
Yes, it makes sense to make them package-private as they're for test purposes. I 
filed [HADOOP-12514] to track this.

> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the asser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12514) Make

2015-10-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12514:
---
Status: Patch Available  (was: Open)

> Make 
> -
>
> Key: HADOOP-12514
> URL: https://issues.apache.org/jira/browse/HADOOP-12514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12514.000.patch
>
>
> This is a follow up of [HADOOP-12472].
> It makes sense to make the following static fields package-private instead of 
> protected, as they are for test purposes (the protected keyword makes more 
> sense for sub-classes).
> -  protected static String E_NULL_THROWABLE = "Null Throwable";
> -  protected static String E_NULL_THROWABLE_STRING = "Null 
> Throwable.toString() value";
> -  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
> exception";
> Meanwhile, we may need to make them final.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12514) Make

2015-10-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12514:
---
Attachment: HADOOP-12514.000.patch

> Make 
> -
>
> Key: HADOOP-12514
> URL: https://issues.apache.org/jira/browse/HADOOP-12514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12514.000.patch
>
>
> This is a follow up of [HADOOP-12472].
> It makes sense to make the following static fields package-private instead of 
> protected, as they are for test purposes (the protected keyword makes more 
> sense for sub-classes).
> -  protected static String E_NULL_THROWABLE = "Null Throwable";
> -  protected static String E_NULL_THROWABLE_STRING = "Null 
> Throwable.toString() value";
> -  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
> exception";
> Meanwhile, we may need to make them final.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11149) TestZKFailoverController times out

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11149:

Status: Patch Available  (was: Open)

> TestZKFailoverController times out
> --
>
> Key: HADOOP-11149
> URL: https://issues.apache.org/jira/browse/HADOOP-11149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.1, 2.5.0, 3.0.0
> Environment: Jenkins
>Reporter: Rajat Jain
>Assignee: Steve Loughran
> Attachments: HADOOP-11149-001.patch, HADOOP-11149-002.patch
>
>
> {code}
> Running org.apache.hadoop.ha.TestZKFailoverController
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.875 sec 
> <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
> testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController)  Time 
> elapsed: 25.045 sec  <<< ERROR!
> java.lang.Exception: test timed out after 25000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1621)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
>   at 
> org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
>   at 
> org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover(TestZKFailoverController.java:448)
> Results :
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:448->Object.wait:-2 »  test 
> time...
> {code}
> Running on centos6.5



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12385) include nested stack trace in SaslRpcClient.getServerToken()

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12385:

Attachment: HADOOP-12385-003.patch

Patch -003 contains Chris's suggestion.

> include nested stack trace in SaslRpcClient.getServerToken()
> 
>
> Key: HADOOP-12385
> URL: https://issues.apache.org/jira/browse/HADOOP-12385
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12385-001.patch, HADOOP-12385-002.patch, 
> HADOOP-12385-003.patch
>
>
> The {{SaslRpcClient.getServerToken()}} method loses the stack traces when an 
> attempt to instantiate a {{TokenSelector}} fails. It should include them in 
> the generated exception.
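
A sketch of the proposed chaining; the surrounding code is simplified and the 
variable names are assumptions about the real method:

{code}
TokenSelector<?> tokenSelector;
try {
  tokenSelector = tokenInfo.value().newInstance();
} catch (InstantiationException | IllegalAccessException e) {
  // Pass 'e' as the cause so the nested stack trace is preserved in
  // the generated exception instead of being dropped.
  throw new IOException("Failed to instantiate TokenSelector "
      + tokenInfo.value() + ": " + e, e);
}
{code}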



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975057#comment-14975057
 ] 

Andrew Wang commented on HADOOP-12472:
--

We could make the static strings package-private and final, right? Otherwise 
LGTM, +1 pending that.

> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the asser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-12472:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1. I've committed this to trunk and branch-2. Thanks for the contribution, 
[~ste...@apache.org]!

> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the asser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12366) expose calculated paths

2015-10-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12366:
--
Affects Version/s: 3.0.0

> expose calculated paths
> ---
>
> Key: HADOOP-12366
> URL: https://issues.apache.org/jira/browse/HADOOP-12366
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-12366.00.patch
>
>
> It would be useful for 3rd party apps to know the locations of things when 
> hadoop is running without explicit path env vars set.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12472:

Status: Open  (was: Patch Available)

> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the asser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12178) NPE during handling of SASL setup if problem with SASL resolver class

2015-10-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974938#comment-14974938
 ] 

Steve Loughran commented on HADOOP-12178:
-

I'm happy with being robustly paranoid here. We have hit problems with nothing 
resembling a stack trace to track this down, and getting this patch in should 
be enough to stop that hitting anyone else.

> NPE during handling of SASL setup if problem with SASL resolver class
> -
>
> Key: HADOOP-12178
> URL: https://issues.apache.org/jira/browse/HADOOP-12178
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12178-001.patch
>
>
> If there's any problem in the constructor of {{SaslRpcClient}}, then the IPC 
> Client throws an NPE rather than forwarding the stack trace. This is because 
> the exception handler assumes that {{saslRpcClient}} is not null, i.e. that 
> the exception is related to the SASL setup itself.
> The exception handler needs to check for {{saslRpcClient}} being null and, if 
> so, rethrow the exception.
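
A minimal sketch of the null guard; the method and field layout are 
illustrative, not the actual IPC Client code:

{code}
// saslRpcClient may be null if its constructor threw before assignment.
private SaslRpcClient saslRpcClient;

private void handleSaslSetupFailure(IOException ex) throws IOException {
  if (saslRpcClient == null) {
    // The SaslRpcClient constructor itself failed: there is no SASL
    // state to inspect, so rethrow instead of NPEing on the null field.
    throw ex;
  }
  // Existing handling that dereferences saslRpcClient continues here.
}
{code}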



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974937#comment-14974937
 ] 

Colin Patrick McCabe commented on HADOOP-9613:
--

Looks like a good idea.  I assume you are targeting this only at trunk / 3.0 
based on the "target version" and the incompatibility discussion?

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.1.patch, HADOOP-9613.2.patch, HADOOP-9613.3.patch, 
> HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8, which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12507) Move retry policy and writable interfaces to hadoop-common-client

2015-10-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-12507:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: HADOOP-12499
   Status: Resolved  (was: Patch Available)

I've committed the patch to the HADOOP-12499 branch. Thanks Mingliang and Jing 
for the reviews.

> Move retry policy and writable interfaces to hadoop-common-client
> -
>
> Key: HADOOP-12507
> URL: https://issues.apache.org/jira/browse/HADOOP-12507
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Fix For: HADOOP-12499
>
> Attachments: HADOOP-12507.HADOOP-12499.001.patch
>
>
> The retry policy is used by both HDFS and YARN clients to implement 
> client-side HA failover. This jira proposes to move them to the 
> hadoop-common-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-26 Thread Duo Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Xu updated HADOOP-11685:

Attachment: HADOOP-11685.05.patch

> StorageException complaining " no lease ID" during HBase distributed log 
> splitting
> --
>
> Key: HADOOP-11685
> URL: https://issues.apache.org/jira/browse/HADOOP-11685
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-11685.01.patch, HADOOP-11685.02.patch, 
> HADOOP-11685.03.patch, HADOOP-11685.04.patch, HADOOP-11685.05.patch
>
>
> This is similar to HADOOP-11523, but in a different place. During HBase 
> distributed log splitting, multiple threads will access the same folder 
> called "recovered.edits". However, lots of places in our WASB code did not 
> acquire a lease and simply passed null to Azure storage, which caused this 
> issue.
> {code}
> 2015-02-26 03:21:28,871 WARN 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of 
> WALs/workernode4.hbaseproddm2001.g6.internal.cloudapp.net,60020,1422071058425-splitting/workernode4.hbaseproddm2001.g6.internal.cloudapp.net%2C60020%2C1422071058425.1424914216773
>  failed, returning error
> java.io.IOException: org.apache.hadoop.fs.azure.AzureException: 
> java.io.IOException
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.checkForErrors(HLogSplitter.java:633)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.access$000(HLogSplitter.java:121)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$OutputSink.finishWriting(HLogSplitter.java:964)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.finishWritingAndClose(HLogSplitter.java:1019)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:359)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:223)
>   at 
> org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:142)
>   at 
> org.apache.hadoop.hbase.regionserver.handler.HLogSplitterHandler.process(HLogSplitterHandler.java:79)
>   at 
> org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:128)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.hadoop.fs.azure.AzureException: java.io.IOException
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1477)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1862)
>   at 
> org.apache.hadoop.fs.azurenative.NativeAzureFileSystem.mkdirs(NativeAzureFileSystem.java:1812)
>   at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getRegionSplitEditsPath(HLogSplitter.java:502)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.createWAP(HLogSplitter.java:1211)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.getWriterAndPath(HLogSplitter.java:1200)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$LogRecoveredEditsOutputSink.append(HLogSplitter.java:1243)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.writeBuffer(HLogSplitter.java:851)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.doRun(HLogSplitter.java:843)
>   at 
> org.apache.hadoop.hbase.regionserver.wal.HLogSplitter$WriterThread.run(HLogSplitter.java:813)
> Caused by: java.io.IOException
>   at 
> com.microsoft.windowsazure.storage.core.Utility.initIOException(Utility.java:493)
>   at 
> com.microsoft.windowsazure.storage.blob.BlobOutputStream.close(BlobOutputStream.java:282)
>   at 
> org.apache.hadoop.fs.azurenative.AzureNativeFileSystemStore.storeEmptyFolder(AzureNativeFileSystemStore.java:1472)
>   ... 10 more
> Caused by: com.microsoft.windowsazure.storage.StorageException: There is 
> currently a lease on the blob and no lease ID was specified in the request.
>   at 
> com.microsoft.windowsazure.storage.StorageException.translateException(StorageException.java:163)
>   at 
> com.microsoft.windowsazure.storage.core.StorageRequest.materializeException(StorageRequest.java:306)
>   at 
> com.microsoft.windowsazure.storage.core.ExecutionEngine.executeWithRetry(ExecutionEngine.java:229)
>   at 
> 

[jira] [Updated] (HADOOP-12385) include nested stack trace in SaslRpcClient.getServerToken()

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12385:

Status: Patch Available  (was: Open)

> include nested stack trace in SaslRpcClient.getServerToken()
> 
>
> Key: HADOOP-12385
> URL: https://issues.apache.org/jira/browse/HADOOP-12385
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12385-001.patch, HADOOP-12385-002.patch, 
> HADOOP-12385-003.patch
>
>
> The {{SaslRpcClient.getServerToken()}} method loses the stack traces when an 
> attempt to instantiate a {{TokenSelector}} fails. It should include them in 
> the generated exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974994#comment-14974994
 ] 

Steve Loughran commented on HADOOP-12472:
-

test failure is spurious

> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the asser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12509) org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs failing

2015-10-26 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12509?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975065#comment-14975065
 ] 

Daniel Templeton commented on HADOOP-12509:
---

Thanks for the test output.  I really see no reason for this test to have 
failed.

On your patch, I would recommend adding:

{code}
Assert.assertTrue("KeyProvider was created with existing keys",
    kp.getKeys().isEmpty());
{code}

after the KP creation.

Also, it seems kinda wrong to fix just one of the tests.  I've started a round 
of cleanup on the whole of TestKMS.  Seems like we should file a separate JIRA 
to clean up the tests, and leave this one for actually fixing the issue, should 
we ever track it down.

> org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs failing
> ---
>
> Key: HADOOP-12509
> URL: https://issues.apache.org/jira/browse/HADOOP-12509
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
> Environment: ASF Jenkins
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-12509-001.patch, Hadoop-common-trunk-Java8 #594 
> test - testKeyACLs [Jenkins].pdf
>
>
> Failure of Jenkins in trunk, test 
> {{org.apache.hadoop.crypto.key.kms.server.TestKMS.testKeyACLs}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975232#comment-14975232
 ] 

Hudson commented on HADOOP-12472:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1323 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1323/])
HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust. (jing9: rev 
a01a209fbed33b2ecaf9e736631e64abefae01aa)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as an NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the assert



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975464#comment-14975464
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


You're right. 

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-9613:
---
Attachment: HADOOP-9613.006.incompatible.patch

Attaching a patch to fix a failure of TestTimelineClient.

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12507) Move retry policy and writable interfaces to hadoop-common-client

2015-10-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-12507:

Summary: Move retry policy and writable interfaces to hadoop-common-client  
(was: Move retry policy to hadoop-common-client)

> Move retry policy and writable interfaces to hadoop-common-client
> -
>
> Key: HADOOP-12507
> URL: https://issues.apache.org/jira/browse/HADOOP-12507
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Haohui Mai
>Assignee: Haohui Mai
> Attachments: HADOOP-12507.HADOOP-12499.001.patch
>
>
> The retry policy is used by both HDFS and YARN clients to implement 
> client-side HA failover. This jira proposes to move them to the 
> hadoop-common-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12513) Dockerfile lacks initial 'apt-get update'

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974910#comment-14974910
 ] 

Hudson commented on HADOOP-12513:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2476 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2476/])
HADOOP-12513. Dockerfile lacks initial 'apt-get update'. Contributed by (ozawa: 
rev 123b3db743a86aa18e46ec44a08f7b2e7c7f6350)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/docker/Dockerfile


> Dockerfile lacks initial 'apt-get update'
> -
>
> Key: HADOOP-12513
> URL: https://issues.apache.org/jira/browse/HADOOP-12513
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akihiro Suda
>Assignee: Akihiro Suda
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-12513.patch
>
>
> [Dockerfile|https://github.com/apache/hadoop/blob/1aa735c188a308ca608694546c595e3c51f38612/dev-support/docker/Dockerfile#l27]
>  executes {{apt-get install -y software-properties-common}} without an 
> initial {{apt-get update}}.
> This can fail depending on the local Docker build cache.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11149) TestZKFailoverController times out

2015-10-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11149:

Status: Open  (was: Patch Available)

> TestZKFailoverController times out
> --
>
> Key: HADOOP-11149
> URL: https://issues.apache.org/jira/browse/HADOOP-11149
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 2.5.1, 2.5.0, 3.0.0
> Environment: Jenkins
>Reporter: Rajat Jain
>Assignee: Steve Loughran
> Attachments: HADOOP-11149-001.patch
>
>
> {code}
> Running org.apache.hadoop.ha.TestZKFailoverController
> Tests run: 19, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 56.875 sec 
> <<< FAILURE! - in org.apache.hadoop.ha.TestZKFailoverController
> testGracefulFailover(org.apache.hadoop.ha.TestZKFailoverController)  Time 
> elapsed: 25.045 sec  <<< ERROR!
> java.lang.Exception: test timed out after 25000 milliseconds
>   at java.lang.Object.wait(Native Method)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.waitForActiveAttempt(ZKFailoverController.java:467)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.doGracefulFailover(ZKFailoverController.java:657)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.access$400(ZKFailoverController.java:61)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:602)
>   at 
> org.apache.hadoop.ha.ZKFailoverController$3.run(ZKFailoverController.java:599)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1621)
>   at 
> org.apache.hadoop.ha.ZKFailoverController.gracefulFailoverToYou(ZKFailoverController.java:599)
>   at 
> org.apache.hadoop.ha.ZKFCRpcServer.gracefulFailover(ZKFCRpcServer.java:94)
>   at 
> org.apache.hadoop.ha.TestZKFailoverController.testGracefulFailover(TestZKFailoverController.java:448)
> Results :
> Tests in error:
>   TestZKFailoverController.testGracefulFailover:448->Object.wait:-2 ยป  test 
> time...
> {code}
> Running on centos6.5



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12501) Enable SwiftNativeFileSystem to preserve user, group, permission

2015-10-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14974973#comment-14974973
 ] 

Steve Loughran commented on HADOOP-12501:
-

I see this use case, though I think it's dangerous, as people may think those 
permissions actually apply. There's the other issue that HDFS permissions are 
all ACL based; you need to store more than you think.

In unix-land, it's usually left to {{tar}} to do the permissions tracking. I 
don't think we quite have the equivalent in HDFS, do we? Maybe you'd want to do 
it differently, and have distcp build up an index file listing every file and 
its permissions, which could then be applied to the recovered data. That way, 
you'd have a backup story that worked with any back-end FS.
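A rough sketch of that index-file idea (the paths and index format are hypothetical, and this is not an existing distcp feature; assumes an initialized {{conf}}):

{code}
// Hypothetical: record owner/group/permission for every file under a root
// into a plain-text index that a restore step could replay via
// fs.setOwner()/fs.setPermission(). Note: ACLs would need extra entries.
FileSystem fs = FileSystem.get(conf);
try (BufferedWriter w = new BufferedWriter(new OutputStreamWriter(
    fs.create(new Path("/backup/.perm-index")), "UTF-8"))) {
  RemoteIterator<LocatedFileStatus> it = fs.listFiles(new Path("/data"), true);
  while (it.hasNext()) {
    FileStatus st = it.next();
    w.write(st.getPath() + "\t" + st.getOwner() + "\t"
        + st.getGroup() + "\t" + st.getPermission() + "\n");
  }
}
{code}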

> Enable SwiftNativeFileSystem to preserve user, group, permission
> 
>
> Key: HADOOP-12501
> URL: https://issues.apache.org/jira/browse/HADOOP-12501
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/swift
>Affects Versions: 2.7.1
>Reporter: Chen He
>Assignee: Chen He
>
> Currently, if a user copies a file/dir from localFS or HDFS to the Swift 
> object store, u/g/p will be gone. There should be a way to preserve u/g/p. It 
> would benefit transfers of a large number of files/dirs between HDFS/localFS 
> and the Swift object store. We also need to be careful, since Hadoop prevents 
> general users from changing u/g/p, especially if Kerberos is enabled.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12508) delete fails with exception when lease is held on blob

2015-10-26 Thread Gaurav Kanade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gaurav Kanade updated HADOOP-12508:
---
Attachment: HADOOP-12508.03.patch

> delete fails with exception when lease is held on blob
> --
>
> Key: HADOOP-12508
> URL: https://issues.apache.org/jira/browse/HADOOP-12508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
>Priority: Blocker
> Attachments: HADOOP-12508.01.patch, HADOOP-12508.02.patch, 
> HADOOP-12508.03.patch
>
>
> The delete function as implemented by AzureNativeFileSystemStore attempts the 
> delete without a lease. In most cases this works, but a killed process can 
> leave a dangling lease on the blob for a short period, and a delete attempted 
> during that period simply crashes. This fix addresses the situation by 
> re-attempting the delete after acquiring the lease in that case.
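Roughly, the fix takes the following shape (a sketch only; {{isLeaseConflict}} and the exact delete/lease signatures are illustrative, not the real hadoop-azure API):

{code}
try {
  store.delete(key);                   // normal path: no lease needed
} catch (IOException e) {
  if (isLeaseConflict(e)) {            // hypothetical check for the "lease on blob" error
    SelfRenewingLease lease = store.acquireLease(key);  // wait for the dangling lease
    try {
      store.delete(key, lease);        // retry the delete while holding the lease
    } finally {
      lease.free();
    }
  } else {
    throw e;
  }
}
{code}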



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12508) delete fails with exception when lease is held on blob

2015-10-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975419#comment-14975419
 ] 

Chris Nauroth commented on HADOOP-12508:


[~gouravk], thank you for adding the test, but this doesn't really cover 
execution of the new logic.  (The test passes with and without the main code 
change applied.)  I think the idea behind the test could be made to work if the 
lease was acquired in a background thread, and then the main JUnit thread 
attempted the delete.  We'd then expect the new logic to wait on its own lease 
acquisition and eventually complete the delete.
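Something like the following test shape could exercise it (the lease helpers are hypothetical placeholders for whatever the hadoop-azure test utilities expose):

{code}
@Test
public void testDeleteWhileLeaseHeldElsewhere() throws Exception {
  final Path blob = new Path("/testDeleteWithLease");
  fs.create(blob).close();
  Thread leaseHolder = new Thread(new Runnable() {
    @Override
    public void run() {
      holdLeaseBriefly(blob);   // hypothetical: acquire the blob lease, sleep, release
    }
  });
  leaseHolder.start();
  waitUntilLeaseHeld(blob);     // hypothetical barrier so the delete races a live lease
  // The new logic should block on its own lease acquisition and then succeed.
  assertTrue(fs.delete(blob, false));
  leaseHolder.join();
}
{code}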

> delete fails with exception when lease is held on blob
> --
>
> Key: HADOOP-12508
> URL: https://issues.apache.org/jira/browse/HADOOP-12508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
>Priority: Blocker
> Attachments: HADOOP-12508.01.patch, HADOOP-12508.02.patch, 
> HADOOP-12508.03.patch
>
>
> The delete function as implemented by AzureNativeFileSystemStore attempts the 
> delete without a lease. In most cases this works, but a killed process can 
> leave a dangling lease on the blob for a short period, and a delete attempted 
> during that period simply crashes. This fix addresses the situation by 
> re-attempting the delete after acquiring the lease in that case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12515) hadoop-kafka module doesn't resolve mockito related classes after imported into Intellij IDEA

2015-10-26 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-12515:
--

 Summary: hadoop-kafka module doesn't resolve mockito related 
classes after imported into Intellij IDEA
 Key: HADOOP-12515
 URL: https://issues.apache.org/jira/browse/HADOOP-12515
 Project: Hadoop Common
  Issue Type: Test
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor


When importing the Hadoop project into IntelliJ IDEA, it was found that the 
hadoop-kafka module doesn't resolve mockito-related classes. The following 
change addressed the issue.
{code}
--- a/hadoop-tools/hadoop-kafka/pom.xml
+++ b/hadoop-tools/hadoop-kafka/pom.xml
@@ -125,5 +125,10 @@
       <artifactId>junit</artifactId>
       <scope>test</scope>
     </dependency>
+    <dependency>
+      <groupId>org.mockito</groupId>
+      <artifactId>mockito-all</artifactId>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 </project>
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10949) metrics2 sink plugin for Apache Kafka

2015-10-26 Thread Babak Behzad (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10949?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975192#comment-14975192
 ] 

Babak Behzad commented on HADOOP-10949:
---

The hadoop-kafka artifact was missing from hadoop-tools-dist's pom.xml, which 
was preventing the compiled Kafka jar files from being copied to the target 
dist directory. The new patch adds it in order to complete the fix.

> metrics2 sink plugin for Apache Kafka
> -
>
> Key: HADOOP-10949
> URL: https://issues.apache.org/jira/browse/HADOOP-10949
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Babak Behzad
>Assignee: Babak Behzad
> Fix For: 3.0.0
>
> Attachments: HADOOP-10949-1.patch, HADOOP-10949-2.patch, 
> HADOOP-10949-4.patch, HADOOP-10949-5.patch, HADOOP-10949-6-1.patch, 
> HADOOP-10949-6.patch, HADOOP-10949.patch, HADOOP-10949.patch, 
> HADOOP-10949.patch, HADOOP-10949.patch, HADOOP-10949.patch, 
> HADOOP-10949.patch, HADOOP-10949.patch, HADOOP-10949.patch, 
> HADOOP-10949.patch, HADOOP-10949.patch, HADOOP-10949.patch
>
>
> Write a metrics2 sink plugin for Hadoop to send metrics directly to Apache 
> Kafka, in addition to the current Graphite 
> ([HADOOP-9704|https://issues.apache.org/jira/browse/HADOOP-9704]), Ganglia 
> and File sinks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10949) metrics2 sink plugin for Apache Kafka

2015-10-26 Thread Babak Behzad (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10949?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Babak Behzad updated HADOOP-10949:
--
Attachment: HADOOP-10949-6-1.patch

> metrics2 sink plugin for Apache Kafka
> -
>
> Key: HADOOP-10949
> URL: https://issues.apache.org/jira/browse/HADOOP-10949
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Reporter: Babak Behzad
>Assignee: Babak Behzad
> Fix For: 3.0.0
>
> Attachments: HADOOP-10949-1.patch, HADOOP-10949-2.patch, 
> HADOOP-10949-4.patch, HADOOP-10949-5.patch, HADOOP-10949-6-1.patch, 
> HADOOP-10949-6.patch, HADOOP-10949.patch, HADOOP-10949.patch, 
> HADOOP-10949.patch, HADOOP-10949.patch, HADOOP-10949.patch, 
> HADOOP-10949.patch, HADOOP-10949.patch, HADOOP-10949.patch, 
> HADOOP-10949.patch, HADOOP-10949.patch, HADOOP-10949.patch
>
>
> Write a metrics2 sink plugin for Hadoop to send metrics directly to Apache 
> Kafka, in addition to the current Graphite 
> ([HADOOP-9704|https://issues.apache.org/jira/browse/HADOOP-9704]), Ganglia 
> and File sinks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12406) AbstractMapWritable.readFields throws ClassNotFoundException with custom writables

2015-10-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975384#comment-14975384
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-12406:
--

[~ndouba], is this happening inside a MapReduce job running on top of YARN? 
MapReduce does have job.jar in the system classpath, so that's not explainable.

The only way this may happen is if you are using the MapReduce JobClassLoader 
for this - please let us know the value of mapreduce.job.classloader in your 
job.
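For reference, a quick way to check that value programmatically (a sketch; {{job}} is assumed to be the client-side {{org.apache.hadoop.mapreduce.Job}}):

{code}
Configuration conf = job.getConfiguration();
// true means MapReduce isolates user classes in the job classloader
boolean useJobClassLoader = conf.getBoolean("mapreduce.job.classloader", false);
System.out.println("mapreduce.job.classloader = " + useJobClassLoader);
{code}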

> AbstractMapWritable.readFields throws ClassNotFoundException with custom 
> writables
> --
>
> Key: HADOOP-12406
> URL: https://issues.apache.org/jira/browse/HADOOP-12406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.7.1
> Environment: Ubuntu Linux 14.04 LTS amd64
>Reporter: Nadeem Douba
>Priority: Blocker
>  Labels: bug, hadoop, io, newbie, patch-available
> Attachments: HADOOP-12406.patch
>
>
> Note: I am not an expert at Java, class loaders, or Hadoop. I am just a 
> hacker. My solution might be entirely wrong.
> AbstractMapWritable.readFields throws a ClassNotFoundException when reading 
> custom writables. Debugging the job using remote debugging in IntelliJ 
> revealed that the class loader being used in Class.forName() is different 
> from the thread's current context class loader 
> (Thread.currentThread().getContextClassLoader()). The class path for the 
> system class loader does not include the libraries of the job jar. However, 
> the class path for the context class loader does. The proposed patch changes 
> the class loading mechanism in readFields to use the thread's context class 
> loader instead of the system's default class loader.
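The proposed change amounts to something like this sketch ({{className}} is illustrative for the name read from the stream):

{code}
// Resolve the writable's class via the thread context class loader, which
// sees the job jar's libraries, instead of the system class loader.
ClassLoader cl = Thread.currentThread().getContextClassLoader();
if (cl == null) {
  cl = AbstractMapWritable.class.getClassLoader();  // fall back to the defining loader
}
Class<?> clazz = Class.forName(className, true, cl);
{code}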



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12038) SwiftNativeOutputStream should check whether a file exists or not before deleting

2015-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12038?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975416#comment-14975416
 ] 

Hadoop QA commented on HADOOP-12038:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 36s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 19s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 16s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 31m 17s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.0 Server=1.7.0 
Image:test-patch-base-hadoop-date2015-10-26 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12735755/HADOOP-12038.000.patch
 |
| JIRA Issue | HADOOP-12038 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux dcd8c007166e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-b9c369f/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 56e4f62 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 

[jira] [Updated] (HADOOP-12515) hadoop-kafka module doesn't resolve mockito related classes after imported into Intellij IDEA

2015-10-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12515:
---
Attachment: HADOOP-12515-v1.patch

Uploaded the change as a patch for review.

> hadoop-kafka module doesn't resolve mockito related classes after imported 
> into Intellij IDEA
> -
>
> Key: HADOOP-12515
> URL: https://issues.apache.org/jira/browse/HADOOP-12515
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>Priority: Minor
> Attachments: HADOOP-12515-v1.patch
>
>
> When importing the Hadoop project into IntelliJ IDEA, it was found that the 
> hadoop-kafka module doesn't resolve mockito-related classes. The following 
> change addressed the issue.
> {code}
> --- a/hadoop-tools/hadoop-kafka/pom.xml
> +++ b/hadoop-tools/hadoop-kafka/pom.xml
> @@ -125,5 +125,10 @@
>        <artifactId>junit</artifactId>
>        <scope>test</scope>
>      </dependency>
> +    <dependency>
> +      <groupId>org.mockito</groupId>
> +      <artifactId>mockito-all</artifactId>
> +      <scope>test</scope>
> +    </dependency>
>    </dependencies>
>  </project>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12514) Make static fields in GenericTestUtils for assertExceptionContains() package-private and final

2015-10-26 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12514:
---
Description: 
This is a follow up of [HADOOP-12472].

It makes sense to make the following static fields package-private instead of 
protected, as they are for test purposes and {{TestGenericTestUtils}} is in the 
same package as {{GenericTestUtils}}.

-  protected static String E_NULL_THROWABLE = "Null Throwable";
-  protected static String E_NULL_THROWABLE_STRING = "Null Throwable.toString() 
value";
-  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
exception";

Meanwhile, we may need to make them final.

  was:
This is a follow up of [HADOOP-12472].

It makes sense to make the following static fields package-private instead of 
protected, as they are for test purposes (the protected keyword makes more 
sense for sub-classes).

-  protected static String E_NULL_THROWABLE = "Null Throwable";
-  protected static String E_NULL_THROWABLE_STRING = "Null Throwable.toString() 
value";
-  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
exception";

Meanwhile, we may need to make them final.


> Make static fields in GenericTestUtils for assertExceptionContains() 
> package-private and final
> --
>
> Key: HADOOP-12514
> URL: https://issues.apache.org/jira/browse/HADOOP-12514
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Minor
> Attachments: HADOOP-12514.000.patch
>
>
> This is a follow up of [HADOOP-12472].
> It makes sense to make the following static fields package-private instead 
> of protected, as they are for test purposes and {{TestGenericTestUtils}} is 
> in the same package as {{GenericTestUtils}}.
> -  protected static String E_NULL_THROWABLE = "Null Throwable";
> -  protected static String E_NULL_THROWABLE_STRING = "Null 
> Throwable.toString() value";
> -  protected static String E_UNEXPECTED_EXCEPTION = "but got unexpected 
> exception";
> Meanwhile, we may need to make them final.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975210#comment-14975210
 ] 

Hudson commented on HADOOP-12472:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8710 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8710/])
HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust. (jing9: rev 
a01a209fbed33b2ecaf9e736631e64abefae01aa)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java


> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as an NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the assert



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12040) Adjust inputs order for the decode API in raw erasure coder

2015-10-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12040:
---
Attachment: HADOOP-12040-v3.patch

Updated the patch to address the checkstyle issues found. The failed tests were 
not related.

> Adjust inputs order for the decode API in raw erasure coder
> ---
>
> Key: HADOOP-12040
> URL: https://issues.apache.org/jira/browse/HADOOP-12040
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12040-HDFS-7285-v1.patch, HADOOP-12040-v2.patch, 
> HADOOP-12040-v3.patch
>
>
> Currently we use the parity units + data units order for the inputs, 
> erasedIndexes and outputs parameters in the decode call in the raw erasure 
> coder, which was inherited from HDFS-RAID due to constraints imposed by 
> {{GaloisField}}. As [~zhz] pointed out and [~hitliuyi] agreed, we'd better 
> change the order to make it natural for HDFS usage, where data blocks usually 
> come before parity blocks in a group. Doing this would avoid some tricky 
> reordering logic.
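For illustration, with a 6+3 schema the proposed data-first ordering would read like this (a sketch against the raw coder decode API; buffer names are illustrative):

{code}
// inputs: data units d0..d5 first, then parity units p0..p2; null marks an erased unit.
ByteBuffer[] inputs = new ByteBuffer[] { d0, d1, null, d3, d4, d5, p0, p1, p2 };
int[] erasedIndexes = new int[] { 2 };            // index 2 now names data unit d2 directly
ByteBuffer[] outputs = new ByteBuffer[] { out };  // receives the recovered d2
decoder.decode(inputs, erasedIndexes, outputs);
{code}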



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975264#comment-14975264
 ] 

Hudson commented on HADOOP-12472:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #599 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/599/])
HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust. (jing9: rev 
a01a209fbed33b2ecaf9e736631e64abefae01aa)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as an NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the assert



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975337#comment-14975337
 ] 

Hudson commented on HADOOP-12472:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2477 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2477/])
HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust. (jing9: rev 
a01a209fbed33b2ecaf9e736631e64abefae01aa)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as an NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the assert



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975397#comment-14975397
 ] 

Hudson commented on HADOOP-12472:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2530 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2530/])
HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust. (jing9: rev 
a01a209fbed33b2ecaf9e736631e64abefae01aa)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as an NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the assert



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12385) include nested stack trace in SaslRpcClient.getServerToken()

2015-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975467#comment-14975467
 ] 

Hadoop QA commented on HADOOP-12385:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 26s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 15s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 15s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 14s 
{color} | {color:red} Patch generated 3 new checkstyle issues in 
hadoop-common-project/hadoop-common (total was 98, now 101). {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
46s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 5s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_60. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 58s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_79 Failed junit tests | hadoop.ipc.TestIPC |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768815/HADOOP-12385-003.patch
 |
| JIRA Issue | HADOOP-12385 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 522b9b04c320 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12508) delete fails with exception when lease is held on blob

2015-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975468#comment-14975468
 ] 

Hadoop QA commented on HADOOP-12508:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
50s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 3 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 8s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 23s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 28s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.0 Server=1.7.0 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768829/HADOOP-12508.03.patch 
|
| JIRA Issue | HADOOP-12508 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 7384905fbce0 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-b9c369f/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 56e4f62 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| findbugs | v3.0.0 |
| whitespace | 

[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975465#comment-14975465
 ] 

Tsuyoshi Ozawa commented on HADOOP-9613:


Thank you for the comment. Hmm, let me try again. 

> [JDK8] Update jersey version to latest 1.x release
> --
>
> Key: HADOOP-9613
> URL: https://issues.apache.org/jira/browse/HADOOP-9613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.4.0, 3.0.0
>Reporter: Timothy St. Clair
>Assignee: Tsuyoshi Ozawa
>  Labels: maven
> Attachments: HADOOP-2.2.0-9613.patch, 
> HADOOP-9613.004.incompatible.patch, HADOOP-9613.005.incompatible.patch, 
> HADOOP-9613.006.incompatible.patch, HADOOP-9613.1.patch, HADOOP-9613.2.patch, 
> HADOOP-9613.3.patch, HADOOP-9613.patch
>
>
> Update pom.xml dependencies exposed when running a mvn-rpmbuild against 
> system dependencies on Fedora 18.  
> The existing version is 1.8 which is quite old. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11685) StorageException complaining " no lease ID" during HBase distributed log splitting

2015-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975476#comment-14975476
 ] 

Hadoop QA commented on HADOOP-11685:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 6s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
54s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
31s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 13s 
{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 14s 
{color} | {color:red} hadoop-azure in the patch failed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 14s {color} 
| {color:red} hadoop-azure in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-azure in the patch failed with JDK v1.7.0_79. 
{color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 16s {color} 
| {color:red} hadoop-azure in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 1 line(s) with tabs. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-azure in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 18s {color} 
| {color:red} hadoop-azure in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 18s {color} 
| {color:red} hadoop-azure in the patch failed with JDK v1.7.0_79. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
29s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 8m 55s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.0 Server=1.7.0 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768806/HADOOP-11685.05.patch 
|
| JIRA Issue | HADOOP-11685 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux 7141a833b3e3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-b9c369f/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 56e4f62 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_66 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| findbugs | v3.0.0 |
| 

[jira] [Commented] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975491#comment-14975491
 ] 

Tsuyoshi Ozawa commented on HADOOP-12457:
-

This looks like a bug reported by [~gtCarrera9] on HADOOP-11776. 
https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868

I will create a new issue to address the problem.
The patch itself works well. +1, checking this in.

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975493#comment-14975493
 ] 

Hudson commented on HADOOP-12472:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #540 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/540/])
HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust. (jing9: rev 
a01a209fbed33b2ecaf9e736631e64abefae01aa)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java


> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as an NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the assert



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14975662#comment-14975662
 ] 

Hudson commented on HADOOP-12457:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2478 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2478/])
HADOOP-12457. [JDK8] Fix a failure of compiling common by javadoc. (ozawa: rev 
ea6b183a1a649ad2874050ade8856286728c654c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Fix For: 2.8.0
>
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}
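
One way to clear such errors, sketched here with a hypothetical method for illustration (the committed patch may differ in detail), is to rewrite the literal en dashes (U+2013) as ASCII-safe HTML entities in the javadoc:

{code:java}
/**
 * We flag an RPC as slow by searching for the 68&#8211;95&#8211;99.7 rule.
 * (&#8211; renders as an en dash in the generated docs while keeping the
 * source file pure ASCII, so javadoc succeeds under -encoding ASCII.)
 */
void logSlowRpcs() { }
{code}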



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix compilation of common by javadoc

2015-10-26 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975519#comment-14975519
 ] 

Li Lu commented on HADOOP-12457:


Thanks for the pointer [~ozawa]! I do remember this problem from working with 
jdiff. I can provide more detail in the new issue if you need it. 

> [JDK8] Fix compilation of common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9613) [JDK8] Update jersey version to latest 1.x release

2015-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975764#comment-14975764
 ] 

Hadoop QA commented on HADOOP-9613:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 8s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 20 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
5s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
40s {color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 35s 
{color} | {color:red} hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common in 
trunk cannot run convertXmlToText from findbugs {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 45s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 55s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 29s 
{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s 
{color} | {color:red} hadoop-yarn-server-applicationhistoryservice in the patch 
failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 21s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s 
{color} | {color:red} hadoop-mapreduce-client-app in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 16s 
{color} | {color:red} hadoop-mapreduce-client-hs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 27s 
{color} | {color:red} root-jdk1.8.0_60 with JDK v1.8.0_60 has problems. {color} 
|
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 17s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red} 22m 40s 
{color} | {color:red} root-jdk1.7.0_79 with JDK v1.7.0_79 has problems. {color} 
|
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s 
{color} | {color:red} hadoop-yarn-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 20s 
{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 4m 0s 
{color} | {color:green} the patch passed 

[jira] [Commented] (HADOOP-12472) Make GenericTestUtils.assertExceptionContains robust

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975507#comment-14975507
 ] 

Hudson commented on HADOOP-12472:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #588 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/588/])
HADOOP-12472. Make GenericTestUtils.assertExceptionContains robust. (jing9: rev 
a01a209fbed33b2ecaf9e736631e64abefae01aa)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/TestGenericTestUtils.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/test/GenericTestUtils.java


> Make GenericTestUtils.assertExceptionContains robust
> 
>
> Key: HADOOP-12472
> URL: https://issues.apache.org/jira/browse/HADOOP-12472
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12472-001.patch
>
>
> {{GenericTestUtils.assertExceptionContains}} calls 
> {{Exception.getMessage()}}, followed by msg.contains().
> This will NPE for an exception with a null message, such as NPE.
> # it should call toString()
> # and do an assertNotNull on the result in case some subclass does something 
> very bad
> # and for safety, check the asser



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12514) Make static fields in GenericTestUtils for assertExceptionContains() package-private and final

2015-10-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12514?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975536#comment-14975536
 ] 

Hadoop QA commented on HADOOP-12514:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 5s 
{color} | {color:blue} docker + precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 21s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
32s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 23s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 23s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 4m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_60 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} the patch passed with JDK v1.7.0_79 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 43s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_60. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 52s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_79. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.7.1 Server=1.7.1 
Image:test-patch-base-hadoop-date2015-10-27 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12768839/HADOOP-12514.000.patch
 |
| JIRA Issue | HADOOP-12514 |
| Optional Tests |  asflicense  javac  javadoc  mvninstall  unit  findbugs  
checkstyle  compile  |
| uname | Linux c7db656fd0e8 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/apache-yetus-b9c369f/dev-support/personality/hadoop.sh
 |
| git revision | trunk / 56e4f62 |
| Default Java | 1.7.0_79 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_60 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_79 |
| findbugs | v3.0.0 |
| JDK v1.7.0_79  Test Results | 

[jira] [Updated] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12457:

Summary: [JDK8] Fix a failure of compiling common by javadoc  (was: [JDK8] 
Fix compilation of common by javadoc)

> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12516) jdiff fails with error 'duplicate comment id' about MetricsSystem.register_changed

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-12516:
---

 Summary: jdiff fails with error 'duplicate comment id' about 
MetricsSystem.register_changed
 Key: HADOOP-12516
 URL: https://issues.apache.org/jira/browse/HADOOP-12516
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tsuyoshi Ozawa


"mvn package -Pdist,docs -DskipTests" fails with following error. It looks like 
jdiff problem as Li Lu mentioned on HADOOP-11776.

{quote}
  [javadoc] ExcludePrivateAnnotationsJDiffDoclet
  [javadoc] JDiff: doclet started ...
  [javadoc] JDiff: reading the old API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
 API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
the file 'Apache_Hadoop_Common_2.6.0.xml'

  ...

  [javadoc] JDiff: reading the new API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
 incorrectly formatted @link in text: Options to be used by the {@link Find} 
command and its {@link Expression}s.

  

  [javadoc] Error: duplicate comment id: 
org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
java.lang.String, T)
{quote}
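
The collision presumably comes from the two generic register overloads in MetricsSystem; their signatures, paraphrased here for illustration, both flatten to register_changed(java.lang.String, java.lang.String, T) once jdiff erases the type parameters:

{code:java}
// Paraphrased from org.apache.hadoop.metrics2.MetricsSystem -- shown only to
// illustrate why jdiff assigns both overloads the same comment id.
public abstract <T> T register(String name, String desc, T source);
public abstract <T extends MetricsSink> T register(String name, String desc, T sink);
{code}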

A link to the comment by Li Lu is [here| 
https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868]




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11887:
---
Attachment: HADOOP-11887-v7.patch

Rebased the patch on trunk.

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, 
> HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, HADOOP-11887-v7.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).
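
For illustration, a minimal sketch of that loading pattern (class and method names are hypothetical, not taken from the patch):

{code:java}
public final class IsaLoader {
  private static volatile boolean loaded;

  static {
    try {
      // Resolves libisal.so on *nix or isal.dll on Windows from
      // java.library.path; JNI glue would then bind the ISA-L entry points.
      System.loadLibrary("isal");
      loaded = true;
    } catch (UnsatisfiedLinkError e) {
      loaded = false; // fall back to the pure-Java coders when ISA-L is absent
    }
  }

  public static boolean isLoaded() { return loaded; }
}
{code}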



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12516) jdiff fails with error 'duplicate comment id' about MetricsSystem.register_changed

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12516:

Description: 
"mvn package -Pdist,docs -DskipTests" fails with following error. It looks like 
jdiff problem as Li Lu mentioned on HADOOP-11776.

{quote}
  [javadoc] ExcludePrivateAnnotationsJDiffDoclet
  [javadoc] JDiff: doclet started ...
  [javadoc] JDiff: reading the old API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
 API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
the file 'Apache_Hadoop_Common_2.6.0.xml'

  ...

  [javadoc] JDiff: reading the new API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
 incorrectly formatted @link in text: Options to be used by the \{@link Find\} 
command and its \{@link Expression\}s.

  

  [javadoc] Error: duplicate comment id: 
org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
java.lang.String, T)
{quote}

A link to the comment by Li Lu is [here| 
https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868].


  was:
"mvn package -Pdist,docs -DskipTests" fails with following error. It looks like 
jdiff problem as Li Lu mentioned on HADOOP-11776.

{quote}
  [javadoc] ExcludePrivateAnnotationsJDiffDoclet
  [javadoc] JDiff: doclet started ...
  [javadoc] JDiff: reading the old API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
 API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
the file 'Apache_Hadoop_Common_2.6.0.xml'

  ...

  [javadoc] JDiff: reading the new API in from file 
'/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
 incorrectly formatted @link in text: Options to be used by the {@link Find} 
command and its {@link Expression}s.

  

  [javadoc] Error: duplicate comment id: 
org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
java.lang.String, T)
{quote}

A link to the comment by Li Lu is [here| 
https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868]



> jdiff fails with error 'duplicate comment id' about 
> MetricsSystem.register_changed
> --
>
> Key: HADOOP-12516
> URL: https://issues.apache.org/jira/browse/HADOOP-12516
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>
> "mvn package -Pdist,docs -DskipTests" fails with following error. It looks 
> like jdiff problem as Li Lu mentioned on HADOOP-11776.
> {quote}
>   [javadoc] ExcludePrivateAnnotationsJDiffDoclet
>   [javadoc] JDiff: doclet started ...
>   [javadoc] JDiff: reading the old API in from file 
> '/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/dev-support/jdiff/Apache_Hadoop_Common_2.6.0.xml'...Warning:
>  API identifier in the XML file (hadoop-core 2.6.0) differs from the name of 
> the file 'Apache_Hadoop_Common_2.6.0.xml'
>   ...
>   [javadoc] JDiff: reading the new API in from file 
> '/home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/target/site/jdiff/xml/Apache_Hadoop_Common_2.8.0-SNAPSHOT.xml'...Warning:
>  incorrectly formatted @link in text: Options to be used by the \{@link 
> Find\} command and its \{@link Expression\}s.
>   
>   [javadoc] Error: duplicate comment id: 
> org.apache.hadoop.metrics2.MetricsSystem.register_changed(java.lang.String, 
> java.lang.String, T)
> {quote}
> A link to the comment by Li Lu is [here| 
> https://issues.apache.org/jira/browse/HADOOP-11776?focusedCommentId=14391868&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14391868].



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975552#comment-14975552
 ] 

Tsuyoshi Ozawa commented on HADOOP-12457:
-

[~gtCarrera9] I've opened HADOOP-12516. Could you give us the details of the problem 
there?

> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Fix For: 2.8.0
>
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-26 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12457:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~ajisakaa] for your contribution 
and thanks [~ste...@apache.org] for your review.

> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Fix For: 2.8.0
>
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11887:
---
Target Version/s:   (was: HDFS-7285)

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, 
> HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, HADOOP-11887-v7.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11887:
---
Status: Patch Available  (was: Open)

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, 
> HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, HADOOP-11887-v7.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11887) Introduce Intel ISA-L erasure coding library for the native support

2015-10-26 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11887?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11887:
---
Status: Open  (was: Patch Available)

> Introduce Intel ISA-L erasure coding library for the native support
> ---
>
> Key: HADOOP-11887
> URL: https://issues.apache.org/jira/browse/HADOOP-11887
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-11887-HDFS-7285-v6.patch, HADOOP-11887-v1.patch, 
> HADOOP-11887-v2.patch, HADOOP-11887-v3.patch, HADOOP-11887-v4.patch, 
> HADOOP-11887-v5.patch, HADOOP-11887-v5.patch, HADOOP-11887-v7.patch
>
>
> This is to introduce Intel ISA-L erasure coding library for the native 
> support, via dynamic loading mechanism (dynamic module, like *.so in *nix and 
> *.dll on Windows).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12457) [JDK8] Fix a failure of compiling common by javadoc

2015-10-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14975679#comment-14975679
 ] 

Hudson commented on HADOOP-12457:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2531 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2531/])
HADOOP-12457. [JDK8] Fix a failure of compiling common by javadoc. (ozawa: rev 
ea6b183a1a649ad2874050ade8856286728c654c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Delete.java


> [JDK8] Fix a failure of compiling common by javadoc
> ---
>
> Key: HADOOP-12457
> URL: https://issues.apache.org/jira/browse/HADOOP-12457
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Tsuyoshi Ozawa
>Assignee: Akira AJISAKA
> Fix For: 2.8.0
>
> Attachments: HADOOP-12457.00.patch, HADOOP-12457.01.patch, 
> HADOOP-12457.02.patch
>
>
> Delete.java and Server.java cannot be compiled with "unmappable character for 
> encoding ASCII". 
> {quote}
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc] ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]  ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
>   [javadoc]   ^
>   [javadoc] 
> /home/ubuntu/hadoop-dev/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java:416:
>  error: unmappable character for encoding ASCII
>   [javadoc]* by searching for 68???95???99.7 rule. We flag an RPC as slow 
> RPC
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

