[jira] [Commented] (HADOOP-11801) Update BUILDING.txt

2015-04-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481869#comment-14481869
 ] 

Arpit Agarwal commented on HADOOP-11801:


Thanks for contributing the doc update [~gliptak]. Is there a package specific 
to protobuf 2.5.0? 

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11252) RPC client write does not time out by default

2015-04-06 Thread Ray Chiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482010#comment-14482010
 ] 

Ray Chiang commented on HADOOP-11252:
-

+1 (non-binding).  The code changes look fine to me.  As an easy test of its 
effect, I'm able to cause problems on my cluster by setting the value to 1ms.  
Reasonable values run fine for me.

I'd like to see the new property properly documented.  From what I can see, 
core-default.xml looks like the right place.
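As a sketch of the kind of documentation being requested, a core-default.xml entry might look like the following. The property name and default value here are hypothetical placeholders, since the patch's actual key is not quoted in this thread:

```xml
<!-- Hypothetical sketch of a core-default.xml entry; the property name and
     default value are placeholders, not the patch's actual key. -->
<property>
  <name>ipc.client.write.timeout.ms</name>
  <value>60000</value>
  <description>Timeout in milliseconds for RPC client writes. When unset or 0,
  the client falls back to the ping interval instead of disabling the timeout.
  </description>
</property>
```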

 RPC client write does not time out by default
 -

 Key: HADOOP-11252
 URL: https://issues.apache.org/jira/browse/HADOOP-11252
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.5.0
Reporter: Wilfred Spiegelenburg
Assignee: Wilfred Spiegelenburg
Priority: Critical
 Attachments: HADOOP-11252.patch


 The RPC client has a default timeout set to 0 when no timeout is passed in. 
 This means that the network connection created will not time out when used to 
 write data. The issue has shown up in YARN-2578 and HDFS-4858. Timeouts for 
 writes then fall back to the TCP-level retry (configured via tcp_retries2) 
 and end up between 15 and 30 minutes, which is too long for a default 
 behaviour.
 Using 0 as the default value for timeout is incorrect. We should use a sane 
 value for the timeout and the ipc.ping.interval configuration value is a 
 logical choice for it. The default behaviour should be changed from 0 to the 
 value read for the ping interval from the Configuration.
 Fixing it in common makes more sense than finding and changing all other 
 points in the code that do not pass in a timeout.
 Offending code lines:
 https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L488
 and 
 https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L350
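 The proposed fallback can be sketched as follows. This is an illustrative simulation of the idea (substitute the ping interval when no explicit timeout is configured), not the actual patch: the key name mirrors Hadoop's ipc.ping.interval, but the helper itself and the use of java.util.Properties in place of Hadoop's Configuration are assumptions for the sake of a self-contained example.

```java
import java.util.Properties;

// Illustrative sketch (not the actual patch): when no RPC timeout is
// configured, fall back to the ping interval instead of 0 (no timeout).
public class RpcTimeoutSketch {
    // Key name mirrors Hadoop's configuration; the helper is hypothetical.
    static final String PING_INTERVAL_KEY = "ipc.ping.interval";
    static final int PING_INTERVAL_DEFAULT = 60000; // one minute

    static int effectiveTimeout(Properties conf, int requestedTimeout) {
        if (requestedTimeout > 0) {
            return requestedTimeout; // caller supplied an explicit timeout
        }
        // Fall back to the ping interval rather than 0, which disables timeouts
        // and leaves writes to the 15-30 minute TCP retry behaviour.
        String v = conf.getProperty(PING_INTERVAL_KEY);
        return v != null ? Integer.parseInt(v) : PING_INTERVAL_DEFAULT;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        System.out.println(effectiveTimeout(conf, 0));    // 60000
        conf.setProperty(PING_INTERVAL_KEY, "30000");
        System.out.println(effectiveTimeout(conf, 0));    // 30000
        System.out.println(effectiveTimeout(conf, 5000)); // 5000
    }
}
```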





[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Attachment: HADOOP-11746-08.patch

-08:
* borrow some code from the folks at HBase to help make the checkstyle check 
more useful
* add an EOL whitespace check because I'm tired of checking for it myself. :D
* fix some extraneous output when no tests have failed


 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
 HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
 HADOOP-11746-08.patch


 This code is bad and you should feel bad.





[jira] [Created] (HADOOP-11809) Building hadoop on windows 64 bit, windows 7.1 SDK : \hadoop-common\target\findbugsXml.xml does not exist

2015-04-06 Thread Umesh Kant (JIRA)
Umesh Kant created HADOOP-11809:
---

 Summary: Building hadoop on windows 64 bit, windows 7.1 SDK : 
\hadoop-common\target\findbugsXml.xml does not exist
 Key: HADOOP-11809
 URL: https://issues.apache.org/jira/browse/HADOOP-11809
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.6.0
Reporter: Umesh Kant


I am trying to build Hadoop 2.6.0 on Windows 7 64-bit with the Windows 7.1 SDK. 
I have gone through the BUILDING.txt file and followed all the prerequisites 
for building on Windows. Still, when I try to build, I get the following error:

Maven command: mvn package -X -Pdist -Pdocs -Psrc -Dtar -DskipTests 
-Pnative-win findbugs:findbugs

[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 04:35 min
[INFO] Finished at: 2015-04-03T23:16:57-04:00
[INFO] Final Memory: 123M/1435M
[INFO] 
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-common: An Ant BuildException has occured: input file C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target\findbugsXml.xml does not exist
[ERROR] around Ant part ...<xslt in="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/findbugsXml.xml" style="C:\findbugs-3.0.1/src/xsl/default.xsl" out="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/site/findbugs.html"/>... @ 44:232 in C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target\antrun\build-main.xml
[ERROR] -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project hadoop-common: An Ant BuildException has occured: input file C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target\findbugsXml.xml does not exist
around Ant part ...<xslt in="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/findbugsXml.xml" style="C:\findbugs-3.0.1/src/xsl/default.xsl" out="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/site/findbugs.html"/>... @ 44:232 in C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target\antrun\build-main.xml
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:116)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:80)
at org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:307)
at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:193)
at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:106)
at org.apache.maven.cli.MavenCli.execute(MavenCli.java:862)
at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:286)
at org.apache.maven.cli.MavenCli.main(MavenCli.java:197)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
Caused by: org.apache.maven.plugin.MojoExecutionException: An Ant BuildException has occured: input file C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target\findbugsXml.xml does not exist
around Ant part ...<xslt in="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/findbugsXml.xml" style="C:\findbugs-3.0.1/src/xsl/default.xsl" out="C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target/site/findbugs.html"/>... @ 44:232 in C:\H\hadoop-2.6.0-src\hadoop-common-project\hadoop-common\target\antrun\build-main.xml
at org.apache.maven.plugin.antrun.AntRunMojo.execute(AntRunMojo.java:355)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134)
at 

[jira] [Commented] (HADOOP-11805) Better to rename some raw erasure coders

2015-04-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482173#comment-14482173
 ] 

Zhe Zhang commented on HADOOP-11805:


Thanks for the work Kai. So the main updates are:
# Xor... -> XOR...
# JRS... -> RS...

Is that right? If so, the main direction sounds good to me.

 Better to rename some raw erasure coders
 

 Key: HADOOP-11805
 URL: https://issues.apache.org/jira/browse/HADOOP-11805
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11805-v1.patch


 While working on more coders, it was found better to rename some existing 
 raw coders for consistency and more meaningful names. As a result, we may have:
 XORRawErasureCoder, in Java
 NativeXORRawErasureCoder, in native
 RSRawErasureCoder, in Java
 NativeRSRawErasureCoder, in native and using ISA-L





[jira] [Commented] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2015-04-06 Thread Rick Kellogg (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481998#comment-14481998
 ] 

Rick Kellogg commented on HADOOP-11090:
---

Suggest we look into using Animal Sniffer 
(http://mojo.codehaus.org/animal-sniffer/) to ensure API compatibility between 
releases.

The folks at Spring Source use it to help with Java 6-8 compatibility as well.

See: 
http://spring.io/blog/2015/04/03/how-spring-achieves-compatibility-with-java-6-7-and-8
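As an illustrative sketch of what wiring Animal Sniffer into a Maven build looks like (the plugin coordinates are real; the version numbers and the chosen signature artifact are assumptions, not taken from this thread):

```xml
<!-- Sketch only: versions and the java17 signature artifact are illustrative. -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>animal-sniffer-maven-plugin</artifactId>
  <version>1.11</version>
  <configuration>
    <signature>
      <!-- Fails the build if code uses APIs absent from the target JDK. -->
      <groupId>org.codehaus.mojo.signature</groupId>
      <artifactId>java17</artifactId>
      <version>1.0</version>
    </signature>
  </configuration>
  <executions>
    <execution>
      <id>check-jdk-api</id>
      <phase>test</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```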

 [Umbrella] Support Java 8 in Hadoop
 ---

 Key: HADOOP-11090
 URL: https://issues.apache.org/jira/browse/HADOOP-11090
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly 
 works with Java 8 is important for the Apache community.
   
 This JIRA is to track the issues/experiences encountered during Java 8 
 migration. If you find a potential bug, please create a separate JIRA, either 
 as a sub-task or linked to this JIRA.
 If you find a Hadoop or JVM configuration tuning, you can create a JIRA as 
 well, or you can add a comment here.





[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Status: Open  (was: Patch Available)

 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
 HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
 HADOOP-11746-08.patch


 This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Status: Patch Available  (was: Open)

 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
 HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
 HADOOP-11746-08.patch


 This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Attachment: HADOOP-11746-09.patch

-09:
* -08 was the wrong file. woops.

 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
 HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
 HADOOP-11746-08.patch, HADOOP-11746-09.patch


 This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-04-06 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Attachment: (was: HADOOP-11746-08.patch)

 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
 HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
 HADOOP-11746-09.patch


 This code is bad and you should feel bad.





[jira] [Commented] (HADOOP-11805) Better to rename some raw erasure coders

2015-04-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482209#comment-14482209
 ] 

Kai Zheng commented on HADOOP-11805:


Yes, you're right. For the native ones, I would come up with this naming style, 
{{NativeXORRawErasureCoder}} and {{NativeRSRawErasureCoder}} (using ISA-L), 
later. Without the _Native_ prefix, a coder is meant to be implemented in Java.

 Better to rename some raw erasure coders
 

 Key: HADOOP-11805
 URL: https://issues.apache.org/jira/browse/HADOOP-11805
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11805-v1.patch


 While working on more coders, it was found better to rename some existing 
 raw coders for consistency and more meaningful names. As a result, we may have:
 XORRawErasureCoder, in Java
 NativeXORRawErasureCoder, in native
 RSRawErasureCoder, in Java
 NativeRSRawErasureCoder, in native and using ISA-L





[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482083#comment-14482083
 ] 

Hadoop QA commented on HADOOP-11746:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723454/HADOOP-11746-08.patch
  against trunk revision 3fb5abf.

{color:red}-1 @author{color}.  The patch appears to contain 13 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6068//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6068//artifact/patchprocess/patchReleaseAuditProblems.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6068//console

This message is automatically generated.

 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
 HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
 HADOOP-11746-09.patch


 This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-11801) Update BUILDING.txt

2015-04-06 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-11801:
---
Assignee: Gabor Liptak

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Gabor Liptak
Assignee: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11801) Update BUILDING.txt

2015-04-06 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482489#comment-14482489
 ] 

Arpit Agarwal commented on HADOOP-11801:


Okay, can we just add _required_ after protobuf 2.5.0 to emphasize the version? 
I don't think we support 2.6.x or newer, in case Ubuntu upgrades the package 
version.
{code}
-* ProtocolBuffer 2.5.0
+* ProtocolBuffer 2.5.0 (required)
   $ sudo apt-get -y install libprotobuf-dev protobuf-compiler
{code}

Thanks.

 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Gabor Liptak
Assignee: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482373#comment-14482373
 ] 

Hadoop QA commented on HADOOP-11746:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12723460/HADOOP-11746-09.patch
  against trunk revision 3fb5abf.

{color:red}-1 @author{color}.  The patch appears to contain 13 @author tags 
which the Hadoop community has agreed to not allow in code contributions.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1148 javac 
compiler warnings (more than the trunk's current 208 warnings).

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
43 warning messages.
See 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6069//artifact/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 
release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.TestDFSOutputStream

  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6069//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6069//artifact/patchprocess/patchReleaseAuditProblems.txt
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6069//artifact/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6069//console

This message is automatically generated.

 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch, HADOOP-11746-03.patch, HADOOP-11746-04.patch, 
 HADOOP-11746-05.patch, HADOOP-11746-06.patch, HADOOP-11746-07.patch, 
 HADOOP-11746-09.patch


 This code is bad and you should feel bad.





[jira] [Commented] (HADOOP-11801) Update BUILDING.txt

2015-04-06 Thread Gabor Liptak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482355#comment-14482355
 ] 

Gabor Liptak commented on HADOOP-11801:
---

Arpit,

Ubuntu 14.04 comes with protobuf 2.5.0*

http://packages.ubuntu.com/trusty/libprotobuf-dev

and it likely stays 2.5.0*

A specific version can be installed by running, for example:

sudo apt-get -y install libprotobuf-dev=2.5.0-9ubuntu1 
protobuf-compiler=2.5.0-9ubuntu1

but we will not get any patches this way ...



 Update BUILDING.txt
 ---

 Key: HADOOP-11801
 URL: https://issues.apache.org/jira/browse/HADOOP-11801
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Gabor Liptak
Priority: Minor
 Attachments: HADOOP-11801.patch


 ProtocolBuffer is packaged in Ubuntu





[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-04-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14482617#comment-14482617
 ] 

Kai Zheng commented on HADOOP-11717:


Thanks for the update.
bq. there is no reason to encrypt and decrypt here - the assumption is that they 
are to be protected with transport security.
Sorry for my late response. I don't quite agree with this: transport security 
protects the token from being leaked in transit, while an encryption layer in 
the token itself protects sensitive identity privacy, which means that if 
someone loses his/her token, he/she won't expose any sensitive credential 
identity attributes. I thought that's why JWE is defined and required. I 
understand that in your case/requirement/scenario you may not use the 
encryption feature, but it can be useful for others. I do think we can leave 
this aspect for future consideration, like the tasks in HADOOP-11766. 

I missed mentioning another point: why do we couple this with 
{{AltKerberosAuthenticationHandler}}? I thought a dedicated authentication 
handler like {{AuthTokenAuthenticationHandler}} may make more sense. Still, 
this may be handled separately if that sounds good.

Another point, made by [~wheat9] here and also widely discussed before: it 
would be good to have a generic token abstraction separate from the JWT stuff. 
Again, I would follow up on this in the HADOOP-11766 tasks.

We still need to think about how to apply this new mechanism across all the web 
interfaces for HDFS and YARN. I will file another task for this.

Related to the above points, to make the work more general, I suggest we rename 
the following configuration items:
authentication.provider.url => token.authentication.provider.url
public.key.pem => token.signature.publickey
expected.jwt.audiences => expected.token.audiences

Maybe it's a little late to mention these. What I'm worried about is that once 
we have this in trunk, we won't be able to enhance or follow up easily 
afterwards. Any good ideas for addressing this concern? Thanks!
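To make the signing-versus-encryption distinction above concrete, here is a toy illustration of the JWT structure (header.payload.signature, each base64url-encoded) using only the JDK. The real validation in the patch is done with the nimbus-jose-jwt library; this sketch merely shows that a signed-but-unencrypted token's claims are readable by anyone holding it, which is where JWE would come in.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Toy JWT-shape demo: builds a fake token and decodes its payload.
// The header/payload JSON and the "sig" placeholder are made up for
// illustration; no real signing or verification happens here.
public class JwtShapeDemo {
    static String b64url(String s) {
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(s.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        String header  = "{\"alg\":\"RS256\"}";
        String payload = "{\"sub\":\"guest\",\"exp\":1428364800}";
        // A signed-but-unencrypted JWT: the payload is only encoded, not
        // encrypted, so the identity claims are visible to any holder.
        String token = b64url(header) + "." + b64url(payload) + "." + b64url("sig");

        String[] parts = token.split("\\.");
        String decodedPayload = new String(
                Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        System.out.println(parts.length);   // 3
        System.out.println(decodedPayload); // the claims are plainly readable
    }
}
```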



 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch, HADOOP-11717-8.patch, 
 RedirectingWebSSOwithJWTforHadoopWebUIs.pdf


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of the nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.





[jira] [Commented] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-04-06 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481047#comment-14481047
 ] 

Vinayakumar B commented on HADOOP-11645:


Seems like this needs a rebase.

 Erasure Codec API covering the essential aspects for an erasure code
 

 Key: HADOOP-11645
 URL: https://issues.apache.org/jira/browse/HADOOP-11645
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11645-v1.patch, HADOOP-11645-v2.patch


 This is to define the even higher level API *ErasureCodec* to possibly 
 consider all the essential aspects for an erasure code, as discussed in 
 detail in HDFS-7337. Generally, it will cover the necessary configurations 
 about which *RawErasureCoder* to use for the code scheme, how to form and 
 lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* 
 will be used in both the client and the DataNode, in all the supported modes 
 related to EC.





[jira] [Commented] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481126#comment-14481126
 ] 

Hadoop QA commented on HADOOP-11772:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12723325/HADOOP-11772-wip-002.patch
  against trunk revision 96d7211.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6066//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6066//console

This message is automatically generated.

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, dfs-sync-ipc.png, sync-client-bt.png, 
 sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
   SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!
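 One possible direction for reducing this contention (a sketch only, not necessarily what the attached patches do) is to replace the synchronized lookup with a ConcurrentMap, so invokers with different factories no longer serialize on a single lock. The String key stands in for the SocketFactory used in Hadoop, and the ref-counting is simplified:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a lock-free client cache using computeIfAbsent instead of a
// synchronized getClient. "factoryKey" stands in for the SocketFactory key,
// and Client/ref-counting are simplified stand-ins for Hadoop's classes.
public class ClientCacheSketch {
    static class Client {
        private final AtomicInteger refCount = new AtomicInteger();
        void incCount() { refCount.incrementAndGet(); }
        int count() { return refCount.get(); }
    }

    private final ConcurrentMap<String, Client> clients = new ConcurrentHashMap<>();

    Client getClient(String factoryKey) {
        // computeIfAbsent creates at most one Client per key, without
        // serializing every caller on a single object monitor.
        Client client = clients.computeIfAbsent(factoryKey, k -> new Client());
        client.incCount();
        return client;
    }

    public static void main(String[] args) {
        ClientCacheSketch cache = new ClientCacheSketch();
        Client a = cache.getClient("default");
        Client b = cache.getClient("default");
        System.out.println(a == b);    // true: same cached client
        System.out.println(a.count()); // 2: two references taken
    }
}
```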





[jira] [Commented] (HADOOP-11665) Provide and unify cross platform byteorder support in native code

2015-04-06 Thread Ayappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481093#comment-14481093
 ] 

Ayappan commented on HADOOP-11665:
--

Any update here? This issue has been lingering around for a long time.

 Provide and unify cross platform byteorder support in native code
 -

 Key: HADOOP-11665
 URL: https://issues.apache.org/jira/browse/HADOOP-11665
 Project: Hadoop Common
  Issue Type: Bug
  Components: native, util
Affects Versions: 2.4.1, 2.6.0
 Environment: PowerPC Big Endian and other Big Endian platforms
Reporter: Binglin Chang
Assignee: Binglin Chang
 Attachments: HADOOP-11665.001.patch








[jira] [Updated] (HADOOP-11772) RPC Invoker relies on static ClientCache which has synchronized(this) blocks

2015-04-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11772:
---
Attachment: HADOOP-11772-wip-002.patch

Fixed the test failure.

 RPC Invoker relies on static ClientCache which has synchronized(this) blocks
 

 Key: HADOOP-11772
 URL: https://issues.apache.org/jira/browse/HADOOP-11772
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: ipc, performance
Reporter: Gopal V
Assignee: Akira AJISAKA
 Attachments: HADOOP-11772-001.patch, HADOOP-11772-wip-001.patch, 
 HADOOP-11772-wip-002.patch, dfs-sync-ipc.png, sync-client-bt.png, 
 sync-client-threads.png


 {code}
   private static ClientCache CLIENTS=new ClientCache();
 ...
 this.client = CLIENTS.getClient(conf, factory);
 {code}
 Meanwhile in ClientCache
 {code}
 public synchronized Client getClient(Configuration conf,
 SocketFactory factory, Class<? extends Writable> valueClass) {
 ...
Client client = clients.get(factory);
 if (client == null) {
   client = new Client(valueClass, conf, factory);
   clients.put(factory, client);
 } else {
   client.incCount();
 }
 {code}
 All invokers end up calling these methods, resulting in IPC clients choking 
 up.
 !sync-client-threads.png!
 !sync-client-bt.png!
 !dfs-sync-ipc.png!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11645) Erasure Codec API covering the essential aspects for an erasure code

2015-04-06 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481046#comment-14481046
 ] 

Vinayakumar B commented on HADOOP-11645:


+1, patch looks good.

 Erasure Codec API covering the essential aspects for an erasure code
 

 Key: HADOOP-11645
 URL: https://issues.apache.org/jira/browse/HADOOP-11645
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11645-v1.patch, HADOOP-11645-v2.patch


 This is to define the even higher level API *ErasureCodec* to possibly 
 consider all the essential aspects of an erasure code, as discussed in 
 HDFS-7337 in detail. Generally, it will cover the necessary configuration 
 of which *RawErasureCoder* to use for the code scheme, how to form and 
 lay out the BlockGroup, etc. It will also discuss how an *ErasureCodec* 
 will be used in both the client and the DataNode, in all the supported 
 modes related to EC.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11649) Allow to configure multiple erasure codecs

2015-04-06 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481053#comment-14481053
 ] 

Vinayakumar B commented on HADOOP-11649:


One question I have:

1. Java's ServiceLoader will load all the implementations of the specified 
type found on the entire classpath, right? Could it miss any implementation?
2. The codecs specified via configuration must also be on the classpath to be 
added successfully. IMO these will already have been loaded by the Java 
ServiceLoader, right?
If all the codecs are loaded by ServiceLoader, then there is no need for a 
configuration to add more codecs.

If the above is true, then the configuration specifying codecs can act as an 
enabler, so that only the codecs listed in the configuration are used, rather 
than all of those available on the classpath (which are loaded by 
ServiceLoader).
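A minimal standalone sketch of the filtering behaviour proposed above (the ErasureCodec interface, registry class, and codec names here are illustrative, not Hadoop's actual API):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.ServiceLoader;
import java.util.Set;

// Illustrative codec interface; stands in for the real ErasureCodec type.
interface ErasureCodec {
    String name();
}

class CodecRegistry {
    // ServiceLoader discovers every implementation on the classpath; the
    // configuration acts only as an enabler that filters the discovered set.
    static Map<String, ErasureCodec> loadEnabled(Set<String> enabledNames) {
        Map<String, ErasureCodec> enabled = new HashMap<>();
        // ServiceLoader walks META-INF/services entries on the classpath...
        for (ErasureCodec codec : ServiceLoader.load(ErasureCodec.class)) {
            // ...and only codecs named in the configuration are kept.
            if (enabledNames.contains(codec.name())) {
                enabled.put(codec.name(), codec);
            }
        }
        return enabled;
    }
}

public class CodecFilterDemo {
    public static void main(String[] args) {
        Set<String> configured = new HashSet<>(Arrays.asList("rs", "xor"));
        // In this standalone sketch no providers are registered on the
        // classpath, so the registry ends up empty; in Hadoop the codec
        // jars would contribute the implementations.
        System.out.println(CodecRegistry.loadEnabled(configured).size());
    }
}
```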


Coming to the patch:

1. The conf name specified in {{public static final String 
IO_ERASURE_CODECS_KEY = "io.erasure.codecs";}} and the one in 
{{core-default.xml}} are different.
2. ErasureCodec.java: since an explicit dependency is mentioned on this Jira, 
this class need not be included in the patch.
3. ErasureCodecLoader#load(..) should be changed based on the answer to the 
question above.
4. Adding the {{io.erasurecode.codec.rs.rawcoder}} configuration seems 
unrelated to this Jira.

 Allow to configure multiple erasure codecs
 --

 Key: HADOOP-11649
 URL: https://issues.apache.org/jira/browse/HADOOP-11649
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11649-v1.patch


 This is to allow to configure erasure codec and coder in core-site 
 configuration file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) [Doc] wrap value of io.serializations in core-default.xml to fit better in browser

2015-04-06 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481250#comment-14481250
 ] 

kanaka kumar avvaru commented on HADOOP-10366:
--

As the issue has been stale for about a year, I would like to work on it and 
provide a patch. [~chengwei-yang], if you would like to continue working on 
it, please feel free to assign it back to yourself.

 [Doc] wrap value of io.serializations in core-default.xml to fit better in 
 browser
 --

 Key: HADOOP-10366
 URL: https://issues.apache.org/jira/browse/HADOOP-10366
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Chengwei Yang
Priority: Trivial
  Labels: documentation, newbie, patch
 Attachments: HADOOP-10366.patch


 The io.serializations property in core-default.xml has a very long value on a 
 single line, as below
 {code}
 <property>
   <name>io.serializations</name>
   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
   <description>A list of serialization classes that can be used for
   obtaining serializers and deserializers.</description>
 </property>
 {code}
 which not only breaks the code style (a very long line) but also does not fit 
 well in a browser. Because of this single very long line, the description 
 column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-10366) [Doc] wrap value of io.serializations in core-default.xml to fit better in browser

2015-04-06 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru reassigned HADOOP-10366:


Assignee: kanaka kumar avvaru

 [Doc] wrap value of io.serializations in core-default.xml to fit better in 
 browser
 --

 Key: HADOOP-10366
 URL: https://issues.apache.org/jira/browse/HADOOP-10366
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Chengwei Yang
Assignee: kanaka kumar avvaru
Priority: Trivial
  Labels: documentation, newbie, patch
 Attachments: HADOOP-10366.patch


 The io.serializations property in core-default.xml has a very long value on a 
 single line, as below
 {code}
 <property>
   <name>io.serializations</name>
   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
   <description>A list of serialization classes that can be used for
   obtaining serializers and deserializers.</description>
 </property>
 {code}
 which not only breaks the code style (a very long line) but also does not fit 
 well in a browser. Because of this single very long line, the description 
 column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10366) [Doc] wrap value of io.serializations in core-default.xml to fit better in browser

2015-04-06 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HADOOP-10366:
-
Attachment: HADOOP-10366-wrap01.patch

 [Doc] wrap value of io.serializations in core-default.xml to fit better in 
 browser
 --

 Key: HADOOP-10366
 URL: https://issues.apache.org/jira/browse/HADOOP-10366
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Chengwei Yang
Assignee: kanaka kumar avvaru
Priority: Trivial
  Labels: documentation, newbie, patch
 Attachments: HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, 
 HADOOP-10366.patch


 The io.serializations property in core-default.xml has a very long value on a 
 single line, as below
 {code}
 <property>
   <name>io.serializations</name>
   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
   <description>A list of serialization classes that can be used for
   obtaining serializers and deserializers.</description>
 </property>
 {code}
 which not only breaks the code style (a very long line) but also does not fit 
 well in a browser. Because of this single very long line, the description 
 column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) [Doc] wrap value of io.serializations in core-default.xml to fit better in browser

2015-04-06 Thread kanaka kumar avvaru (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481263#comment-14481263
 ] 

kanaka kumar avvaru commented on HADOOP-10366:
--

Attached a patch, HADOOP-10366-wrap01.patch, as per the suggestions in the 
earlier comments, wrapping the value at a line length of 80.
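The attached patch itself is not reproduced in this thread, but a wrapped value might look like the sketch below. This assumes the value is read with per-token trimming (e.g. via a trimmed-strings getter), so the inserted whitespace around the comma-separated class names is harmless:

```xml
<property>
  <name>io.serializations</name>
  <value>
    org.apache.hadoop.io.serializer.WritableSerialization,
    org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,
    org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
  </value>
  <description>A list of serialization classes that can be used for
  obtaining serializers and deserializers.</description>
</property>
```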

 [Doc] wrap value of io.serializations in core-default.xml to fit better in 
 browser
 --

 Key: HADOOP-10366
 URL: https://issues.apache.org/jira/browse/HADOOP-10366
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Chengwei Yang
Assignee: kanaka kumar avvaru
Priority: Trivial
  Labels: documentation, newbie, patch
 Attachments: HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, 
 HADOOP-10366.patch


 The io.serializations property in core-default.xml has a very long value on a 
 single line, as below
 {code}
 <property>
   <name>io.serializations</name>
   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
   <description>A list of serialization classes that can be used for
   obtaining serializers and deserializers.</description>
 </property>
 {code}
 which not only breaks the code style (a very long line) but also does not fit 
 well in a browser. Because of this single very long line, the description 
 column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-04-06 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481317#comment-14481317
 ] 

Colin Patrick McCabe commented on HADOOP-11758:
---

Thanks for attaching this.  I've been meaning to put up some more real-world 
data from my hdfs cluster myself.

It looks like we still have a problem with overly long span names in a few 
places... e.g., {{org.apache.hadoop.hdfs.protocol.ClientProtocol.complete}} 
should really be {{ClientProtocol#complete}}.  Let me see if I can find where 
it's doing this...

 Add options to filter out too much granular tracing spans
 -

 Key: HADOOP-11758
 URL: https://issues.apache.org/jira/browse/HADOOP-11758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tracing
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: testWriteTraceHooks-HDFS-8026.html, 
 testWriteTraceHooks.html


 in order to keep the span receiver's queue from spilling



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10366) [Doc] wrap value of io.serializations in core-default.xml to fit better in browser

2015-04-06 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HADOOP-10366:
-
Attachment: HADOOP-10366-wrap01

 [Doc] wrap value of io.serializations in core-default.xml to fit better in 
 browser
 --

 Key: HADOOP-10366
 URL: https://issues.apache.org/jira/browse/HADOOP-10366
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Chengwei Yang
Assignee: kanaka kumar avvaru
Priority: Trivial
  Labels: documentation, newbie, patch
 Attachments: HADOOP-10366-wrap01, HADOOP-10366.patch


 The io.serializations property in core-default.xml has a very long value on a 
 single line, as below
 {code}
 <property>
   <name>io.serializations</name>
   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
   <description>A list of serialization classes that can be used for
   obtaining serializers and deserializers.</description>
 </property>
 {code}
 which not only breaks the code style (a very long line) but also does not fit 
 well in a browser. Because of this single very long line, the description 
 column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10366) [Doc] wrap value of io.serializations in core-default.xml to fit better in browser

2015-04-06 Thread kanaka kumar avvaru (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

kanaka kumar avvaru updated HADOOP-10366:
-
Status: Patch Available  (was: Open)

 [Doc] wrap value of io.serializations in core-default.xml to fit better in 
 browser
 --

 Key: HADOOP-10366
 URL: https://issues.apache.org/jira/browse/HADOOP-10366
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Chengwei Yang
Assignee: kanaka kumar avvaru
Priority: Trivial
  Labels: documentation, newbie, patch
 Attachments: HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, 
 HADOOP-10366.patch


 The io.serializations property in core-default.xml has a very long value on a 
 single line, as below
 {code}
 <property>
   <name>io.serializations</name>
   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
   <description>A list of serialization classes that can be used for
   obtaining serializers and deserializers.</description>
 </property>
 {code}
 which not only breaks the code style (a very long line) but also does not fit 
 well in a browser. Because of this single very long line, the description 
 column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10366) [Doc] wrap value of io.serializations in core-default.xml to fit better in browser

2015-04-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10366?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481315#comment-14481315
 ] 

Hadoop QA commented on HADOOP-10366:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12723357/HADOOP-10366-wrap01.patch
  against trunk revision 53959e6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6067//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6067//console

This message is automatically generated.

 [Doc] wrap value of io.serializations in core-default.xml to fit better in 
 browser
 --

 Key: HADOOP-10366
 URL: https://issues.apache.org/jira/browse/HADOOP-10366
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 3.0.0
Reporter: Chengwei Yang
Assignee: kanaka kumar avvaru
Priority: Trivial
  Labels: documentation, newbie, patch
 Attachments: HADOOP-10366-wrap01, HADOOP-10366-wrap01.patch, 
 HADOOP-10366.patch


 The io.serializations property in core-default.xml has a very long value on a 
 single line, as below
 {code}
 <property>
   <name>io.serializations</name>
   <value>org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization</value>
   <description>A list of serialization classes that can be used for
   obtaining serializers and deserializers.</description>
 </property>
 {code}
 which not only breaks the code style (a very long line) but also does not fit 
 well in a browser. Because of this single very long line, the description 
 column cannot be shown in the browser by default.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-04-06 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481354#comment-14481354
 ] 

Colin Patrick McCabe commented on HADOOP-11731:
---

bq. Steve said: I don't think CHANGES.TXT works that well. We may think it 
does, but that's because without the tooling to validate it, stuff doesn't get 
added and so it can omit a lot of work. Then there's the problem of merging 
across branches, and dealing with race conditions/commit conflict between other 
people's work and yours.

Absolutely.  CHANGES.txt does not work that well for many reasons.  It's often 
incorrect, as Allen pointed out, since it's entirely manually created.  It 
creates many spurious merge conflicts.

bq. Tsz Wo Nicholas Sze wrote: Again, I do not oppose using the new tool. 
However, we do need a transition period to see if it indeed works well.

I agree.  However, given that we are going to be maintaining branch-2 for at 
least another 2 years (we don't even have a 3.x release roadmap yet), it seems 
like that transition period should be in the 2.8 timeframe.  It would also be 
nice to move to this system for point releases.  Otherwise, we end up doing 
CHANGES.txt anyway for backports.

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Fix For: 3.0.0

 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch, 
 HADOOP-11731-06.patch, HADOOP-11731-07.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11776) jdiff is broken in Hadoop 2

2015-04-06 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11776?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14481438#comment-14481438
 ] 

Li Lu commented on HADOOP-11776:


Thanks [~vinodkv] for the review and commit. Yes, we do have more work to do 
on tooling to check API compatibility. Given the current status, our long-term 
goal may be to replace jdiff with some other tool.

 jdiff is broken in Hadoop 2
 ---

 Key: HADOOP-11776
 URL: https://issues.apache.org/jira/browse/HADOOP-11776
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Li Lu
Assignee: Li Lu
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-11776-040115.patch


 Seems like we haven't touched the API files from jdiff under dev-support for 
 a while. For now we're missing the jdiff API files for Hadoop 2. We're also 
 missing YARN when generating the jdiff API files. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)