[jira] [Commented] (HADOOP-10388) Pure native hadoop client

2014-04-01 Thread Binglin Chang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956157#comment-13956157
 ] 

Binglin Chang commented on HADOOP-10388:


bq. We can even make the XML-reading code optional if you want.
Sure, if it's for compatibility I guess adding XML support is fine. But keeping strict 
compatibility means we may need to support all javax xml / hadoop config features, 
and I'm not sure libexpat/libxml2 support all of those; a lot of effort may be spent 
on this, so I think it is better to make it optional and do it later.

bq. Thread pools and async I/O, I'm afraid, are something we can't live without.
I also prefer async I/O and threads for performance reasons; the code 
I published on github already has a working HDFS client with read/write, and 
HDFSOutputStream uses an additional thread. 
What I was saying is that the use of more threads should be limited: in the java 
client, simply reading/writing an HDFS file uses too many threads (rpc socket 
read/write, data transfer socket read/write, other misc executors, lease renewer, etc.). 
Since we use async i/o, the thread count should be greatly reduced.
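The point about async I/O replacing per-socket threads can be sketched as follows. This is purely an illustration in Java NIO (the native client discussed here would use epoll/kqueue or similar, and the class name is invented for the example): one selector thread waits on many channels at once, so no dedicated reader thread is needed per RPC or data-transfer socket.

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

// Illustrative only: one selector thread multiplexing several channels,
// instead of one blocked reader thread per socket.
public class SingleThreadedIo {
    public static int readyCount() throws Exception {
        Selector selector = Selector.open();
        Pipe a = Pipe.open();
        Pipe b = Pipe.open();
        a.source().configureBlocking(false);
        b.source().configureBlocking(false);
        a.source().register(selector, SelectionKey.OP_READ);
        b.source().register(selector, SelectionKey.OP_READ);
        // Simulate data arriving on both "connections".
        a.sink().write(ByteBuffer.wrap(new byte[]{1}));
        b.sink().write(ByteBuffer.wrap(new byte[]{2}));
        int ready = selector.select(1000);  // a single thread waits on both channels
        selector.close();
        a.sink().close(); a.source().close();
        b.sink().close(); b.source().close();
        return ready;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("ready channels: " + readyCount());
    }
}
```

With this pattern the thread count stays constant as the number of open sockets grows, which is the reduction argued for above.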


 Pure native hadoop client
 -

 Key: HADOOP-10388
 URL: https://issues.apache.org/jira/browse/HADOOP-10388
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: HADOOP-10388
Reporter: Binglin Chang
Assignee: Colin Patrick McCabe

 A pure native hadoop client has the following use cases/advantages:
 1.  writing Yarn applications in C++
 2.  direct access to HDFS without the extra proxy overhead of the web/nfs 
 interfaces.
 3.  wrapping the native library to support more languages, e.g. Python
 4.  lightweight: a small footprint compared to the several hundred MB of the JDK 
 and hadoop libraries with their various dependencies.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10360) Use 2 network adapter In hdfs read and write

2014-04-01 Thread guodongdong (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956253#comment-13956253
 ] 

guodongdong commented on HADOOP-10360:
--

NIC bonding in a production environment has many problems. Does any company 
actually use NIC bonding in production?

 Use 2 network adapter In hdfs read and write
 

 Key: HADOOP-10360
 URL: https://issues.apache.org/jira/browse/HADOOP-10360
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: guodongdong
Priority: Minor
 Fix For: 2.4.0








[jira] [Commented] (HADOOP-10451) Remove unused field and imports from SaslRpcServer

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956400#comment-13956400
 ] 

Hudson commented on HADOOP-10451:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #526 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/526/])
HADOOP-10451. Remove unused field and imports from SaslRpcServer. Contributed 
by Benoy Antony. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583393)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java


 Remove unused field and imports from SaslRpcServer
 --

 Key: HADOOP-10451
 URL: https://issues.apache.org/jira/browse/HADOOP-10451
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Trivial
 Fix For: 2.5.0

 Attachments: HADOOP-10451.patch


 There are unused fields and imports remaining in SaslRpcServer.
 This jira is to clean up those fields and imports.





[jira] [Commented] (HADOOP-10345) Sanitize the inputs (groups and hosts) for the proxyuser configuration

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956403#comment-13956403
 ] 

Hudson commented on HADOOP-10345:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #526 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/526/])
HADOOP-10345. Sanitize the inputs (groups and hosts) for the proxyuser 
configuration. Contributed by Benoy Antony. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583454)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ProxyUsers.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java


 Sanitize the inputs (groups and hosts) for the proxyuser configuration
 --

 Key: HADOOP-10345
 URL: https://issues.apache.org/jira/browse/HADOOP-10345
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 2.5.0

 Attachments: HADOOP-10345.patch, HADOOP-10345.patch, 
 HADOOP-10345.patch, HADOOP-10345.patch, HADOOP-10345.patch


 Currently no input cleansing is done on 
 hadoop.proxyuser.user-name.groups and hadoop.proxyuser.user-name.hosts.
 It would be an improvement to trim each value and remove duplicate and empty 
 values during init/refresh.
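The cleansing described above (trim, drop empties, drop duplicates) can be sketched like this. This is a hedged illustration, not the actual patch; the class and method names are invented, and the real change reportedly lives in Hadoop's StringUtils/ProxyUsers classes.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Illustrative sketch of sanitizing a comma-separated proxyuser config value:
// trim each entry, skip empty entries, and drop duplicates while keeping order.
public class ProxyUserSanitizer {
    public static Set<String> sanitize(String rawValue) {
        Set<String> values = new LinkedHashSet<>();
        for (String v : rawValue.split(",")) {
            String trimmed = v.trim();
            if (!trimmed.isEmpty()) {
                values.add(trimmed);  // LinkedHashSet ignores duplicates
            }
        }
        return values;
    }

    public static void main(String[] args) {
        // e.g. a sloppy hadoop.proxyuser.<user>.groups value
        System.out.println(sanitize(" groupA , groupB,, groupA "));
    }
}
```

Running the sanitizer at init/refresh time, as proposed, means downstream ACL checks never see padded or empty entries.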





[jira] [Commented] (HADOOP-10451) Remove unused field and imports from SaslRpcServer

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956489#comment-13956489
 ] 

Hudson commented on HADOOP-10451:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1744 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1744/])
HADOOP-10451. Remove unused field and imports from SaslRpcServer. Contributed 
by Benoy Antony. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583393)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java


 Remove unused field and imports from SaslRpcServer
 --

 Key: HADOOP-10451
 URL: https://issues.apache.org/jira/browse/HADOOP-10451
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Trivial
 Fix For: 2.5.0

 Attachments: HADOOP-10451.patch


 There are unused fields and imports remaining in SaslRpcServer.
 This jira is to clean up those fields and imports.





[jira] [Commented] (HADOOP-10345) Sanitize the inputs (groups and hosts) for the proxyuser configuration

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956492#comment-13956492
 ] 

Hudson commented on HADOOP-10345:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1744 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1744/])
HADOOP-10345. Sanitize the inputs (groups and hosts) for the proxyuser 
configuration. Contributed by Benoy Antony. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583454)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ProxyUsers.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java


 Sanitize the inputs (groups and hosts) for the proxyuser configuration
 --

 Key: HADOOP-10345
 URL: https://issues.apache.org/jira/browse/HADOOP-10345
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 2.5.0

 Attachments: HADOOP-10345.patch, HADOOP-10345.patch, 
 HADOOP-10345.patch, HADOOP-10345.patch, HADOOP-10345.patch


 Currently no input cleansing is done on 
 hadoop.proxyuser.user-name.groups and hadoop.proxyuser.user-name.hosts.
 It would be an improvement to trim each value and remove duplicate and empty 
 values during init/refresh.





[jira] [Created] (HADOOP-10454) Provide FileContext version of har file system

2014-04-01 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-10454:
---

 Summary: Provide FileContext version of har file system
 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee


Add support for HarFs, the FileContext version of HarFileSystem.





[jira] [Commented] (HADOOP-10451) Remove unused field and imports from SaslRpcServer

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956520#comment-13956520
 ] 

Hudson commented on HADOOP-10451:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1718 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1718/])
HADOOP-10451. Remove unused field and imports from SaslRpcServer. Contributed 
by Benoy Antony. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583393)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslRpcServer.java


 Remove unused field and imports from SaslRpcServer
 --

 Key: HADOOP-10451
 URL: https://issues.apache.org/jira/browse/HADOOP-10451
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.3.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Trivial
 Fix For: 2.5.0

 Attachments: HADOOP-10451.patch


 There are unused fields and imports remaining in SaslRpcServer.
 This jira is to clean up those fields and imports.





[jira] [Commented] (HADOOP-10345) Sanitize the inputs (groups and hosts) for the proxyuser configuration

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956523#comment-13956523
 ] 

Hudson commented on HADOOP-10345:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1718 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1718/])
HADOOP-10345. Sanitize the inputs (groups and hosts) for the proxyuser 
configuration. Contributed by Benoy Antony. (jing9: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583454)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/ProxyUsers.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/StringUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestStringUtils.java


 Sanitize the inputs (groups and hosts) for the proxyuser configuration
 --

 Key: HADOOP-10345
 URL: https://issues.apache.org/jira/browse/HADOOP-10345
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.2.0
Reporter: Benoy Antony
Assignee: Benoy Antony
Priority: Minor
 Fix For: 2.5.0

 Attachments: HADOOP-10345.patch, HADOOP-10345.patch, 
 HADOOP-10345.patch, HADOOP-10345.patch, HADOOP-10345.patch


 Currently no input cleansing is done on 
 hadoop.proxyuser.user-name.groups and hadoop.proxyuser.user-name.hosts.
 It would be an improvement to trim each value and remove duplicate and empty 
 values during init/refresh.





[jira] [Updated] (HADOOP-10454) Provide FileContext version of har file system

2014-04-01 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10454:


Attachment: HADOOP-10454.patch

[~knoguchi] and [~jlowe] did the actual work.

 Provide FileContext version of har file system
 --

 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
 Attachments: HADOOP-10454.patch


 Add support for HarFs, the FileContext version of HarFileSystem.





[jira] [Updated] (HADOOP-10454) Provide FileContext version of har file system

2014-04-01 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee updated HADOOP-10454:


Assignee: Kihwal Lee
  Status: Patch Available  (was: Open)

 Provide FileContext version of har file system
 --

 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-10454.patch


 Add support for HarFs, the FileContext version of HarFileSystem.





[jira] [Comment Edited] (HADOOP-10454) Provide FileContext version of har file system

2014-04-01 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956530#comment-13956530
 ] 

Kihwal Lee edited comment on HADOOP-10454 at 4/1/14 2:11 PM:
-

Attaching the patch. [~knoguchi] and [~jlowe] did the actual work.


was (Author: kihwal):
[~knoguchi] and [~jlowe] did the actual work.

 Provide FileContext version of har file system
 --

 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-10454.patch


 Add support for HarFs, the FileContext version of HarFileSystem.





[jira] [Commented] (HADOOP-10454) Provide FileContext version of har file system

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956587#comment-13956587
 ] 

Hadoop QA commented on HADOOP-10454:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638067/HADOOP-10454.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this patch.
Also please list what manual steps were performed to verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3732//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3732//console

This message is automatically generated.

 Provide FileContext version of har file system
 --

 Key: HADOOP-10454
 URL: https://issues.apache.org/jira/browse/HADOOP-10454
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Attachments: HADOOP-10454.patch


 Add support for HarFs, the FileContext version of HarFileSystem.





[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-04-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Attachment: HADOOP-9361-009.patch

-009 patch; specifies {{FSDataInputStream}}, including 
{{PositionedReadable}}

# the base implementations of those methods don't check for negative values
# although the javadocs say thread-safe, not all of the implementations are

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 2.2.0, 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough: while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't get tested.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface being in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency; test 
 methods can downgrade to skipped test cases if a feature is missing.
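Point 3 in the proposal above might work roughly as follows. This is a hedged sketch: the `fs.contract.*` key names and the `ContractOptions` class are invented for illustration, and the actual mechanism in the attached patches may differ.

```java
import java.io.StringReader;
import java.util.Properties;

// Sketch: each filesystem ships a properties file declaring its behaviors;
// a test consults the flag and downgrades to "skipped" when the feature is
// absent. Key names here are illustrative, not the ones in the patch.
public class ContractOptions {
    static boolean supports(Properties contract, String feature) {
        // Unknown features default to unsupported, so new tests are skipped
        // rather than failed on filesystems that haven't declared them.
        return Boolean.parseBoolean(
            contract.getProperty("fs.contract." + feature, "false"));
    }

    public static void main(String[] args) throws Exception {
        Properties blobstore = new Properties();
        blobstore.load(new StringReader(
            "fs.contract.atomic-rename=false\n"
          + "fs.contract.atomic-delete=true\n"));
        String verdict = supports(blobstore, "atomic-rename")
            ? "run rename tests" : "skip rename tests";
        System.out.println(verdict);
    }
}
```

In a JUnit4 test class the same check would typically feed an assumption (e.g. `Assume.assumeTrue(...)`) so the runner reports the case as skipped rather than failed.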





[jira] [Updated] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-04-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9361:
---

Status: Patch Available  (was: Open)

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 2.2.0, 3.0.0, 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough: while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't get tested.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface being in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency; test 
 methods can downgrade to skipped test cases if a feature is missing.





[jira] [Commented] (HADOOP-9361) Strictly define the expected behavior of filesystem APIs and write tests to verify compliance

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956660#comment-13956660
 ] 

Hadoop QA commented on HADOOP-9361:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638079/HADOOP-9361-009.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 75 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3733//console

This message is automatically generated.

 Strictly define the expected behavior of filesystem APIs and write tests to 
 verify compliance
 -

 Key: HADOOP-9361
 URL: https://issues.apache.org/jira/browse/HADOOP-9361
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, test
Affects Versions: 3.0.0, 2.2.0, 2.4.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9361-001.patch, HADOOP-9361-002.patch, 
 HADOOP-9361-003.patch, HADOOP-9361-004.patch, HADOOP-9361-005.patch, 
 HADOOP-9361-006.patch, HADOOP-9361-007.patch, HADOOP-9361-008.patch, 
 HADOOP-9361-009.patch


 {{FileSystem}} and {{FileContract}} aren't tested rigorously enough: while 
 HDFS gets tested downstream, other filesystems, such as blobstore bindings, 
 don't get tested.
 The only tests that are common are those of {{FileSystemContractTestBase}}, 
 which HADOOP-9258 shows is incomplete.
 I propose 
 # writing more tests which clarify expected behavior
 # testing operations in the interface being in their own JUnit4 test classes, 
 instead of one big test suite. 
 # Having each FS declare via a properties file what behaviors they offer, 
 such as atomic-rename, atomic-delete, umask, immediate-consistency; test 
 methods can downgrade to skipped test cases if a feature is missing.





[jira] [Commented] (HADOOP-10360) Use 2 network adapter In hdfs read and write

2014-04-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956687#comment-13956687
 ] 

Steve Loughran commented on HADOOP-10360:
-

Many companies use 2x1 bonding; I know of at least two using 4x1 bonding. If you 
are having problems with bonding then you are using older OS/NIC firmware, or your 
switch vendor lied when they said they supported it.

Closing as wontfix.

 Use 2 network adapter In hdfs read and write
 

 Key: HADOOP-10360
 URL: https://issues.apache.org/jira/browse/HADOOP-10360
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: guodongdong
Priority: Minor
 Fix For: 2.4.0








[jira] [Resolved] (HADOOP-10360) Use 2 network adapter In hdfs read and write

2014-04-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-10360.
-

Resolution: Won't Fix

 Use 2 network adapter In hdfs read and write
 

 Key: HADOOP-10360
 URL: https://issues.apache.org/jira/browse/HADOOP-10360
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: guodongdong
Priority: Minor
 Fix For: 2.4.0








[jira] [Updated] (HADOOP-10414) Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml

2014-04-01 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-10414:


Description: In HDFS-1096 and MAPREDUCE-1836, the name of the ACL property 
for the RefreshUserMappingsProtocol service changed from 
security.refresh.usertogroups.mappings.protocol.acl to 
security.refresh.user.mappings.protocol.acl, but the example in 
hadoop-policy.xml was not updated. The example should be fixed to avoid 
confusion.  (was: In HDFS-1096 and MAPREDUCE-1836, the name of the ACL property 
for the RefreshUserMappingsProtocol service changed form 
security.refresh.usertogroups.mappings.protocol.acl to 
security.refresh.user.mappings.protocol.acl, but the example in 
hadoop-policy.xml was not updated. The example should be fixed to avoid 
confusion.)

 Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml
 ---

 Key: HADOOP-10414
 URL: https://issues.apache.org/jira/browse/HADOOP-10414
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.3.0
Reporter: Joey Echeverria
Assignee: Joey Echeverria
 Attachments: HADOOP-10414.patch


 In HDFS-1096 and MAPREDUCE-1836, the name of the ACL property for the 
 RefreshUserMappingsProtocol service changed from 
 security.refresh.usertogroups.mappings.protocol.acl to 
 security.refresh.user.mappings.protocol.acl, but the example in 
 hadoop-policy.xml was not updated. The example should be fixed to avoid 
 confusion.
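For reference, the corrected entry in hadoop-policy.xml would look roughly like the following. The property name comes from the description above; the `<value>` and description text are illustrative, not quoted from the file.

```xml
<property>
  <name>security.refresh.user.mappings.protocol.acl</name>
  <value>*</value>
  <description>ACL for RefreshUserMappingsProtocol, used to refresh
  user-to-group mappings. The ACL is a comma-separated list of user
  and group names.</description>
</property>
```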





[jira] [Commented] (HADOOP-10414) Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml

2014-04-01 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956709#comment-13956709
 ] 

Aaron T. Myers commented on HADOOP-10414:
-

I agree - no test needed for this simple fix.

+1, the patch looks good to me. I'm going to commit this momentarily.

 Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml
 ---

 Key: HADOOP-10414
 URL: https://issues.apache.org/jira/browse/HADOOP-10414
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.3.0
Reporter: Joey Echeverria
Assignee: Joey Echeverria
 Attachments: HADOOP-10414.patch


 In HDFS-1096 and MAPREDUCE-1836, the name of the ACL property for the 
 RefreshUserMappingsProtocol service changed from 
 security.refresh.usertogroups.mappings.protocol.acl to 
 security.refresh.user.mappings.protocol.acl, but the example in 
 hadoop-policy.xml was not updated. The example should be fixed to avoid 
 confusion.





[jira] [Updated] (HADOOP-10414) Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml

2014-04-01 Thread Aaron T. Myers (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron T. Myers updated HADOOP-10414:


  Resolution: Fixed
   Fix Version/s: 2.5.0
Target Version/s: 2.5.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I've just committed this to trunk and branch-2.

Thanks a lot for the contribution, Joey.

 Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml
 ---

 Key: HADOOP-10414
 URL: https://issues.apache.org/jira/browse/HADOOP-10414
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.3.0
Reporter: Joey Echeverria
Assignee: Joey Echeverria
 Fix For: 2.5.0

 Attachments: HADOOP-10414.patch


 In HDFS-1096 and MAPREDUCE-1836, the name of the ACL property for the 
 RefreshUserMappingsProtocol service changed from 
 security.refresh.usertogroups.mappings.protocol.acl to 
 security.refresh.user.mappings.protocol.acl, but the example in 
 hadoop-policy.xml was not updated. The example should be fixed to avoid 
 confusion.





[jira] [Commented] (HADOOP-10414) Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13956719#comment-13956719
 ] 

Hudson commented on HADOOP-10414:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5442 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5442/])
HADOOP-10414. Incorrect property name for RefreshUserMappingProtocol in 
hadoop-policy.xml. Contributed by Joey Echeverria. (atm: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583729)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-policy.xml


 Incorrect property name for RefreshUserMappingProtocol in hadoop-policy.xml
 ---

 Key: HADOOP-10414
 URL: https://issues.apache.org/jira/browse/HADOOP-10414
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.3.0
Reporter: Joey Echeverria
Assignee: Joey Echeverria
 Fix For: 2.5.0

 Attachments: HADOOP-10414.patch


 In HDFS-1096 and MAPREDUCE-1836, the name of the ACL property for the 
 RefreshUserMappingsProtocol service changed from 
 security.refresh.usertogroups.mappings.protocol.acl to 
 security.refresh.user.mappings.protocol.acl, but the example in 
 hadoop-policy.xml was not updated. The example should be fixed to avoid 
 confusion.





[jira] [Commented] (HADOOP-10409) Bzip2 error message isn't clear

2014-04-01 Thread Travis Thompson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957048#comment-13957048
 ] 

Travis Thompson commented on HADOOP-10409:
--

I'll be updating Building.txt in HADOOP-10452 shortly.

 Bzip2 error message isn't clear
 ---

 Key: HADOOP-10409
 URL: https://issues.apache.org/jira/browse/HADOOP-10409
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.3.0
Reporter: Travis Thompson

 If you compile hadoop without {{bzip2-devel}} installed (on RHEL), bzip2 
 doesn't get compiled into libhadoop, as is expected.  This is not documented 
 however and the error message thrown from {{hadoop checknative -a}} is not 
 helpful.
 {noformat}
 [tthompso@eat1-hcl4060 bin]$ hadoop checknative -a
 14/03/13 00:51:02 WARN bzip2.Bzip2Factory: Failed to load/initialize 
 native-bzip2 library system-native, will use pure-Java version
 14/03/13 00:51:02 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.li7-1-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  false 
 14/03/13 00:51:02 INFO util.ExitUtil: Exiting with status 1
 {noformat}
 You can see that it wasn't compiled in here:
 {noformat}
 [mislam@eat1-hcl4060 ~]$ strings 
 /export/apps/hadoop/latest/lib/native/libhadoop.so | grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 {noformat}
 After installing bzip2-devel and recompiling:
 {noformat}
 [tthompso@eat1-hcl4060 ~]$ hadoop checknative -a
 14/03/14 23:00:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized 
 native-bzip2 library system-native
 14/03/14 23:00:08 INFO zlib.ZlibFactory: Successfully loaded & initialized 
 native-zlib library
 Native library checking:
 hadoop: true 
 /export/apps/hadoop/hadoop-2.3.0.11-2-bin/lib/native/libhadoop.so.1.0.0
 zlib:   true /lib64/libz.so.1
 snappy: true /usr/lib64/libsnappy.so.1
 lz4:true revision:99
 bzip2:  true /lib64/libbz2.so.1
 {noformat}
 {noformat}
 tthompso@esv4-hcl261:~/hadoop-common(li-2.3.0⚡) » strings 
 ./hadoop-common-project/hadoop-common/target/native/target/usr/local/lib/libhadoop.so
  |grep initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Compressor_initIDs
 Java_org_apache_hadoop_io_compress_lz4_Lz4Decompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyCompressor_initIDs
 Java_org_apache_hadoop_io_compress_snappy_SnappyDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibCompressor_initIDs
 Java_org_apache_hadoop_io_compress_zlib_ZlibDecompressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Compressor_initIDs
 Java_org_apache_hadoop_io_compress_bzip2_Bzip2Decompressor_initIDs
 {noformat}
 The error message thrown should hint that libhadoop may not have been compiled 
 with the bzip2 headers installed.  It would also be nice if compile-time 
 dependencies were documented somewhere... :)





[jira] [Created] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is an terse exception

2014-04-01 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HADOOP-10455:


 Summary: When there is an exception, ipc.Server should first check 
whether it is an terse exception
 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


ipc.Server allows application servers to define terse exceptions; see 
Server.addTerseExceptions.  For a terse exception, the server prints only a short 
message, not the stack trace.  However, if an exception is both a RuntimeException 
and a terse exception, the server currently still prints the full stack trace.
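
A hypothetical sketch of the ordering fix described above (simplified; names and the logging shape are illustrative, not the actual Server.java code): consult the terse-exception set *before* the RuntimeException check, so a terse RuntimeException logs only a one-line message.

```java
import java.util.HashSet;
import java.util.Set;

public class TerseExceptionDemo {
    // Set of exception classes registered as "terse" (cf. addTerseExceptions)
    private static final Set<Class<?>> terseExceptions = new HashSet<>();

    static void addTerseExceptions(Class<?>... classes) {
        for (Class<?> c : classes) {
            terseExceptions.add(c);
        }
    }

    // Decide how an exception would be logged: isTerse is checked FIRST,
    // so a terse RuntimeException never falls into the stack-trace branch.
    static String logLineFor(Throwable t) {
        if (terseExceptions.contains(t.getClass())) {
            // terse: short one-line message only
            return t.getClass().getSimpleName() + ": " + t.getMessage();
        } else if (t instanceof RuntimeException) {
            // non-terse runtime exception: log the full stack trace
            return "FULL STACK TRACE of " + t.getClass().getSimpleName();
        } else {
            return t.toString();
        }
    }

    public static void main(String[] args) {
        addTerseExceptions(IllegalArgumentException.class);
        // terse RuntimeException -> short message, because isTerse wins
        System.out.println(logLineFor(new IllegalArgumentException("bad arg")));
        // non-terse RuntimeException -> stack-trace branch
        System.out.println(logLineFor(new IllegalStateException("oops")));
    }
}
```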





[jira] [Updated] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is an terse exception

2014-04-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10455:
-

Status: Patch Available  (was: Open)

 When there is an exception, ipc.Server should first check whether it is an 
 terse exception
 --

 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: c10455_20140401.patch


 ipc.Server allows application servers to define terse exceptions; see 
 Server.addTerseExceptions.  For terse exception, it only prints a short 
 message but not the stack trace.  However, if an exception is both 
 RuntimeException and terse exception, it still prints out the stack trace of 
 the exception.





[jira] [Updated] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is an terse exception

2014-04-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10455:
-

Attachment: c10455_20140401.patch

c10455_20140401.patch: check isTerse first.

 When there is an exception, ipc.Server should first check whether it is an 
 terse exception
 --

 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: c10455_20140401.patch


 ipc.Server allows application servers to define terse exceptions; see 
 Server.addTerseExceptions.  For terse exception, it only prints a short 
 message but not the stack trace.  However, if an exception is both 
 RuntimeException and terse exception, it still prints out the stack trace of 
 the exception.





[jira] [Commented] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is an terse exception

2014-04-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957129#comment-13957129
 ] 

Hadoop QA commented on HADOOP-10455:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12638143/c10455_20140401.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3734//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/3734//console

This message is automatically generated.

 When there is an exception, ipc.Server should first check whether it is an 
 terse exception
 --

 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: c10455_20140401.patch


 ipc.Server allows application servers to define terse exceptions; see 
 Server.addTerseExceptions.  For terse exception, it only prints a short 
 message but not the stack trace.  However, if an exception is both 
 RuntimeException and terse exception, it still prints out the stack trace of 
 the exception.





[jira] [Commented] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is an terse exception

2014-04-01 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957197#comment-13957197
 ] 

Jing Zhao commented on HADOOP-10455:


Only one nit: there are two spaces before "Way". +1 with or without changing 
this.
{code}
// Don't log the whole stack trace.  Way too noisy!
{code}

 When there is an exception, ipc.Server should first check whether it is an 
 terse exception
 --

 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: c10455_20140401.patch


 ipc.Server allows application servers to define terse exceptions; see 
 Server.addTerseExceptions.  For terse exception, it only prints a short 
 message but not the stack trace.  However, if an exception is both 
 RuntimeException and terse exception, it still prints out the stack trace of 
 the exception.





[jira] [Updated] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is an terse exception

2014-04-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10455:
-

Attachment: c10455_20140401b.patch

Thanks Jing.  Here is a patch with the extra space removed.  Since it is only a 
whitespace change, I will commit it without waiting for Jenkins.

c10455_20140401b.patch

 When there is an exception, ipc.Server should first check whether it is an 
 terse exception
 --

 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: c10455_20140401.patch, c10455_20140401b.patch


 ipc.Server allows application servers to define terse exceptions; see 
 Server.addTerseExceptions.  For terse exception, it only prints a short 
 message but not the stack trace.  However, if an exception is both 
 RuntimeException and terse exception, it still prints out the stack trace of 
 the exception.





[jira] [Updated] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is an terse exception

2014-04-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze updated HADOOP-10455:
-

Status: Open  (was: Patch Available)

 When there is an exception, ipc.Server should first check whether it is an 
 terse exception
 --

 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Attachments: c10455_20140401.patch, c10455_20140401b.patch


 ipc.Server allows application servers to define terse exceptions; see 
 Server.addTerseExceptions.  For terse exception, it only prints a short 
 message but not the stack trace.  However, if an exception is both 
 RuntimeException and terse exception, it still prints out the stack trace of 
 the exception.





[jira] [Resolved] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is an terse exception

2014-04-01 Thread Tsz Wo Nicholas Sze (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo Nicholas Sze resolved HADOOP-10455.
--

   Resolution: Fixed
Fix Version/s: 2.4.1
 Hadoop Flags: Reviewed

I have committed this.

 When there is an exception, ipc.Server should first check whether it is an 
 terse exception
 --

 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.4.1

 Attachments: c10455_20140401.patch, c10455_20140401b.patch


 ipc.Server allows application servers to define terse exceptions; see 
 Server.addTerseExceptions.  For terse exception, it only prints a short 
 message but not the stack trace.  However, if an exception is both 
 RuntimeException and terse exception, it still prints out the stack trace of 
 the exception.





[jira] [Commented] (HADOOP-10455) When there is an exception, ipc.Server should first check whether it is an terse exception

2014-04-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13957217#comment-13957217
 ] 

Hudson commented on HADOOP-10455:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #5444 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5444/])
HADOOP-10455. When there is an exception, ipc.Server should first check whether 
it is an terse exception. (szetszwo: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1583842)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java


 When there is an exception, ipc.Server should first check whether it is an 
 terse exception
 --

 Key: HADOOP-10455
 URL: https://issues.apache.org/jira/browse/HADOOP-10455
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
 Fix For: 2.4.1

 Attachments: c10455_20140401.patch, c10455_20140401b.patch


 ipc.Server allows application servers to define terse exceptions; see 
 Server.addTerseExceptions.  For terse exception, it only prints a short 
 message but not the stack trace.  However, if an exception is both 
 RuntimeException and terse exception, it still prints out the stack trace of 
 the exception.





[jira] [Created] (HADOOP-10456) Bug in Configuration.java exposed by Spark (ConcurrentModificationException)

2014-04-01 Thread Nishkam Ravi (JIRA)
Nishkam Ravi created HADOOP-10456:
-

 Summary: Bug in Configuration.java exposed by Spark 
(ConcurrentModificationException)
 Key: HADOOP-10456
 URL: https://issues.apache.org/jira/browse/HADOOP-10456
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Nishkam Ravi


The following exception occurs non-deterministically:
java.util.ConcurrentModificationException
at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
at java.util.HashMap$KeyIterator.next(HashMap.java:960)
at java.util.AbstractCollection.addAll(AbstractCollection.java:341)
at java.util.HashSet.<init>(HashSet.java:117)
at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:671)
at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:439)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:110)
at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:154)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
at 
org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
at 
org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
at org.apache.spark.scheduler.Task.run(Task.scala:53)
at 
org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
at 
org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
at 
org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at 
org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
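
The failure mode in the trace can be reproduced in miniature (a hypothetical simplification: the real race is Configuration's copy constructor iterating one configuration's properties while another thread mutates them; here a single thread modifies a HashMap mid-iteration to trigger the same fail-fast check, then a snapshot-before-iterating approach avoids it):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class ConfCopyDemo {
    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        props.put("a", "1");
        props.put("b", "2");

        // Structural modification during iteration -> fail-fast iterator
        // throws ConcurrentModificationException, as in the Spark trace.
        boolean threw = false;
        try {
            for (String k : props.keySet()) {
                props.put("c", "3"); // adds a new key mid-iteration
            }
        } catch (ConcurrentModificationException e) {
            threw = true;
        }
        System.out.println("unsynchronized copy threw CME: " + threw);

        // One common remedy: take a snapshot of the map (under a lock in the
        // multi-threaded case) and iterate the snapshot instead.
        Map<String, String> snapshot;
        synchronized (props) {
            snapshot = new HashMap<>(props);
        }
        for (String k : snapshot.keySet()) {
            props.put("d", "4"); // safe: we iterate the snapshot, not props
        }
        System.out.println("snapshot copy completed: " + snapshot.size()
                + " entries");
    }
}
```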






[jira] [Commented] (HADOOP-10456) Bug in Configuration.java exposed by Spark (ConcurrentModificationException)

2014-04-01 Thread Nishkam Ravi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13957239#comment-13957239
 ] 

Nishkam Ravi commented on HADOOP-10456:
---

Patch attached. 

 Bug in Configuration.java exposed by Spark (ConcurrentModificationException)
 

 Key: HADOOP-10456
 URL: https://issues.apache.org/jira/browse/HADOOP-10456
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Nishkam Ravi
 Attachments: nravi_Conf_Spark-1388.patch


 The following exception occurs non-deterministically:
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
 at java.util.HashMap$KeyIterator.next(HashMap.java:960)
 at java.util.AbstractCollection.addAll(AbstractCollection.java:341)
 at java.util.HashSet.<init>(HashSet.java:117)
 at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:671)
 at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:439)
 at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:110)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:154)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
 at org.apache.spark.scheduler.Task.run(Task.scala:53)
 at 
 org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at 
 org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
 at 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)





[jira] [Updated] (HADOOP-10456) Bug in Configuration.java exposed by Spark (ConcurrentModificationException)

2014-04-01 Thread Nishkam Ravi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nishkam Ravi updated HADOOP-10456:
--

Attachment: nravi_Conf_Spark-1388.patch

 Bug in Configuration.java exposed by Spark (ConcurrentModificationException)
 

 Key: HADOOP-10456
 URL: https://issues.apache.org/jira/browse/HADOOP-10456
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Nishkam Ravi
 Attachments: nravi_Conf_Spark-1388.patch


 The following exception occurs non-deterministically:
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:926)
 at java.util.HashMap$KeyIterator.next(HashMap.java:960)
 at java.util.AbstractCollection.addAll(AbstractCollection.java:341)
 at java.util.HashSet.<init>(HashSet.java:117)
 at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:671)
 at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:439)
 at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:110)
 at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:154)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:149)
 at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:64)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:34)
 at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:241)
 at org.apache.spark.rdd.RDD.iterator(RDD.scala:232)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
 at 
 org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
 at org.apache.spark.scheduler.Task.run(Task.scala:53)
 at 
 org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:213)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
 at 
 org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at 
 org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
 at 
 org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
 at 
 org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:178)
 at 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:744)


