[jira] [Resolved] (HADOOP-10165) TestMetricsSystemImpl#testMultiThreadedPublish occasionally fails

2013-12-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-10165.


Resolution: Duplicate

Closing this issue as a duplicate.

 TestMetricsSystemImpl#testMultiThreadedPublish occasionally fails
 -

 Key: HADOOP-10165
 URL: https://issues.apache.org/jira/browse/HADOOP-10165
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Priority: Minor

 From 
 https://builds.apache.org/job/Hadoop-Common-trunk/982/testReport/junit/org.apache.hadoop.metrics2.impl/TestMetricsSystemImpl/testMultiThreadedPublish/
  :
 {code}
 Error Message
 Passed
 Passed
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Passed
 Stacktrace
 java.lang.AssertionError: Passed
 Passed
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Metric not collected!
 Passed
   at org.junit.Assert.fail(Assert.java:93)
   at org.junit.Assert.assertTrue(Assert.java:43)
   at 
 org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl.testMultiThreadedPublish(TestMetricsSystemImpl.java:233)
 Standard Output
 2013-12-15 09:14:49,144 INFO  impl.MetricsConfig 
 (MetricsConfig.java:loadFirst(111)) - loaded properties from 
 hadoop-metrics2-test.properties
 2013-12-15 09:14:49,146 INFO  impl.MetricsSystemImpl 
 (MetricsSystemImpl.java:startTimer(341)) - Scheduled snapshot period at 80 
 second(s).
 2013-12-15 09:14:49,146 INFO  impl.MetricsSystemImpl 
 (MetricsSystemImpl.java:start(183)) - Test metrics system started
 2013-12-15 09:14:49,147 INFO  impl.MetricsSinkAdapter 
 (MetricsSinkAdapter.java:start(190)) - Sink Collector started
 2013-12-15 09:14:49,147 INFO  impl.MetricsSystemImpl 
 (MetricsSystemImpl.java:registerSink(275)) - Registered sink Collector
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.4#6159)


[jira] [Assigned] (HADOOP-10104) update jackson

2014-01-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-10104:
--

Assignee: Akira AJISAKA

 update jackson
 --

 Key: HADOOP-10104
 URL: https://issues.apache.org/jira/browse/HADOOP-10104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor

 Jackson is now at 1.9.13, 
 [apparently|http://mvnrepository.com/artifact/org.codehaus.jackson/jackson-core-asl],
  while Hadoop 2.2 is at 1.8.8.
 Jackson isn't used that much in the code, so the risk from an update *should* be 
 low.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10104) update jackson

2014-01-05 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10104:
---

Attachment: HADOOP-10104.patch

Attaching a patch. I'll run all the unit tests locally.

 update jackson
 --

 Key: HADOOP-10104
 URL: https://issues.apache.org/jira/browse/HADOOP-10104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10104.patch


 Jackson is now at 1.9.13, 
 [apparently|http://mvnrepository.com/artifact/org.codehaus.jackson/jackson-core-asl],
  while Hadoop 2.2 is at 1.8.8.
 Jackson isn't used that much in the code, so the risk from an update *should* be 
 low.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10104) update jackson

2014-01-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13863899#comment-13863899
 ] 

Akira AJISAKA commented on HADOOP-10104:


I built with the patch and confirmed that all the tests passed.

 update jackson
 --

 Key: HADOOP-10104
 URL: https://issues.apache.org/jira/browse/HADOOP-10104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10104.patch


 Jackson is now at 1.9.13, 
 [apparently|http://mvnrepository.com/artifact/org.codehaus.jackson/jackson-core-asl],
  while Hadoop 2.2 is at 1.8.8.
 Jackson isn't used that much in the code, so the risk from an update *should* be 
 low.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10104) update jackson

2014-01-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10104:
---

 Target Version/s: 2.4.0
Affects Version/s: 2.2.0
   Status: Patch Available  (was: Open)

 update jackson
 --

 Key: HADOOP-10104
 URL: https://issues.apache.org/jira/browse/HADOOP-10104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.2.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10104.patch


 Jackson is now at 1.9.13, 
 [apparently|http://mvnrepository.com/artifact/org.codehaus.jackson/jackson-core-asl],
  while Hadoop 2.2 is at 1.8.8.
 Jackson isn't used that much in the code, so the risk from an update *should* be 
 low.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10104) update jackson

2014-01-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10104:
---

Attachment: HADOOP-10104.2.patch

Fixed javadoc warnings.

 update jackson
 --

 Key: HADOOP-10104
 URL: https://issues.apache.org/jira/browse/HADOOP-10104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.2.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10104.2.patch, HADOOP-10104.patch


 Jackson is now at 1.9.13, 
 [apparently|http://mvnrepository.com/artifact/org.codehaus.jackson/jackson-core-asl],
  while Hadoop 2.2 is at 1.8.8.
 Jackson isn't used that much in the code, so the risk from an update *should* be 
 low.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10139) Single Cluster Setup document is unfriendly

2014-01-07 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10139:
---

Attachment: HADOOP-10139.patch

Attaching a patch.
I used SingleNodeSetup.apt.vm as a reference and applied MRv2 config.

 Single Cluster Setup document is unfriendly
 ---

 Key: HADOOP-10139
 URL: https://issues.apache.org/jira/browse/HADOOP-10139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-10139.patch


 The document should be understandable to a newcomer, because the first thing 
 a newcomer will do is set up a single node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10139) Single Cluster Setup document is unfriendly

2014-01-07 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10139:
---

Status: Patch Available  (was: Open)

 Single Cluster Setup document is unfriendly
 ---

 Key: HADOOP-10139
 URL: https://issues.apache.org/jira/browse/HADOOP-10139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-10139.patch


 The document should be understandable to a newcomer, because the first thing 
 a newcomer will do is set up a single node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10104) update jackson

2014-01-07 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13864117#comment-13864117
 ] 

Akira AJISAKA commented on HADOOP-10104:


The test failure looks unrelated to the patch.

 update jackson
 --

 Key: HADOOP-10104
 URL: https://issues.apache.org/jira/browse/HADOOP-10104
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Affects Versions: 2.2.0
Reporter: Steve Loughran
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10104.2.patch, HADOOP-10104.patch


 Jackson is now at 1.9.13, 
 [apparently|http://mvnrepository.com/artifact/org.codehaus.jackson/jackson-core-asl],
  while Hadoop 2.2 is at 1.8.8.
 Jackson isn't used that much in the code, so the risk from an update *should* be 
 low.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10210) JavaDoc Fixes

2014-01-07 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10210?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13865093#comment-13865093
 ] 

Akira AJISAKA commented on HADOOP-10210:


Thanks for taking this issue, [~airbots].
Two comments:

1. Avoid lines longer than 80 characters.
{code}
+ *   (e.g. {@link #setNumReduceTasks(int)}), some parameters interact subtly 
with
{code}

2. Fix Version is set after the patch is committed.

I'll renew the patch.

 JavaDoc Fixes
 -

 Key: HADOOP-10210
 URL: https://issues.apache.org/jira/browse/HADOOP-10210
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Ben Robie
Assignee: Chen He
Priority: Trivial
 Fix For: 2.2.0

 Attachments: hadoop-10210.patch

   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 https://hadoop.apache.org/docs/r1.2.1/api/org/apache/hadoop/mapred/InputFormat.html
 Instead of "record boundaries are to respected"
 Should be "record boundaries are to be respected"
 https://hadoop.apache.org/docs/r1.2.1/api/org/apache/hadoop/mapred/JobConf.html
 Instead of "some parameters interact subtly rest of the framework"
 Should be "some parameters interact subtly with the rest of the framework"



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10212) Incorrect compile command in Native Library document

2014-01-07 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10212:
--

 Summary: Incorrect compile command in Native Library document
 Key: HADOOP-10212
 URL: https://issues.apache.org/jira/browse/HADOOP-10212
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA


The following old command still exists in the Native Library document.
{code}
   $ ant -Dcompile.native=true target
{code}
Maven is now used instead of Ant.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10212) Incorrect compile command in Native Library document

2014-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10212:
---

Attachment: HADOOP-10212.patch

Attaching a patch.

 Incorrect compile command in Native Library document
 

 Key: HADOOP-10212
 URL: https://issues.apache.org/jira/browse/HADOOP-10212
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10212.patch


 The following old command still exists in the Native Library document.
 {code}
$ ant -Dcompile.native=true target
 {code}
 Maven is now used instead of Ant.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10212) Incorrect compile command in Native Library document

2014-01-08 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10212:
---

Assignee: Akira AJISAKA
  Status: Patch Available  (was: Open)

 Incorrect compile command in Native Library document
 

 Key: HADOOP-10212
 URL: https://issues.apache.org/jira/browse/HADOOP-10212
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10212.patch


 The following old command still exists in the Native Library document.
 {code}
$ ant -Dcompile.native=true target
 {code}
 Maven is now used instead of Ant.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10212) Incorrect compile command in Native Library document

2014-01-08 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13866299#comment-13866299
 ] 

Akira AJISAKA commented on HADOOP-10212:


The patch only fixes the documentation, so the findbugs warnings are not 
related to it.

 Incorrect compile command in Native Library document
 

 Key: HADOOP-10212
 URL: https://issues.apache.org/jira/browse/HADOOP-10212
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10212.patch


 The following old command still exists in the Native Library document.
 {code}
$ ant -Dcompile.native=true target
 {code}
 Maven is now used instead of Ant.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10139) Single Cluster Setup document is unfriendly

2014-01-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13866504#comment-13866504
 ] 

Akira AJISAKA commented on HADOOP-10139:


The findbugs warnings are not related to the patch.

 Single Cluster Setup document is unfriendly
 ---

 Key: HADOOP-10139
 URL: https://issues.apache.org/jira/browse/HADOOP-10139
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-10139.patch


 The document should be understandable to a newcomer, because the first thing 
 a newcomer will do is set up a single node.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10231) Add some components in Native Libraries document

2014-01-13 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10231:
--

 Summary: Add some components in Native Libraries document
 Key: HADOOP-10231
 URL: https://issues.apache.org/jira/browse/HADOOP-10231
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Priority: Minor


The only components documented in the Native Libraries document are zlib and gzip.
The native libraries now include some other components, such as other compression 
formats (lz4, snappy), libhdfs, and the fuse module. These components should be 
documented.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10231) Add some components in Native Libraries document

2014-01-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10231:
---

Labels: newbie  (was: )

 Add some components in Native Libraries document
 

 Key: HADOOP-10231
 URL: https://issues.apache.org/jira/browse/HADOOP-10231
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie

 The only components documented in the Native Libraries document are zlib and gzip.
 The native libraries now include some other components, such as other compression 
 formats (lz4, snappy), libhdfs, and the fuse module. These components should be 
 documented.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HADOOP-8642) io.native.lib.available only controls zlib

2014-01-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-8642:
-

Assignee: Akira AJISAKA

 io.native.lib.available only controls zlib
 --

 Key: HADOOP-8642
 URL: https://issues.apache.org/jira/browse/HADOOP-8642
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Akira AJISAKA

 Per core-default.xml, {{io.native.lib.available}} indicates "Should native 
 hadoop libraries, if present, be used"; however, it looks like it only affects 
 zlib. Since we always load the native library, this means we may use native 
 libraries even if io.native.lib.available is set to false.
 Let's make the flag work as advertised: rather than always loading the 
 native hadoop library, we only attempt to load the library (and report that 
 native is available) if this flag is set. Since io.native.lib.available 
 defaults to true, the default behavior should remain unchanged (except that 
 now we won't actually try to load the library if this flag is disabled).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8642) io.native.lib.available only controls zlib

2014-01-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8642:
--

Attachment: HADOOP-8642.patch

Attaching a patch.
Currently, NativeCodeLoader always tries to load the native library in its 
static initializer. With the patch, the configuration is checked there first: if 
io.native.lib.available is false, NativeCodeLoader never tries to load the 
native library.
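
A minimal sketch of that idea (illustrative only, not the attached patch; the 
class name and the fallback behavior shown here are my assumptions):
{code}
import org.apache.hadoop.conf.Configuration;

public final class GuardedNativeLoader {
  private static boolean nativeCodeLoaded = false;

  static {
    Configuration conf = new Configuration();
    // Only attempt the load when io.native.lib.available is true (the default).
    if (conf.getBoolean("io.native.lib.available", true)) {
      try {
        System.loadLibrary("hadoop");   // libhadoop.so / hadoop.dll
        nativeCodeLoaded = true;
      } catch (UnsatisfiedLinkError e) {
        // Fall back to the built-in pure-Java implementations.
      }
    }
  }

  public static boolean isNativeCodeLoaded() {
    return nativeCodeLoaded;
  }
}
{code}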

 io.native.lib.available only controls zlib
 --

 Key: HADOOP-8642
 URL: https://issues.apache.org/jira/browse/HADOOP-8642
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Akira AJISAKA
 Attachments: HADOOP-8642.patch


 Per core-default.xml, {{io.native.lib.available}} indicates "Should native 
 hadoop libraries, if present, be used"; however, it looks like it only affects 
 zlib. Since we always load the native library, this means we may use native 
 libraries even if io.native.lib.available is set to false.
 Let's make the flag work as advertised: rather than always loading the 
 native hadoop library, we only attempt to load the library (and report that 
 native is available) if this flag is set. Since io.native.lib.available 
 defaults to true, the default behavior should remain unchanged (except that 
 now we won't actually try to load the library if this flag is disabled).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8642) io.native.lib.available only controls zlib

2014-01-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8642:
--

Status: Patch Available  (was: Open)

 io.native.lib.available only controls zlib
 --

 Key: HADOOP-8642
 URL: https://issues.apache.org/jira/browse/HADOOP-8642
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Akira AJISAKA
 Attachments: HADOOP-8642.patch


 Per core-default.xml, {{io.native.lib.available}} indicates "Should native 
 hadoop libraries, if present, be used"; however, it looks like it only affects 
 zlib. Since we always load the native library, this means we may use native 
 libraries even if io.native.lib.available is set to false.
 Let's make the flag work as advertised: rather than always loading the 
 native hadoop library, we only attempt to load the library (and report that 
 native is available) if this flag is set. Since io.native.lib.available 
 defaults to true, the default behavior should remain unchanged (except that 
 now we won't actually try to load the library if this flag is disabled).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-8642) io.native.lib.available only controls zlib

2014-01-14 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-8642:
--

Attachment: HADOOP-8642.2.patch

Updated the patch so that the previously failing tests pass.

 io.native.lib.available only controls zlib
 --

 Key: HADOOP-8642
 URL: https://issues.apache.org/jira/browse/HADOOP-8642
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Akira AJISAKA
 Attachments: HADOOP-8642.2.patch, HADOOP-8642.patch


 Per core-default.xml, {{io.native.lib.available}} indicates "Should native 
 hadoop libraries, if present, be used"; however, it looks like it only affects 
 zlib. Since we always load the native library, this means we may use native 
 libraries even if io.native.lib.available is set to false.
 Let's make the flag work as advertised: rather than always loading the 
 native hadoop library, we only attempt to load the library (and report that 
 native is available) if this flag is set. Since io.native.lib.available 
 defaults to true, the default behavior should remain unchanged (except that 
 now we won't actually try to load the library if this flag is disabled).



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-9648) Fix build native library on mac osx

2014-01-14 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13870704#comment-13870704
 ] 

Akira AJISAKA commented on HADOOP-9648:
---

LGTM. I could compile the native library with the v2 patch on my MacBook (OS X 
10.9).

 Fix build native library on mac osx
 ---

 Key: HADOOP-9648
 URL: https://issues.apache.org/jira/browse/HADOOP-9648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.4, 1.2.0, 1.1.2, 2.0.5-alpha
Reporter: Kirill A. Korinskiy
Assignee: Binglin Chang
 Attachments: HADOOP-9648-native-osx.1.0.4.patch, 
 HADOOP-9648-native-osx.1.1.2.patch, HADOOP-9648-native-osx.1.2.0.patch, 
 HADOOP-9648-native-osx.2.0.5-alpha-rc1.patch, HADOOP-9648.v2.patch


 Some patches to fix building the Hadoop native library on OS X 10.7/10.8.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-10236) Fix typo in o.a.h.ipc.Client#checkResponse

2014-01-14 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10236:
--

 Summary: Fix typo in o.a.h.ipc.Client#checkResponse
 Key: HADOOP-10236
 URL: https://issues.apache.org/jira/browse/HADOOP-10236
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Priority: Trivial


There's a typo in o.a.h.ipc.Client.java. 
{code}
  throw new IOException("Client IDs not matched: local ID="
      + StringUtils.byteToHexString(clientId) + ", ID in reponse="
      + StringUtils.byteToHexString(header.getClientId().toByteArray()));
{code}
It should be fixed as follows:
{code}
  throw new IOException("Client IDs not matched: local ID="
      + StringUtils.byteToHexString(clientId) + ", ID in response="
      + StringUtils.byteToHexString(header.getClientId().toByteArray()));
{code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10236) Fix typo in o.a.h.ipc.Client#checkResponse

2014-01-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10236:
---

Assignee: Akira AJISAKA
  Status: Patch Available  (was: Open)

 Fix typo in o.a.h.ipc.Client#checkResponse
 --

 Key: HADOOP-10236
 URL: https://issues.apache.org/jira/browse/HADOOP-10236
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10236.patch


 There's a typo in o.a.h.ipc.Client.java. 
 {code}
   throw new IOException("Client IDs not matched: local ID="
       + StringUtils.byteToHexString(clientId) + ", ID in reponse="
       + StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}
 It should be fixed as follows:
 {code}
   throw new IOException("Client IDs not matched: local ID="
       + StringUtils.byteToHexString(clientId) + ", ID in response="
       + StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10236) Fix typo in o.a.h.ipc.Client#checkResponse

2014-01-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10236:
---

Attachment: HADOOP-10236.patch

Attaching a patch.

 Fix typo in o.a.h.ipc.Client#checkResponse
 --

 Key: HADOOP-10236
 URL: https://issues.apache.org/jira/browse/HADOOP-10236
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10236.patch


 There's a typo in o.a.h.ipc.Client.java. 
 {code}
   throw new IOException("Client IDs not matched: local ID="
       + StringUtils.byteToHexString(clientId) + ", ID in reponse="
       + StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}
 It should be fixed as follows:
 {code}
   throw new IOException("Client IDs not matched: local ID="
       + StringUtils.byteToHexString(clientId) + ", ID in response="
       + StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10236) Fix typo in o.a.h.ipc.Client#checkResponse

2014-01-15 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13871907#comment-13871907
 ] 

Akira AJISAKA commented on HADOOP-10236:


The patch only fixes a typo, so no new tests are needed.
In addition, the failed test looks unrelated to the patch.

 Fix typo in o.a.h.ipc.Client#checkResponse
 --

 Key: HADOOP-10236
 URL: https://issues.apache.org/jira/browse/HADOOP-10236
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Attachments: HADOOP-10236.patch


 There's a typo in o.a.h.ipc.Client.java. 
 {code}
   throw new IOException("Client IDs not matched: local ID="
       + StringUtils.byteToHexString(clientId) + ", ID in reponse="
       + StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}
 It should be fixed as follows:
 {code}
   throw new IOException("Client IDs not matched: local ID="
       + StringUtils.byteToHexString(clientId) + ", ID in response="
       + StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Assigned] (HADOOP-10105) remove httpclient dependency

2014-01-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-10105:
--

Assignee: Akira AJISAKA

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor

 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Work started] (HADOOP-10105) remove httpclient dependency

2014-01-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-10105 started by Akira AJISAKA.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor

 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10105) remove httpclient dependency

2014-01-15 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10105:
---

Attachment: HADOOP-10105.part.patch

Attaching a patch to remove the httpclient dependency from 
WebAppProxyServlet.java. I'll try to remove the dependency from the other (more 
than 10) classes as well.
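
One possible direction, sketched with plain JDK classes (illustrative only, not 
necessarily what the attached patch does; the helper class below is 
hypothetical):
{code}
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// A simple GET request without commons-httpclient.
public final class SimpleHttpGet {
  public static InputStream open(String uri) throws IOException {
    HttpURLConnection conn = (HttpURLConnection) new URL(uri).openConnection();
    conn.setRequestMethod("GET");
    conn.setConnectTimeout(60000);  // avoid hanging forever on a dead target
    conn.setReadTimeout(60000);
    return conn.getInputStream();   // the caller is responsible for closing it
  }
}
{code}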

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.part.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10236) Fix typo in o.a.h.ipc.Client#checkResponse

2014-01-15 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13872908#comment-13872908
 ] 

Akira AJISAKA commented on HADOOP-10236:


Thank you for committing, [~sureshms]!

 Fix typo in o.a.h.ipc.Client#checkResponse
 --

 Key: HADOOP-10236
 URL: https://issues.apache.org/jira/browse/HADOOP-10236
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Trivial
  Labels: newbie
 Fix For: 2.4.0

 Attachments: HADOOP-10236.patch


 There's a typo in o.a.h.ipc.Client.java. 
 {code}
   throw new IOException("Client IDs not matched: local ID="
       + StringUtils.byteToHexString(clientId) + ", ID in reponse="
       + StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}
 It should be fixed as follows:
 {code}
   throw new IOException("Client IDs not matched: local ID="
       + StringUtils.byteToHexString(clientId) + ", ID in response="
       + StringUtils.byteToHexString(header.getClientId().toByteArray()));
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10105) remove httpclient dependency

2014-01-16 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10105:
---

Attachment: HADOOP-10105.part2.patch

Attaching a patch to remove the httpclient dependency everywhere except the 
hadoop-openstack project.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.part.patch, HADOOP-10105.part2.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10105) remove httpclient dependency

2014-01-16 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10105:
---

Attachment: HADOOP-10105.patch

Attaching a patch.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.part.patch, HADOOP-10105.part2.patch, 
 HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10105) remove httpclient dependency

2014-01-16 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10105:
---

Status: Patch Available  (was: In Progress)

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.part.patch, HADOOP-10105.part2.patch, 
 HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10105) remove httpclient dependency

2014-01-17 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10105:
---

Attachment: HADOOP-10105.2.patch

Attaching a patch to fix the FindBugs warnings and to make TestJobEndNotifier pass.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10231) Add some components in Native Libraries document

2014-01-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10231:
---

Assignee: Akira AJISAKA
Target Version/s: 2.4.0
  Status: Patch Available  (was: Open)

 Add some components in Native Libraries document
 

 Key: HADOOP-10231
 URL: https://issues.apache.org/jira/browse/HADOOP-10231
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10231.patch


 The only components documented in the Native Libraries document are zlib and gzip.
 The native libraries now include some other components, such as other compression 
 formats (lz4, snappy), libhdfs, and the fuse module. These components should be 
 documented.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10231) Add some components in Native Libraries document

2014-01-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10231:
---

Attachment: HADOOP-10231.patch

Attaching a patch to document some components of libhadoop.so and to add a link 
to libhdfs.so.

 Add some components in Native Libraries document
 

 Key: HADOOP-10231
 URL: https://issues.apache.org/jira/browse/HADOOP-10231
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Affects Versions: 2.2.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10231.patch


 The only components documented in the Native Libraries document are zlib and gzip.
 The native libraries now include some other components, such as other compression 
 formats (lz4, snappy), libhdfs, and the fuse module. These components should be 
 documented.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Commented] (HADOOP-10105) remove httpclient dependency

2014-01-21 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13877336#comment-13877336
 ] 

Akira AJISAKA commented on HADOOP-10105:


I need to run the unit tests of the hadoop-openstack project locally because 
they are skipped in Jenkins. I'll try that.

 remove httpclient dependency
 

 Key: HADOOP-10105
 URL: https://issues.apache.org/jira/browse/HADOOP-10105
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Colin Patrick McCabe
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-10105.2.patch, HADOOP-10105.part.patch, 
 HADOOP-10105.part2.patch, HADOOP-10105.patch


 httpclient is now end-of-life and is no longer being developed.  Now that we 
 have a dependency on {{httpcore}}, we should phase out our use of the old 
 discontinued {{httpclient}} library in Hadoop.  This will allow us to reduce 
 {{CLASSPATH}} bloat and get updated code.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10248) Property name should be included in the exception where property value is null

2014-01-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10248:
---

Attachment: HADOOP-10248.patch

Attaching a patch.

 Property name should be included in the exception where property value is null
 --

 Key: HADOOP-10248
 URL: https://issues.apache.org/jira/browse/HADOOP-10248
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Ted Yu
 Attachments: HADOOP-10248.patch


 I saw the following when trying to determine the cause of a startup failure:
 {code}
 2014-01-21 06:07:17,871 FATAL 
 [master:h2-centos6-uns-1390276854-hbase-10:6] master.HMaster: Unhandled 
 exception. Starting shutdown.
 java.lang.IllegalArgumentException: Property value must not be null
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:958)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:940)
 at org.apache.hadoop.http.HttpServer.initializeWebServer(HttpServer.java:510)
 at org.apache.hadoop.http.HttpServer.init(HttpServer.java:470)
 at org.apache.hadoop.http.HttpServer.init(HttpServer.java:458)
 at org.apache.hadoop.http.HttpServer.init(HttpServer.java:412)
 at org.apache.hadoop.hbase.util.InfoServer.init(InfoServer.java:59)
 {code}
 Property name should be included in the following exception:
 {code}
  Preconditions.checkArgument(
      value != null,
      "Property value must not be null");
 {code}
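
 For illustration, the property name could be passed through Guava's message 
 template so the exception identifies the offending property (a sketch only; the 
 attached patch may word it differently, and {{name}} here stands for the 
 property key being set):
 {code}
 import com.google.common.base.Preconditions;

 // checkArgument(boolean, String, Object...) substitutes %s with the extra
 // arguments, so the failing property key appears in the exception message.
 Preconditions.checkArgument(value != null,
     "The value of property %s must not be null", name);
 {code}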



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Updated] (HADOOP-10248) Property name should be included in the exception where property value is null

2014-01-21 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10248:
---

 Assignee: Akira AJISAKA
   Labels: newbie  (was: )
 Target Version/s: 2.4.0
Affects Version/s: 2.2.0
   Status: Patch Available  (was: Open)

 Property name should be included in the exception where property value is null
 --

 Key: HADOOP-10248
 URL: https://issues.apache.org/jira/browse/HADOOP-10248
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.2.0
Reporter: Ted Yu
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10248.patch


 I saw the following when trying to determine the cause of a startup failure:
 {code}
 2014-01-21 06:07:17,871 FATAL 
 [master:h2-centos6-uns-1390276854-hbase-10:6] master.HMaster: Unhandled 
 exception. Starting shutdown.
 java.lang.IllegalArgumentException: Property value must not be null
 at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:958)
 at org.apache.hadoop.conf.Configuration.set(Configuration.java:940)
 at org.apache.hadoop.http.HttpServer.initializeWebServer(HttpServer.java:510)
 at org.apache.hadoop.http.HttpServer.init(HttpServer.java:470)
 at org.apache.hadoop.http.HttpServer.init(HttpServer.java:458)
 at org.apache.hadoop.http.HttpServer.init(HttpServer.java:412)
 at org.apache.hadoop.hbase.util.InfoServer.init(InfoServer.java:59)
 {code}
 Property name should be included in the following exception:
 {code}
  Preconditions.checkArgument(
      value != null,
      "Property value must not be null");
 {code}



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


[jira] [Created] (HADOOP-11318) Update the document for hadoop fs -stat

2014-11-19 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-11318:
--

 Summary: Update the document for hadoop fs -stat
 Key: HADOOP-11318
 URL: https://issues.apache.org/jira/browse/HADOOP-11318
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.5.1
Reporter: Akira AJISAKA


In FileSystemShell.apt.vm, 
{code}
stat

   Usage: hdfs dfs -stat URI [URI ...]

   Returns the stat information on the path.
{code}
Now {{-stat}} accepts the format specifiers below:
 *   %b: Size of file in blocks
 *   %g: Group name of owner
 *   %n: Filename
 *   %o: Block size
 *   %r: replication
 *   %u: User name of owner
 *   %y: UTC date as "yyyy-MM-dd HH:mm:ss"
 *   %Y: Milliseconds since January 1, 1970 UTC

They should be documented.
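
For reference, roughly the same information is available through the Java API; 
this sketch (not part of any patch) maps a few of the specifiers above to 
FileStatus getters:
{code}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StatFormatExample {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    FileStatus st = fs.getFileStatus(new Path(args[0]));
    SimpleDateFormat utc = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    utc.setTimeZone(TimeZone.getTimeZone("UTC"));
    System.out.println(st.getPath().getName()                   // %n
        + " " + st.getGroup()                                   // %g
        + " " + st.getBlockSize()                               // %o
        + " " + st.getReplication()                             // %r
        + " " + st.getOwner()                                   // %u
        + " " + utc.format(new Date(st.getModificationTime()))  // %y
        + " " + st.getModificationTime());                      // %Y
  }
}
{code}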



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11352) test-patch.sh actually runs no contrib tests

2014-12-03 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-11352:
--

 Summary: test-patch.sh actually runs no contrib tests
 Key: HADOOP-11352
 URL: https://issues.apache.org/jira/browse/HADOOP-11352
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira AJISAKA


Jenkins test-patch.sh always comments as "{color:green}+1 contrib tests{color}. 
The patch passed contrib unit tests."; however, the script runs no contrib 
tests.
This issue was found when fixing HDFS-7448.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11352) test-patch.sh doesn't run hadoop-hdfs/src/contrib/bkjournal

2014-12-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14235168#comment-14235168
 ] 

Akira AJISAKA commented on HADOOP-11352:


bq. The only contrib source is under hadoop-hdfs/src/contrib/bkjournal
Oh, I thought there were some other contrib sources. Thanks for the 
information.

bq. the question becomes why didn't those tests run/get picked up.
HDFS-7097 did not change bkjournal at all, so the tests were not picked up. 
HDFS-7448 changed bkjournal test code, so the tests were picked up.

I suggest removing the contrib-test logic from test-patch.sh, because it is 
actually a no-op and the JIRA comment "+1 contrib tests" is confusing.

 test-patch.sh doesn't run hadoop-hdfs/src/contrib/bkjournal
 ---

 Key: HADOOP-11352
 URL: https://issues.apache.org/jira/browse/HADOOP-11352
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira AJISAKA

 Jenkins test-patch.sh always comments as "{color:green}+1 contrib 
 tests{color}. The patch passed contrib unit tests."; however, the script 
 runs no contrib tests.
 This issue was found when fixing HDFS-7448.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11352) test-patch.sh doesn't run hadoop-hdfs/src/contrib/bkjournal

2014-12-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11352:
---
Attachment: HADOOP-11352.patch

Attaching a patch to remove {{runContribTests}} and 
{{checkInjectSystemFaults}}. The latter is also a no-op (it always returns 0).

 test-patch.sh doesn't run hadoop-hdfs/src/contrib/bkjournal
 ---

 Key: HADOOP-11352
 URL: https://issues.apache.org/jira/browse/HADOOP-11352
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira AJISAKA
 Attachments: HADOOP-11352.patch


 Jenkins test-patch.sh always comments as "{color:green}+1 contrib 
 tests{color}. The patch passed contrib unit tests."; however, the script 
 runs no contrib tests.
 This issue was found when fixing HDFS-7448.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11352) Clean up test-patch.sh to disable +1 contrib tests

2014-12-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11352:
---
Issue Type: Improvement  (was: Bug)
   Summary: Clean up test-patch.sh to disable +1 contrib tests  (was: 
test-patch.sh doesn't run hadoop-hdfs/src/contrib/bkjournal)

 Clean up test-patch.sh to disable +1 contrib tests
 

 Key: HADOOP-11352
 URL: https://issues.apache.org/jira/browse/HADOOP-11352
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira AJISAKA
 Attachments: HADOOP-11352.patch


 Jenkins test-patch.sh always comments as "{color:green}+1 contrib 
 tests{color}. The patch passed contrib unit tests."; however, the script 
 runs no contrib tests.
 This issue was found when fixing HDFS-7448.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11352) Clean up test-patch.sh to disable +1 contrib tests

2014-12-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11352:
---
Assignee: Akira AJISAKA
Target Version/s: 2.7.0
  Status: Patch Available  (was: Open)

 Clean up test-patch.sh to disable +1 contrib tests
 

 Key: HADOOP-11352
 URL: https://issues.apache.org/jira/browse/HADOOP-11352
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-11352.patch


 Jenkins test-patch.sh always comments as "{color:green}+1 contrib 
 tests{color}. The patch passed contrib unit tests."; however, the script 
 runs no contrib tests.
 This issue was found when fixing HDFS-7448.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11352) Clean up test-patch.sh to disable +1 contrib tests

2014-12-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239346#comment-14239346
 ] 

Akira AJISAKA commented on HADOOP-11352:


Thanks Steve for the review and commit!

 Clean up test-patch.sh to disable +1 contrib tests
 

 Key: HADOOP-11352
 URL: https://issues.apache.org/jira/browse/HADOOP-11352
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Fix For: 3.0.0

 Attachments: HADOOP-11352.patch


 Jenkins test-patch.sh always comments as "{color:green}+1 contrib 
 tests{color}. The patch passed contrib unit tests."; however, the script 
 runs no contrib tests.
 This issue was found when fixing HDFS-7448.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-12-09 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10480:
---
Assignee: Haohui Mai  (was: Akira AJISAKA)

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Haohui Mai
  Labels: newbie
 Attachments: HADOOP-10480.000.patch, HADOOP-10480.001.patch, 
 HADOOP-10480.2.patch, HADOOP-10480.patch


 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
 FSImageUtil(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): new String(byte[]) 
 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
 FSNamesystem.java:[lines 301-7701]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
  new java.io.PrintWriter(OutputStream, boolean) 
 [org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
 [INFO] Redundant nullcheck of fos, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
  HdfsFileStatus, LocatedBlocks) 
 [org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
 NamenodeFsck.java:[lines 94-710]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(File) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(OutputStream) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 {noformat}
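
 Many of these are the "reliance on default encoding" pattern; the usual remedy 
 is to pass an explicit charset instead of relying on the platform default. An 
 illustrative sketch (not the attached patch):
 {code}
 import java.io.BufferedWriter;
 import java.io.File;
 import java.io.FileOutputStream;
 import java.io.IOException;
 import java.io.OutputStreamWriter;
 import java.io.Writer;
 import java.nio.charset.StandardCharsets;

 public class ExplicitCharsetWriter {
   // new FileWriter(file) uses the platform default encoding, which findbugs
   // flags; wrapping a FileOutputStream fixes the encoding to UTF-8.
   static Writer openUtf8Writer(File file) throws IOException {
     return new BufferedWriter(new OutputStreamWriter(
         new FileOutputStream(file), StandardCharsets.UTF_8));
   }
 }
 {code}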



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10480) Fix new findbugs warnings in hadoop-hdfs

2014-12-09 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14240645#comment-14240645
 ] 

Akira AJISAKA commented on HADOOP-10480:


Thanks [~wheat9] for rebasing. I don't mind if you take it over.

 Fix new findbugs warnings in hadoop-hdfs
 

 Key: HADOOP-10480
 URL: https://issues.apache.org/jira/browse/HADOOP-10480
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-10480.000.patch, HADOOP-10480.001.patch, 
 HADOOP-10480.2.patch, HADOOP-10480.patch


 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-hdfs ---
 [INFO] BugInstance size is 14
 [INFO] Error size is 0
 [INFO] Total bugs: 14
 [INFO] Redundant nullcheck of curPeer, which is known to be non-null in 
 org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp() 
 [org.apache.hadoop.hdfs.BlockReaderFactory] At 
 BlockReaderFactory.java:[lines 68-808]
 [INFO] Increment of volatile field 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.restartingNodeIndex in 
 org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery()
  [org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer] At 
 DFSOutputStream.java:[lines 308-1492]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(DataOutputStream,
  DataInputStream, DataOutputStream, String, DataTransferThrottler, 
 DatanodeInfo[]): new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.BlockReceiver] At 
 BlockReceiver.java:[lines 66-905]
 [INFO] b must be nonnull but is marked as nullable 
 [org.apache.hadoop.hdfs.server.datanode.DatanodeJspHelper$2] At 
 DatanodeJspHelper.java:[lines 546-549]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addToReplicasMap(ReplicaMap,
  File, boolean): new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.loadDfsUsed():
  new java.util.Scanner(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.saveDfsUsed():
  new java.io.FileWriter(File) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice] At 
 BlockPoolSlice.java:[lines 58-427]
 [INFO] Redundant nullcheck of f, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.invalidate(String,
  Block[]) 
 [org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl] At 
 FsDatasetImpl.java:[lines 60-1910]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSImageUtil.static initializer for 
 FSImageUtil(): String.getBytes() 
 [org.apache.hadoop.hdfs.server.namenode.FSImageUtil] At 
 FSImageUtil.java:[lines 34-89]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getListingInt(String, 
 byte[], boolean): new String(byte[]) 
 [org.apache.hadoop.hdfs.server.namenode.FSNamesystem] At 
 FSNamesystem.java:[lines 301-7701]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.server.namenode.INode.dumpTreeRecursively(PrintStream):
  new java.io.PrintWriter(OutputStream, boolean) 
 [org.apache.hadoop.hdfs.server.namenode.INode] At INode.java:[lines 51-744]
 [INFO] Redundant nullcheck of fos, which is known to be non-null in 
 org.apache.hadoop.hdfs.server.namenode.NamenodeFsck.copyBlocksToLostFound(String,
  HdfsFileStatus, LocatedBlocks) 
 [org.apache.hadoop.hdfs.server.namenode.NamenodeFsck] At 
 NamenodeFsck.java:[lines 94-710]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(File) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB.run(String[]):
  new java.io.PrintWriter(OutputStream) 
 [org.apache.hadoop.hdfs.tools.offlineImageViewer.OfflineImageViewerPB] At 
 OfflineImageViewerPB.java:[lines 45-181]
 {noformat}
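
All of the "reliance on default encoding" entries above share one root cause: constructors such as PrintWriter(File), FileWriter(File), Scanner(File), and String.getBytes() silently use the platform default charset. A minimal sketch of the usual fix, passing the charset explicitly (class and method names below are illustrative, not the actual HDFS code):

{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class ExplicitCharsetSketch {
  // Replaces new PrintWriter(file) / new FileWriter(file): wrap an
  // OutputStreamWriter so the charset is named explicitly.
  static PrintWriter openWriter(File file) throws IOException {
    return new PrintWriter(new OutputStreamWriter(
        new FileOutputStream(file), StandardCharsets.UTF_8));
  }

  // Replaces new Scanner(file): name the charset explicitly.
  static Scanner openScanner(File file) throws IOException {
    return new Scanner(file, "UTF-8");
  }

  // Replaces String.getBytes(): name the charset explicitly.
  static byte[] toBytes(String s) {
    return s.getBytes(StandardCharsets.UTF_8);
  }
}
{code}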



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11125) TestOsSecureRandom sometimes fails in trunk

2014-12-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14251404#comment-14251404
 ] 

Akira AJISAKA commented on HADOOP-11125:


+1. The test failure is unrelated to the patch.

 TestOsSecureRandom sometimes fails in trunk
 ---

 Key: HADOOP-11125
 URL: https://issues.apache.org/jira/browse/HADOOP-11125
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-11125-1.patch


 From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1897/console :
 {code}
 Running org.apache.hadoop.crypto.random.TestOsSecureRandom
 Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 120.516 sec 
  FAILURE! - in org.apache.hadoop.crypto.random.TestOsSecureRandom
 testOsSecureRandomSetConf(org.apache.hadoop.crypto.random.TestOsSecureRandom) 
  Time elapsed: 120.013 sec   ERROR!
 java.lang.Exception: test timed out after 120000 milliseconds
   at java.io.FileInputStream.readBytes(Native Method)
   at java.io.FileInputStream.read(FileInputStream.java:220)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
   at java.io.InputStreamReader.read(InputStreamReader.java:167)
   at java.io.BufferedReader.fill(BufferedReader.java:136)
   at java.io.BufferedReader.read1(BufferedReader.java:187)
   at java.io.BufferedReader.read(BufferedReader.java:261)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715)
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:524)
   at org.apache.hadoop.util.Shell.run(Shell.java:455)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
   at 
 org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11362) Test org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf timing out

2014-12-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14251408#comment-14251408
 ] 

Akira AJISAKA commented on HADOOP-11362:


[~ste...@apache.org], thanks for the report and the patch. Looks like this 
issue duplicates HADOOP-11125. May I close this?

 Test 
 org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf 
 timing out
 

 Key: HADOOP-11362
 URL: https://issues.apache.org/jira/browse/HADOOP-11362
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: ASF Jenkins, Java 7 & 8
Reporter: Steve Loughran
 Attachments: 
 0001-HADOOP-11362-Test-org.apache.hadoop.crypto.random.Te.patch


 The test 
 {{org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf}}
  is timing out on jenkins + Java 8.
 This is probably the exec() operation. It may be transient, it may be a java 
 8 + shell problem. 
 do we actually need this test in its present form? If a test for file handle 
 leakage is really needed, attempting to create 64K instances of the OSRandom 
 object should do it without having to resort to some printing and manual 
 debugging of logs.
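
For reference, a rough sketch of the kind of leak check described above, assuming OsSecureRandom is Configurable and Closeable as in hadoop-common (the constructor and method names are recalled from the test code and may differ slightly):

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.random.OsSecureRandom;

public class OsSecureRandomLeakSketch {
  // If each instance leaked its /dev/urandom file descriptor, creating
  // ~64K instances would exhaust a typical ulimit and fail fast, with no
  // log printing or manual debugging needed.
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    byte[] buf = new byte[16];
    for (int i = 0; i < 64 * 1024; i++) {
      OsSecureRandom random = new OsSecureRandom();
      random.setConf(conf);   // assumed Configurable
      random.nextBytes(buf);  // opens the underlying random-device stream
      random.close();         // assumed Closeable; releases the descriptor
    }
  }
}
{code}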



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11125) TestOsSecureRandom sometimes fails in trunk

2014-12-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11125:
---
Target Version/s: 2.7.0  (was: 2.6.0)

 TestOsSecureRandom sometimes fails in trunk
 ---

 Key: HADOOP-11125
 URL: https://issues.apache.org/jira/browse/HADOOP-11125
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ted Yu
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-11125-1.patch


 From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1897/console :
 {code}
 Running org.apache.hadoop.crypto.random.TestOsSecureRandom
 Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 120.516 sec 
  FAILURE! - in org.apache.hadoop.crypto.random.TestOsSecureRandom
 testOsSecureRandomSetConf(org.apache.hadoop.crypto.random.TestOsSecureRandom) 
  Time elapsed: 120.013 sec   ERROR!
 java.lang.Exception: test timed out after 120000 milliseconds
   at java.io.FileInputStream.readBytes(Native Method)
   at java.io.FileInputStream.read(FileInputStream.java:220)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
   at java.io.InputStreamReader.read(InputStreamReader.java:167)
   at java.io.BufferedReader.fill(BufferedReader.java:136)
   at java.io.BufferedReader.read1(BufferedReader.java:187)
   at java.io.BufferedReader.read(BufferedReader.java:261)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715)
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:524)
   at org.apache.hadoop.util.Shell.run(Shell.java:455)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
   at 
 org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11424) Fix failure for TestOsSecureRandom

2014-12-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14251409#comment-14251409
 ] 

Akira AJISAKA commented on HADOOP-11424:


[~hitliuyi], thanks for the report and the patch. Looks like this issue 
duplicates HADOOP-11125. May I close this?

 Fix failure for TestOsSecureRandom
 --

 Key: HADOOP-11424
 URL: https://issues.apache.org/jira/browse/HADOOP-11424
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Reporter: Yi Liu
Assignee: Yi Liu
 Attachments: HADOOP-11424.001.patch


 Recently I often see failures of {{testOsSecureRandomSetConf}} in 
 TestOsSecureRandom.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5298//testReport/org.apache.hadoop.crypto.random/TestOsSecureRandom/testOsSecureRandomSetConf/
 {code}
 java.lang.Exception: test timed out after 120000 milliseconds
   at java.io.FileInputStream.readBytes(Native Method)
   at java.io.FileInputStream.read(FileInputStream.java:272)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:273)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
   at java.io.InputStreamReader.read(InputStreamReader.java:184)
   at java.io.BufferedReader.fill(BufferedReader.java:154)
   at java.io.BufferedReader.read1(BufferedReader.java:205)
   at java.io.BufferedReader.read(BufferedReader.java:279)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:735)
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:531)
   at org.apache.hadoop.util.Shell.run(Shell.java:456)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
   at 
 org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11125) TestOsSecureRandom sometimes fails in trunk

2014-12-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11125:
---
Priority: Major  (was: Minor)

 TestOsSecureRandom sometimes fails in trunk
 ---

 Key: HADOOP-11125
 URL: https://issues.apache.org/jira/browse/HADOOP-11125
 Project: Hadoop Common
  Issue Type: Test
Reporter: Ted Yu
  Labels: newbie
 Attachments: HADOOP-11125-1.patch


 From https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1897/console :
 {code}
 Running org.apache.hadoop.crypto.random.TestOsSecureRandom
 Tests run: 7, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 120.516 sec 
  FAILURE! - in org.apache.hadoop.crypto.random.TestOsSecureRandom
 testOsSecureRandomSetConf(org.apache.hadoop.crypto.random.TestOsSecureRandom) 
  Time elapsed: 120.013 sec   ERROR!
 java.lang.Exception: test timed out after 120000 milliseconds
   at java.io.FileInputStream.readBytes(Native Method)
   at java.io.FileInputStream.read(FileInputStream.java:220)
   at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
   at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
   at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:264)
   at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:306)
   at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:158)
   at java.io.InputStreamReader.read(InputStreamReader.java:167)
   at java.io.BufferedReader.fill(BufferedReader.java:136)
   at java.io.BufferedReader.read1(BufferedReader.java:187)
   at java.io.BufferedReader.read(BufferedReader.java:261)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.parseExecResult(Shell.java:715)
   at org.apache.hadoop.util.Shell.runCommand(Shell.java:524)
   at org.apache.hadoop.util.Shell.run(Shell.java:455)
   at 
 org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
   at 
 org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf(TestOsSecureRandom.java:149)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11362) Test org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf timing out

2014-12-18 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14251559#comment-14251559
 ] 

Akira AJISAKA commented on HADOOP-11362:


Thanks. You can review the patch in HADOOP-11125 and commit it. The patch looks 
the same as the patch in this issue.

 Test 
 org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf 
 timing out
 

 Key: HADOOP-11362
 URL: https://issues.apache.org/jira/browse/HADOOP-11362
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: ASF Jenkins, Java 7 & 8
Reporter: Steve Loughran
 Attachments: 
 0001-HADOOP-11362-Test-org.apache.hadoop.crypto.random.Te.patch


 The test 
 {{org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf}}
  is timing out on jenkins + Java 8.
 This is probably the exec() operation. It may be transient, it may be a java 
 8 + shell problem. 
 do we actually need this test in its present form? If a test for file handle 
 leakage is really needed, attempting to create 64K instances of the OSRandom 
 object should do it without having to resort to some printing and manual 
 debugging of logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11362) Test org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf timing out

2014-12-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11362:
---
  Resolution: Duplicate
Target Version/s:   (was: 3.0.0)
  Status: Resolved  (was: Patch Available)

 Test 
 org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf 
 timing out
 

 Key: HADOOP-11362
 URL: https://issues.apache.org/jira/browse/HADOOP-11362
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: ASF Jenkins, Java 7 & 8
Reporter: Steve Loughran
 Attachments: 
 0001-HADOOP-11362-Test-org.apache.hadoop.crypto.random.Te.patch


 The test 
 {{org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf}}
  is timing out on jenkins + Java 8.
 This is probably the exec() operation. It may be transient, it may be a java 
 8 + shell problem. 
 do we actually need this test in its present form? If a test for file handle 
 leakage is really needed, attempting to create 64K instances of the OSRandom 
 object should do it without having to resort to some printing and manual 
 debugging of logs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11318) Update the document for hadoop fs -stat

2014-12-18 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-11318:
--

Assignee: Akira AJISAKA

 Update the document for hadoop fs -stat
 ---

 Key: HADOOP-11318
 URL: https://issues.apache.org/jira/browse/HADOOP-11318
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.5.1
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie

 In FileSystemShell.apt.vm, 
 {code}
 stat
Usage: hdfs dfs -stat URI [URI ...]
Returns the stat information on the path.
 {code}
 Now {{-stat}} accepts the below formats.
  *   %b: Size of file in blocks
  *   %g: Group name of owner
  *   %n: Filename
  *   %o: Block size
  *   %r: replication
  *   %u: User name of owner
  *   %y: UTC date as "yyyy-MM-dd HH:mm:ss"
  *   %Y: Milliseconds since January 1, 1970 UTC
 They should be documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11318) Update the document for hadoop fs -stat

2014-12-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11318:
---
Attachment: HADOOP-11318-001.patch

Attaching a patch to:
* Update FileSystemShell.apt.vm
* Add the %F option to the help message and javadoc
* Replace '\n' with System.getProperty("line.separator") to work on Windows (see the sketch below).
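
A minimal sketch of that last change (illustrative only, not the actual Stat.java diff):

{code}
public class LineSeparatorSketch {
  // "\r\n" on Windows, "\n" elsewhere; a hard-coded '\n' breaks string
  // comparisons of command output on Windows.
  private static final String NEWLINE = System.getProperty("line.separator");

  static String join(String... lines) {
    StringBuilder buf = new StringBuilder();
    for (String line : lines) {
      buf.append(line).append(NEWLINE);
    }
    return buf.toString();
  }
}
{code}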

 Update the document for hadoop fs -stat
 ---

 Key: HADOOP-11318
 URL: https://issues.apache.org/jira/browse/HADOOP-11318
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.5.1
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-11318-001.patch


 In FileSystemShell.apt.vm, 
 {code}
 stat
Usage: hdfs dfs -stat URI [URI ...]
Returns the stat information on the path.
 {code}
 Now {{-stat}} accepts the below formats.
  *   %b: Size of file in blocks
  *   %g: Group name of owner
  *   %n: Filename
  *   %o: Block size
  *   %r: replication
  *   %u: User name of owner
  *   %y: UTC date as "yyyy-MM-dd HH:mm:ss"
  *   %Y: Milliseconds since January 1, 1970 UTC
 They should be documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11318) Update the document for hadoop fs -stat

2014-12-19 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11318:
---
Target Version/s: 2.7.0
  Status: Patch Available  (was: Open)

 Update the document for hadoop fs -stat
 ---

 Key: HADOOP-11318
 URL: https://issues.apache.org/jira/browse/HADOOP-11318
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.5.1
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-11318-001.patch


 In FileSystemShell.apt.vm, 
 {code}
 stat
Usage: hdfs dfs -stat URI [URI ...]
Returns the stat information on the path.
 {code}
 Now {{-stat}} accepts the below formats.
  *   %b: Size of file in blocks
  *   %g: Group name of owner
  *   %n: Filename
  *   %o: Block size
  *   %r: replication
  *   %u: User name of owner
  *   %y: UTC date as "yyyy-MM-dd HH:mm:ss"
  *   %Y: Milliseconds since January 1, 1970 UTC
 They should be documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11318) Update the document for hadoop fs -stat

2014-12-20 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11318:
---
Attachment: HADOOP-11318-002.patch

Fixed test failure.

 Update the document for hadoop fs -stat
 ---

 Key: HADOOP-11318
 URL: https://issues.apache.org/jira/browse/HADOOP-11318
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.5.1
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: newbie
 Attachments: HADOOP-11318-001.patch, HADOOP-11318-002.patch


 In FileSystemShell.apt.vm, 
 {code}
 stat
Usage: hdfs dfs -stat URI [URI ...]
Returns the stat information on the path.
 {code}
 Now {{-stat}} accepts the below formats.
  *   %b: Size of file in blocks
  *   %g: Group name of owner
  *   %n: Filename
  *   %o: Block size
  *   %r: replication
  *   %u: User name of owner
  *   %y: UTC date as "yyyy-MM-dd HH:mm:ss"
  *   %Y: Milliseconds since January 1, 1970 UTC
 They should be documented.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11414) Close of Reader should be enclosed in finally block in FileBasedIPList#readLines()

2014-12-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14254835#comment-14254835
 ] 

Akira AJISAKA commented on HADOOP-11414:


Two minor comments:
{code}
   * @throws IOException
{code}
1. The method does not throw IOException, would you remove the above line?
2. Would you remove unused imports?

 Close of Reader should be enclosed in finally block in 
 FileBasedIPList#readLines()
 --

 Key: HADOOP-11414
 URL: https://issues.apache.org/jira/browse/HADOOP-11414
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HADOOP-11414.1.patch, HADOOP-11414.2.patch, 
 HADOOP-11414.3.patch


 {code}
   Reader fileReader = new InputStreamReader(
   new FileInputStream(file), Charsets.UTF_8);
   BufferedReader bufferedReader = new BufferedReader(fileReader);
   List<String> lines = new ArrayList<String>();
   String line = null;
   while ((line = bufferedReader.readLine()) != null) {
 lines.add(line);
   }
   bufferedReader.close();
 {code}
 Since bufferedReader.readLine() may throw an IOException, the close of 
 bufferedReader should be enclosed in a finally block.
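
A minimal sketch of the shape such a fix usually takes; the class below is illustrative, and the actual patch may equally use IOUtils.closeStream() or an explicit finally block rather than try-with-resources:

{code}
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class ReadLinesSketch {
  // try-with-resources closes the reader even when readLine() throws,
  // which is exactly what enclosing close() in a finally block achieves.
  static List<String> readLines(File file) throws IOException {
    List<String> lines = new ArrayList<String>();
    try (BufferedReader bufferedReader = new BufferedReader(
        new InputStreamReader(new FileInputStream(file),
            StandardCharsets.UTF_8))) {
      String line;
      while ((line = bufferedReader.readLine()) != null) {
        lines.add(line);
      }
    }
    return lines;
  }
}
{code}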



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11414) Close of Reader should be enclosed in finally block in FileBasedIPList#readLines()

2014-12-20 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14254865#comment-14254865
 ] 

Akira AJISAKA commented on HADOOP-11414:


Thanks for the clean up, Tsuyoshi. +1 pending Jenkins.

 Close of Reader should be enclosed in finally block in 
 FileBasedIPList#readLines()
 --

 Key: HADOOP-11414
 URL: https://issues.apache.org/jira/browse/HADOOP-11414
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Ted Yu
Assignee: Tsuyoshi OZAWA
Priority: Minor
 Attachments: HADOOP-11414.1.patch, HADOOP-11414.2.patch, 
 HADOOP-11414.3.patch, HADOOP-11414.4.patch


 {code}
   Reader fileReader = new InputStreamReader(
   new FileInputStream(file), Charsets.UTF_8);
   BufferedReader bufferedReader = new BufferedReader(fileReader);
   List<String> lines = new ArrayList<String>();
   String line = null;
   while ((line = bufferedReader.readLine()) != null) {
 lines.add(line);
   }
   bufferedReader.close();
 {code}
 Since bufferedReader.readLine() may throw an IOException, the close of 
 bufferedReader should be enclosed in a finally block.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11440) Use test.build.dir instead of build.test.dir for testing in ClientBaseWithFixes

2014-12-21 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-11440:
--

 Summary: Use test.build.dir instead of build.test.dir for 
testing in ClientBaseWithFixes
 Key: HADOOP-11440
 URL: https://issues.apache.org/jira/browse/HADOOP-11440
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.6.0
Reporter: Akira AJISAKA
Priority: Minor


In ClientBaseWithFixes.java, the base directory for tests is set as follows:
{code}
static final File BASETEST =
new File(System.getProperty("build.test.dir", "build"));
{code}
There is no property "build.test.dir", so {{BASETEST}} is always "build". We 
should use "test.build.dir" instead of "build.test.dir".
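
In other words, the lookup should read the property the build actually sets, roughly as follows (a sketch, not the actual ClientBaseWithFixes diff):

{code}
import java.io.File;

public class TestBaseDirSketch {
  // "test.build.dir" is the property set by the Hadoop build; "build" is
  // kept only as the fallback when the property is not defined.
  static final File BASETEST =
      new File(System.getProperty("test.build.dir", "build"));
}
{code}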



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11448) Fix findbugs warnings in hadoop-common

2014-12-23 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-11448:
--

 Summary: Fix findbugs warnings in hadoop-common
 Key: HADOOP-11448
 URL: https://issues.apache.org/jira/browse/HADOOP-11448
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Akira AJISAKA


Now there are 3 findbugs warnings in hadoop-common package.
https://builds.apache.org/job/PreCommit-HADOOP-Build/5336//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html

{code}
Bug type RV_RETURN_VALUE_IGNORED At ActiveStandbyElector.java:[line 1067]
Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION At NetUtils.java:[line 
291]
Bug type DLS_DEAD_LOCAL_STORE At FileBasedIPList.java:[line 53]
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11448) Fix findbugs warnings in hadoop-common

2014-12-23 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14257950#comment-14257950
 ] 

Akira AJISAKA commented on HADOOP-11448:


I found that the ActiveStandbyElector and NetUtils warnings in this issue 
duplicate HADOOP-11433, so I'll narrow this issue to fixing FileBasedIPList only.

 Fix findbugs warnings in hadoop-common
 --

 Key: HADOOP-11448
 URL: https://issues.apache.org/jira/browse/HADOOP-11448
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Akira AJISAKA

 Now there are 3 findbugs warnings in hadoop-common package.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5336//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
 {code}
 Bug type RV_RETURN_VALUE_IGNORED At ActiveStandbyElector.java:[line 1067]
 Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION At 
 NetUtils.java:[line 291]
 Bug type DLS_DEAD_LOCAL_STORE At FileBasedIPList.java:[line 53]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11448) Fix findbugs warnings in FileBasedIPList

2014-12-23 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11448:
---
Description: 
Now there are 3 findbugs warnings in hadoop-common package.
https://builds.apache.org/job/PreCommit-HADOOP-Build/5336//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
{code}
Bug type RV_RETURN_VALUE_IGNORED At ActiveStandbyElector.java:[line 1067]
Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION At NetUtils.java:[line 
291]
{code}
The above two warnings will be fixed by HADOOP-11433. This issue is to fix the 
last one.
{code}
Bug type DLS_DEAD_LOCAL_STORE At FileBasedIPList.java:[line 53]
{code}

  was:
Now there are 3 findbugs warnings in hadoop-common package.
https://builds.apache.org/job/PreCommit-HADOOP-Build/5336//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html

{code}
Bug type RV_RETURN_VALUE_IGNORED At ActiveStandbyElector.java:[line 1067]
Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION At NetUtils.java:[line 
291]
Bug type DLS_DEAD_LOCAL_STORE At FileBasedIPList.java:[line 53]
{code}

   Priority: Minor  (was: Major)
 Labels: newbie  (was: )
Summary: Fix findbugs warnings in FileBasedIPList  (was: Fix findbugs 
warnings in hadoop-common)

 Fix findbugs warnings in FileBasedIPList
 

 Key: HADOOP-11448
 URL: https://issues.apache.org/jira/browse/HADOOP-11448
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Priority: Minor
  Labels: newbie

 Now there are 3 findbugs warnings in hadoop-common package.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5336//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
 {code}
 Bug type RV_RETURN_VALUE_IGNORED At ActiveStandbyElector.java:[line 1067]
 Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION At 
 NetUtils.java:[line 291]
 {code}
 The above two warnings will be fixed by HADOOP-11433. This issue is to fix 
 the last one.
 {code}
 Bug type DLS_DEAD_LOCAL_STORE At FileBasedIPList.java:[line 53]
 {code}
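
DLS_DEAD_LOCAL_STORE flags a value assigned to a local variable that is never read before being overwritten or going out of scope. A generic illustration of the pattern and its usual fix (this is not the actual FileBasedIPList code):

{code}
import java.util.ArrayList;
import java.util.List;

public class DeadStoreSketch {
  static List<String> copyOf(List<String> input) {
    // Dead store: the first ArrayList is created and immediately replaced,
    // so findbugs reports DLS_DEAD_LOCAL_STORE on the initial assignment.
    //   List<String> result = new ArrayList<String>();
    //   result = new ArrayList<String>(input);

    // Fix: drop the useless initial assignment.
    List<String> result = new ArrayList<String>(input);
    return result;
  }
}
{code}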



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11433) Fix the new findbugs warning from NetUtils&ActiveStandbyElector

2014-12-23 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14257966#comment-14257966
 ] 

Akira AJISAKA commented on HADOOP-11433:


Thanks [~xieliang007] for the report and the patch. Mostly looks good to me. 
One suggestion:
{code}
+if (!hasSetZooKeeper.await(zkSessionTimeout, TimeUnit.MILLISECONDS)) {
+  throw new NullPointerException("zk is not set yet!");
+}
{code}
Since there are no null references, would you use IllegalStateException instead 
of NullPointerException?
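
For the RV warning itself, the point is simply that the boolean returned by await() has to be acted on. A generic sketch of the suggested shape (field names and the timeout value are illustrative, not the ActiveStandbyElector internals):

{code}
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AwaitCheckSketch {
  private final CountDownLatch hasSetZooKeeper = new CountDownLatch(1);
  private final long zkSessionTimeout = 5000L;

  void waitForZooKeeper() throws InterruptedException {
    // Checking the return value clears RV_RETURN_VALUE_IGNORED and surfaces
    // the timeout here instead of failing later with an NPE.
    if (!hasSetZooKeeper.await(zkSessionTimeout, TimeUnit.MILLISECONDS)) {
      throw new IllegalStateException("zk is not set yet!");
    }
  }
}
{code}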

 Fix the new findbugs warning from NetUtils&ActiveStandbyElector
 ---

 Key: HADOOP-11433
 URL: https://issues.apache.org/jira/browse/HADOOP-11433
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha, net
Affects Versions: 2.7.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HADOOP-11433-001.txt


 see 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5309//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
  :
 {code}
 Correctness Warnings
 Code  Warning
 RV    Return value of java.util.concurrent.CountDownLatch.await(long, 
 TimeUnit) ignored in 
 org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef.process(WatchedEvent)
 Multithreaded correctness Warnings
 Code  Warning
 AT    Sequence of calls to java.util.concurrent.ConcurrentHashMap may not be 
 atomic in org.apache.hadoop.net.NetUtils.canonicalizeHost(String)
 Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION (click for details) 
 In class org.apache.hadoop.net.NetUtils
 In method org.apache.hadoop.net.NetUtils.canonicalizeHost(String)
 Type java.util.concurrent.ConcurrentHashMap
 Called method java.util.concurrent.ConcurrentHashMap.put(Object, Object)
 At NetUtils.java:[line 291]
 {code}
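
The AT warning is about a get-then-put sequence on a ConcurrentHashMap that is not atomic as a whole; the usual shape of the fix is putIfAbsent. A generic sketch only, not the actual NetUtils.canonicalizeHost code:

{code}
import java.util.concurrent.ConcurrentHashMap;

public class CanonicalizeCacheSketch {
  private static final ConcurrentHashMap<String, String> CACHE =
      new ConcurrentHashMap<String, String>();

  static String canonicalize(String host) {
    String canonical = CACHE.get(host);
    if (canonical == null) {
      String computed = host.toLowerCase();  // stand-in for the real lookup
      // putIfAbsent makes the insertion atomic; keep whichever value won
      // the race so every caller sees the same cached result.
      String existing = CACHE.putIfAbsent(host, computed);
      canonical = (existing != null) ? existing : computed;
    }
    return canonical;
  }
}
{code}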



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11433) Fix the new findbugs warning from NetUtils&ActiveStandbyElector

2014-12-23 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14258059#comment-14258059
 ] 

Akira AJISAKA commented on HADOOP-11433:


{quote}
Once hasSetZooKeeper.await(zkSessionTimeout, TimeUnit.MILLISECONDS) returns 
false, zk will be null, so the following statement 
ActiveStandbyElector.this.processWatchEvent(zk, event) will throw an NPE.
I think my fix does not change any semantics; it behaves the same as the current impl.
{quote}
Thanks for the comment. I understand the patch does not change any semantics:)
+1 for the patch.

 Fix the new findbugs warning from NetUtils&ActiveStandbyElector
 ---

 Key: HADOOP-11433
 URL: https://issues.apache.org/jira/browse/HADOOP-11433
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha, net
Affects Versions: 2.7.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HADOOP-11433-001.txt


 see 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5309//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
  :
 {code}
 Correctness Warnings
 Code  Warning
 RV    Return value of java.util.concurrent.CountDownLatch.await(long, 
 TimeUnit) ignored in 
 org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef.process(WatchedEvent)
 Multithreaded correctness Warnings
 Code  Warning
 AT    Sequence of calls to java.util.concurrent.ConcurrentHashMap may not be 
 atomic in org.apache.hadoop.net.NetUtils.canonicalizeHost(String)
 Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION (click for details) 
 In class org.apache.hadoop.net.NetUtils
 In method org.apache.hadoop.net.NetUtils.canonicalizeHost(String)
 Type java.util.concurrent.ConcurrentHashMap
 Called method java.util.concurrent.ConcurrentHashMap.put(Object, Object)
 At NetUtils.java:[line 291]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11433) Fix the new findbugs warning from NetUtils&ActiveStandbyElector

2014-12-23 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11433:
---
Target Version/s: 2.7.0
Hadoop Flags: Reviewed

 Fix the new findbugs warning from NetUtils&ActiveStandbyElector
 ---

 Key: HADOOP-11433
 URL: https://issues.apache.org/jira/browse/HADOOP-11433
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha, net
Affects Versions: 2.7.0
Reporter: Liang Xie
Assignee: Liang Xie
Priority: Minor
 Attachments: HADOOP-11433-001.txt


 see 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5309//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
  :
 {code}
 Correctness Warnings
 Code  Warning
 RV    Return value of java.util.concurrent.CountDownLatch.await(long, 
 TimeUnit) ignored in 
 org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef.process(WatchedEvent)
 Multithreaded correctness Warnings
 Code  Warning
 AT    Sequence of calls to java.util.concurrent.ConcurrentHashMap may not be 
 atomic in org.apache.hadoop.net.NetUtils.canonicalizeHost(String)
 Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION (click for details) 
 In class org.apache.hadoop.net.NetUtils
 In method org.apache.hadoop.net.NetUtils.canonicalizeHost(String)
 Type java.util.concurrent.ConcurrentHashMap
 Called method java.util.concurrent.ConcurrentHashMap.put(Object, Object)
 At NetUtils.java:[line 291]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11448) Fix findbugs warnings in FileBasedIPList

2014-12-23 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11448:
---
Assignee: Tsuyoshi OZAWA

 Fix findbugs warnings in FileBasedIPList
 

 Key: HADOOP-11448
 URL: https://issues.apache.org/jira/browse/HADOOP-11448
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Tsuyoshi OZAWA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-11448.001.patch


 Now there are 3 findbugs warnings in hadoop-common package.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5336//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
 {code}
 Bug type RV_RETURN_VALUE_IGNORED At ActiveStandbyElector.java:[line 1067]
 Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION At 
 NetUtils.java:[line 291]
 {code}
 The above two warnings will be fixed by HADOOP-11433. This issue is to fix 
 the last one.
 {code}
 Bug type DLS_DEAD_LOCAL_STORE At FileBasedIPList.java:[line 53]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11448) Fix findbugs warnings in FileBasedIPList

2014-12-23 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11448:
---
Target Version/s: 2.7.0
Hadoop Flags: Reviewed

LGTM, +1. The remaining findbugs warnings are addressed in HADOOP-11433.

 Fix findbugs warnings in FileBasedIPList
 

 Key: HADOOP-11448
 URL: https://issues.apache.org/jira/browse/HADOOP-11448
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Akira AJISAKA
Assignee: Tsuyoshi OZAWA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-11448.001.patch


 Now there are 3 findbugs warnings in hadoop-common package.
 https://builds.apache.org/job/PreCommit-HADOOP-Build/5336//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
 {code}
 Bug type RV_RETURN_VALUE_IGNORED At ActiveStandbyElector.java:[line 1067]
 Bug type AT_OPERATION_SEQUENCE_ON_CONCURRENT_ABSTRACTION At 
 NetUtils.java:[line 291]
 {code}
 The above two warnings will be fixed by HADOOP-11433. This issue is to fix 
 the last one.
 {code}
 Bug type DLS_DEAD_LOCAL_STORE At FileBasedIPList.java:[line 53]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11457) Correction to BUILDING.txt: Update FindBugs to 2.0.3

2015-01-02 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14262976#comment-14262976
 ] 

Akira AJISAKA commented on HADOOP-11457:


FindBugs 3.0.0 has been used since HADOOP-10476, so we should update 
BUILDING.txt to 3.0.0 instead of 2.0.3. We should fix the Hadoop QA message as well.

 Correction to BUILDING.txt: Update FindBugs to 2.0.3
 

 Key: HADOOP-11457
 URL: https://issues.apache.org/jira/browse/HADOOP-11457
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.6.0
Reporter: Rick Kellogg
Priority: Minor
 Attachments: HADOOP-COMMON-11457-1.patch


 Based on the Hadoop QA message shown in HADOOP-11402, it appears FindBugs 2.0.3 
 is used. The latest release is 3.0.0.
 Updating BUILDING.txt to reflect usage of v2.0.3 instead of 1.3.9.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2015-02-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14304667#comment-14304667
 ] 

Akira AJISAKA commented on HADOOP-10392:


The findbugs warnings in hadoop-rumen are not related to the patch. I'll file a 
jira for fixing the warnings.

 Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
 

 Key: HADOOP-10392
 URL: https://issues.apache.org/jira/browse/HADOOP-10392
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
 HADOOP-10392.4.patch, HADOOP-10392.4.patch, HADOOP-10392.5.patch, 
 HADOOP-10392.6.patch, HADOOP-10392.7.patch, HADOOP-10392.patch


 There're some methods calling Path.makeQualified(FileSystem), which causes 
 javac warning.
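
A short sketch of the substitution this sub-task performs; the path and configuration below are illustrative:

{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MakeQualifiedSketch {
  static Path qualify(Configuration conf) throws IOException {
    FileSystem fs = FileSystem.get(conf);
    Path p = new Path("/tmp/example");
    // Deprecated form that triggers the javac warning:
    //   Path qualified = p.makeQualified(fs);
    // Preferred form:
    return fs.makeQualified(p);
  }
}
{code}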



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2015-02-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14304686#comment-14304686
 ] 

Akira AJISAKA commented on HADOOP-10392:


Filed MAPREDUCE-6243 for findbugs warnings.

 Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
 

 Key: HADOOP-10392
 URL: https://issues.apache.org/jira/browse/HADOOP-10392
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
 HADOOP-10392.4.patch, HADOOP-10392.4.patch, HADOOP-10392.5.patch, 
 HADOOP-10392.6.patch, HADOOP-10392.7.patch, HADOOP-10392.patch


 There're some methods calling Path.makeQualified(FileSystem), which causes 
 javac warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11544) Remove unused configuration keys for tracing

2015-02-03 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14304712#comment-14304712
 ] 

Akira AJISAKA commented on HADOOP-11544:


The patch looks good to me.
One comment: hadoop.htrace.sampler looks like a no-op, so could you remove the 
parameter from core-site.xml and update the tracing document as well?

 Remove unused configuration keys for tracing
 

 Key: HADOOP-11544
 URL: https://issues.apache.org/jira/browse/HADOOP-11544
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Attachments: HADOOP-11544.001.patch


 CommonConfigurationKeys.HADOOP_TRACE_SAMPLER* are no longer used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11468) Remove Findbugs dependency from mvn package -Pdocs command

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-11468:
--

Assignee: Akira AJISAKA

 Remove Findbugs dependency from mvn package -Pdocs command
 --

 Key: HADOOP-11468
 URL: https://issues.apache.org/jira/browse/HADOOP-11468
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.6.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA

 mvn package -Pdist,docs,src -DskipTests -Dtar fails without installing 
 Findbugs.
 {code}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project 
 hadoop-common: An Ant BuildException has occured: stylesheet 
 /Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/${env.FINDBUGS_HOME}/src/xsl/default.xsl
  doesn't exist.
 [ERROR] around Ant part ...<xslt 
 style="${env.FINDBUGS_HOME}/src/xsl/default.xsl" 
 in="/Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/findbugsXml.xml"
 out="/Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/site/findbugs.html"/>...
  @ 44:245 in 
 /Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
 {code}
 Maven now downloads Findbugs automatically, so it's better to remove the 
 dependency so that users can build without installing Findbugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2015-02-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306005#comment-14306005
 ] 

Akira AJISAKA commented on HADOOP-10392:


The test failures look unrelated to the patch. Resubmitting.

 Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
 

 Key: HADOOP-10392
 URL: https://issues.apache.org/jira/browse/HADOOP-10392
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
 HADOOP-10392.4.patch, HADOOP-10392.4.patch, HADOOP-10392.5.patch, 
 HADOOP-10392.6.patch, HADOOP-10392.7.patch, HADOOP-10392.7.patch, 
 HADOOP-10392.patch


 There're some methods calling Path.makeQualified(FileSystem), which causes 
 javac warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10392:
---
Attachment: HADOOP-10392.7.patch

 Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
 

 Key: HADOOP-10392
 URL: https://issues.apache.org/jira/browse/HADOOP-10392
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
 HADOOP-10392.4.patch, HADOOP-10392.4.patch, HADOOP-10392.5.patch, 
 HADOOP-10392.6.patch, HADOOP-10392.7.patch, HADOOP-10392.7.patch, 
 HADOOP-10392.patch


 There're some methods calling Path.makeQualified(FileSystem), which causes 
 javac warning.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11468) Remove Findbugs dependency from mvn package -Pdocs command

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11468:
---
  Labels: build  (was: )
Target Version/s: 2.7.0
  Status: Patch Available  (was: Open)

 Remove Findbugs dependency from mvn package -Pdocs command
 --

 Key: HADOOP-11468
 URL: https://issues.apache.org/jira/browse/HADOOP-11468
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.6.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
  Labels: build
 Attachments: HADOOP-11468-001.patch


 mvn package -Pdist,docs,src -DskipTests -Dtar fails without installing 
 Findbugs.
 {code}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project 
 hadoop-common: An Ant BuildException has occured: stylesheet 
 /Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/${env.FINDBUGS_HOME}/src/xsl/default.xsl
  doesn't exist.
 [ERROR] around Ant part ...<xslt 
 style="${env.FINDBUGS_HOME}/src/xsl/default.xsl" 
 in="/Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/findbugsXml.xml"
 out="/Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/site/findbugs.html"/>...
  @ 44:245 in 
 /Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
 {code}
 Maven now downloads Findbugs automatically, so it's better to remove the 
 dependency so that users can build without installing Findbugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11468) Remove Findbugs dependency from mvn package -Pdocs command

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11468:
---
Attachment: HADOOP-11468-001.patch

Attaching a patch.
* Copied findbugs.xsl from findbugs and fixed indents.
* Added a parameter {{hadoop.project.dir}} to pom.xml to specify the xsl file.
I verified that {{mvn package -Pdist,docs -Dtar -DskipTests}} worked without 
installing Findbugs.

 Remove Findbugs dependency from mvn package -Pdocs command
 --

 Key: HADOOP-11468
 URL: https://issues.apache.org/jira/browse/HADOOP-11468
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.6.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
 Attachments: HADOOP-11468-001.patch


 mvn package -Pdist,docs,src -DskipTests -Dtar fails without installing 
 Findbugs.
 {code}
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-antrun-plugin:1.7:run (site) on project 
 hadoop-common: An Ant BuildException has occured: stylesheet 
 /Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/${env.FINDBUGS_HOME}/src/xsl/default.xsl
  doesn't exist.
 [ERROR] around Ant part ...<xslt 
 style="${env.FINDBUGS_HOME}/src/xsl/default.xsl" 
 in="/Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/findbugsXml.xml"
 out="/Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/site/findbugs.html"/>...
  @ 44:245 in 
 /Users/aajisaka/git/hadoop/hadoop-common-project/hadoop-common/target/antrun/build-main.xml
 {code}
 Maven now downloads Findbugs automatically, so it's better to remove the 
 dependency so that users can build without installing Findbugs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11549) flaky test detection tool failed to handle special control characters in test result

2015-02-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306100#comment-14306100
 ] 

Akira AJISAKA commented on HADOOP-11549:


Committed this to trunk and branch-2. Thanks [~yzhangal] again!

 flaky test detection tool failed to handle special control characters in test 
 result
 

 Key: HADOOP-11549
 URL: https://issues.apache.org/jira/browse/HADOOP-11549
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11549.001.patch


 When running the tool from HADOOP-11045 on latest Hadoop-hdfs-trunk job, I'm 
 seeing a problem here:
 {code}
 [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
 Hadoop-hdfs-trunk 
 Recently FAILED builds in url: 
 https://builds.apache.org/job/Hadoop-hdfs-trunk
 THERE ARE 5 builds (out of 6) that have failed tests in the past 14 days, 
 as listed below:
 ===https://builds.apache.org/job/Hadoop-hdfs-trunk/2026/testReport 
 (2015-02-04 03:30:00)
 Could not open testReport, check 
 https://builds.apache.org/job/Hadoop-hdfs-trunk/2026/Console for why it was 
 reported failed
 ...
 {code}
 I saw that the testReport can actually be opened from browser. After looking, 
 I found that HDFS-7287 fix added the following test code:
 {code}
 //Create a directory whose name should be escaped in XML
 Path invalidXMLDir = new Path(/dirContainingInvalidXMLChar\uhere);
 hdfs.mkdirs(invalidXMLDir);
 ...
 {code}
 And the output from this code caused the tool to choke. 
 I found a solution here and I'm attaching a patch.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11549) flaky test detection tool failed to handle special control characters in test result

2015-02-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11549?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306026#comment-14306026
 ] 

Akira AJISAKA commented on HADOOP-11549:


Thanks [~yzhangal] for the report and the patch. +1.
* Verified the patch worked well with Python 2.7.6.
* After git apply command (not patch command), the permission is changed to 755.

 flaky test detection tool failed to handle special control characters in test 
 result
 

 Key: HADOOP-11549
 URL: https://issues.apache.org/jira/browse/HADOOP-11549
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11549.001.patch


 When running the tool from HADOOP-11045 on latest Hadoop-hdfs-trunk job, I'm 
 seeing a problem here:
 {code}
 [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
 Hadoop-hdfs-trunk 
 Recently FAILED builds in url: 
 https://builds.apache.org/job/Hadoop-hdfs-trunk
 THERE ARE 5 builds (out of 6) that have failed tests in the past 14 days, 
 as listed below:
 ===https://builds.apache.org/job/Hadoop-hdfs-trunk/2026/testReport 
 (2015-02-04 03:30:00)
 Could not open testReport, check 
 https://builds.apache.org/job/Hadoop-hdfs-trunk/2026/Console for why it was 
 reported failed
 ...
 {code}
 I saw that the testReport can actually be opened from browser. After looking, 
 I found that HDFS-7287 fix added the following test code:
 {code}
 //Create a directory whose name should be escaped in XML
 Path invalidXMLDir = new Path(/dirContainingInvalidXMLChar\uhere);
 hdfs.mkdirs(invalidXMLDir);
 ...
 {code}
 And the output from this code caused the tool to choke. 
 I found a solution here and I'm attaching a patch.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11549) flaky test detection tool failed to handle special control characters in test result

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11549:
---
  Resolution: Fixed
   Fix Version/s: 2.7.0
Target Version/s: 2.7.0
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 flaky test detection tool failed to handle special control characters in test 
 result
 

 Key: HADOOP-11549
 URL: https://issues.apache.org/jira/browse/HADOOP-11549
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.7.0
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Fix For: 2.7.0

 Attachments: HADOOP-11549.001.patch


 When running the tool from HADOOP-11045 on latest Hadoop-hdfs-trunk job, I'm 
 seeing a problem here:
 {code}
 [yzhang@localhost jenkinsftf]$ ./determine-flaky-tests-hadoop.py -j 
 Hadoop-hdfs-trunk 
 Recently FAILED builds in url: 
 https://builds.apache.org/job/Hadoop-hdfs-trunk
 THERE ARE 5 builds (out of 6) that have failed tests in the past 14 days, 
 as listed below:
 ===https://builds.apache.org/job/Hadoop-hdfs-trunk/2026/testReport 
 (2015-02-04 03:30:00)
 Could not open testReport, check 
 https://builds.apache.org/job/Hadoop-hdfs-trunk/2026/Console for why it was 
 reported failed
 ...
 {code}
 I saw that the testReport can actually be opened from browser. After looking, 
 I found that HDFS-7287 fix added the following test code:
 {code}
 //Create a directory whose name should be escaped in XML
 Path invalidXMLDir = new Path(/dirContainingInvalidXMLChar\uhere);
 hdfs.mkdirs(invalidXMLDir);
 ...
 {code}
 And the output from this code caused the tool to choke. 
 I found a solution here and I'm attaching a patch.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9340) Add Last-Access Time Format to FsShell Stat Command

2015-02-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306305#comment-14306305
 ] 

Akira AJISAKA commented on HADOOP-9340:
---

Thanks [~njw45] for the patch! Here are two comments:
# The patch needs rebasing.
# Would you update the document (FileSystemShell.apt.vm) also?


 Add Last-Access Time Format to FsShell Stat Command
 ---

 Key: HADOOP-9340
 URL: https://issues.apache.org/jira/browse/HADOOP-9340
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.0.3-alpha
Reporter: Nick White
Assignee: Nick White
 Attachments: HDFS-4537.1.patch, HDFS-4537.patch


 The stat command should have a format specifier to display the last access 
 time of a file. I've chosen %a / %A in the attached patch as that's what the 
 unix 'find' command uses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9340) Add Last-Access Time Format to FsShell Stat Command

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9340:
--
Status: Open  (was: Patch Available)

 Add Last-Access Time Format to FsShell Stat Command
 ---

 Key: HADOOP-9340
 URL: https://issues.apache.org/jira/browse/HADOOP-9340
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.0.3-alpha
Reporter: Nick White
Assignee: Nick White
 Attachments: HDFS-4537.1.patch, HDFS-4537.patch


 The stat command should have a format specifier to display the last access 
 time of a file. I've chosen %a / %A in the attached patch as that's what the 
 unix 'find' command uses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9340) Add Last-Access Time Format to FsShell Stat Command

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9340:
--
Target Version/s: 2.7.0  (was: 2.1.0-beta)

 Add Last-Access Time Format to FsShell Stat Command
 ---

 Key: HADOOP-9340
 URL: https://issues.apache.org/jira/browse/HADOOP-9340
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 2.0.3-alpha
Reporter: Nick White
Assignee: Nick White
 Attachments: HDFS-4537.1.patch, HDFS-4537.patch


 The stat command should have a format specifier to display the last access 
 time of a file. I've chosen %a / %A in the attached patch as that's what the 
 unix 'find' command uses.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-8834) Hadoop examples when run without an argument, gives ERROR instead of just usage info

2015-02-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306327#comment-14306327
 ] 

Akira AJISAKA commented on HADOOP-8834:
---

I'm thinking we should suppress the error message only when no arguments are set. 
The following is fine for me.
{code}
if (otherArgs.size() != 2) {
  if (!otherArgs.isEmpty()) {
    // print error message
  }
  return printUsage();
}
{code}

 Hadoop examples when run without an argument, gives ERROR instead of just 
 usage info
 

 Key: HADOOP-8834
 URL: https://issues.apache.org/jira/browse/HADOOP-8834
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0, trunk-win
Reporter: Robert Justice
Assignee: Abhishek Kapoor
Priority: Minor
 Attachments: HADOOP-8834.patch, HADOOP-8834.patch


 The Hadoop sort example should not print an ERROR and should only display the 
 usage text when run with no parameters. 
 {code}
 $ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort
 ERROR: Wrong number of parameters: 0 instead of 2.
 sort [-m <maps>] [-r <reduces>] [-inFormat <input format class>]
      [-outFormat <output format class>] [-outKey <output key class>]
      [-outValue <output value class>]
      [-totalOrder <pcnt> <num samples> <max splits>] <input> <output>
 Generic options supported are
 -conf <configuration file>     specify an application configuration file
 -D <property=value>            use value for given property
 -fs <local|namenode:port>      specify a namenode
 -jt <local|jobtracker:port>    specify a job tracker
 -files <comma separated list of files>    specify comma separated files to be 
 copied to the map reduce cluster
 -libjars <comma separated list of jars>    specify comma separated jar files 
 to include in the classpath.
 -archives <comma separated list of archives>    specify comma separated 
 archives to be unarchived on the compute machines.
 The general command line syntax is
 bin/hadoop command [genericOptions] [commandOptions]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9285) findbugs 2 - bad practice warnings fix.

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9285:
--
Resolution: Duplicate
Status: Resolved  (was: Patch Available)

 findbugs 2 - bad practice warnings fix.
 ---

 Key: HADOOP-9285
 URL: https://issues.apache.org/jira/browse/HADOOP-9285
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Surenkumar Nihalani
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9285.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9285) findbugs 2 - bad practice warnings fix.

2015-02-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306393#comment-14306393
 ] 

Akira AJISAKA commented on HADOOP-9285:
---

Thanks [~snihalani] for the patch. Unfortunately, this issue is a duplicate of 
HADOOP-10482 and HADOOP-11386, which have already fixed these warnings. There 
are now no findbugs warnings in the hadoop-common module. Closing.

 findbugs 2 - bad practice warnings fix.
 ---

 Key: HADOOP-9285
 URL: https://issues.apache.org/jira/browse/HADOOP-9285
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Surenkumar Nihalani
Assignee: Surenkumar Nihalani
 Attachments: HADOOP-9285.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-7846) download pages for mapred/hdfs don't match hadoop-common

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7846?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-7846:
--
Resolution: Invalid
Status: Resolved  (was: Patch Available)

 download pages for mapred/hdfs don't match hadoop-common
 

 Key: HADOOP-7846
 URL: https://issues.apache.org/jira/browse/HADOOP-7846
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Joe Crobak
Assignee: Joe Crobak
Priority: Minor
 Attachments: HADOOP-7846.patch


 http://hadoop.apache.org/hdfs/releases.html
 http://hadoop.apache.org/mapreduce/releases.html 
 don't match
 http://hadoop.apache.org/common/releases.html
 the common/releases.html page has a nice description of the current status 
 (stable, alpha, beta) of each release, and without it users get confused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-7846) download pages for mapred/hdfs don't match hadoop-common

2015-02-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7846?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14306352#comment-14306352
 ] 

Akira AJISAKA commented on HADOOP-7846:
---

All three URLs listed in the description now redirect to 
http://hadoop.apache.org/releases.html , so there is no longer any inconsistency. Closing.

 download pages for mapred/hdfs don't match hadoop-common
 

 Key: HADOOP-7846
 URL: https://issues.apache.org/jira/browse/HADOOP-7846
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Joe Crobak
Assignee: Joe Crobak
Priority: Minor
 Attachments: HADOOP-7846.patch


 http://hadoop.apache.org/hdfs/releases.html
 http://hadoop.apache.org/mapreduce/releases.html 
 don't match
 http://hadoop.apache.org/common/releases.html
 the common/releases.html page has a nice description of the current status 
 (stable, alpha, beta) of each release, and without it users get confused.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-8594) Fix issues identified by findbugs 2

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-8594.
---
Resolution: Fixed

Closing this issue as all sub-tasks have been closed.
Hadoop now uses findbugs 3.0.0, and the new warnings are being fixed in HADOOP-10477.

 Fix issues identified by findbugs 2
 ---

 Key: HADOOP-8594
 URL: https://issues.apache.org/jira/browse/HADOOP-8594
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
 Attachments: HADOOP-8594.patch, findbugs-2-html-reports.tar.gz, 
 findbugs.html, findbugs.out.17.html, findbugs.out.18.html, 
 findbugs.out.19.html, findbugs.out.20.html, findbugs.out.21.html, 
 findbugs.out.22.html, findbugs.out.23.html, findbugs.out.27.html, 
 findbugs.out.28.html, findbugs.out.29.html


 Harsh recently ran findbugs 2 (instead of 1.3.9 which is what jenkins runs) 
 and it showed thousands of warnings (they've made a lot of progress in 
 findbugs releases). We should upgrade to findbugs 2 and fix these. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9087) Queue size metric for metric sinks isn't actually maintained

2015-02-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9087:
--
Attachment: HADOOP-9087-002.patch

Attaching a v2 patch:
* Rebased on the latest trunk.
* Reduced a nested try-catch block.
* Updated the Metrics documentation.
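
For context, here is a minimal standalone sketch (not the patch itself; the class and field names are assumptions) of the general idea of keeping a queue-size gauge in sync with a queue so it no longer stays at zero:
{code}
// Standalone sketch of keeping a queue-size gauge in step with a queue.
// GaugedQueue and qsize are illustrative names, not the patched classes.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class GaugedQueue<T> {
  private final BlockingQueue<T> queue;
  private volatile int qsize; // stands in for a metrics gauge

  public GaugedQueue(int capacity) {
    this.queue = new ArrayBlockingQueue<>(capacity);
  }

  public boolean offer(T item) {
    boolean added = queue.offer(item);
    if (added) {
      qsize = queue.size(); // refresh the gauge after every successful enqueue
    }
    return added;
  }

  public T take() throws InterruptedException {
    T item = queue.take();
    qsize = queue.size(); // and again after every dequeue
    return item;
  }

  public int queueSizeGauge() {
    return qsize;
  }

  public static void main(String[] args) throws InterruptedException {
    GaugedQueue<String> q = new GaugedQueue<>(10);
    q.offer("metrics record");
    System.out.println("gauge after offer: " + q.queueSizeGauge()); // 1
    q.take();
    System.out.println("gauge after take: " + q.queueSizeGauge());  // 0
  }
}
{code}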

 Queue size metric for metric sinks isn't actually maintained
 

 Key: HADOOP-9087
 URL: https://issues.apache.org/jira/browse/HADOOP-9087
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.6.0
Reporter: Mostafa Elhemali
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-9087-002.patch, HADOOP-9087.patch


 Came across this while looking at the code for the metrics system: the qsize 
 gauge in the MetricsSinkAdapter is not updated/maintained so it will always 
 be zero (there's a compiler warning about it actually).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9087) Queue size metric for metric sinks isn't actually maintained

2015-02-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-9087:
--
Target Version/s: 2.7.0
  Status: Patch Available  (was: Open)

 Queue size metric for metric sinks isn't actually maintained
 

 Key: HADOOP-9087
 URL: https://issues.apache.org/jira/browse/HADOOP-9087
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.6.0
Reporter: Mostafa Elhemali
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-9087-002.patch, HADOOP-9087.patch


 Came across this while looking at the code for the metrics system: the qsize 
 gauge in the MetricsSinkAdapter is not updated/maintained so it will always 
 be zero (there's a compiler warning about it actually).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9087) Queue size metric for metric sinks isn't actually maintained

2015-02-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14310036#comment-14310036
 ] 

Akira AJISAKA commented on HADOOP-9087:
---

Thank you!

 Queue size metric for metric sinks isn't actually maintained
 

 Key: HADOOP-9087
 URL: https://issues.apache.org/jira/browse/HADOOP-9087
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.6.0
Reporter: Mostafa Elhemali
Priority: Minor
 Attachments: HADOOP-9087.patch


 Came across this while looking at the code for the metrics system: the qsize 
 gauge in the MetricsSinkAdapter is not updated/maintained so it will always 
 be zero (there's a compiler warning about it actually).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-9087) Queue size metric for metric sinks isn't actually maintained

2015-02-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA reassigned HADOOP-9087:
-

Assignee: Akira AJISAKA

 Queue size metric for metric sinks isn't actually maintained
 

 Key: HADOOP-9087
 URL: https://issues.apache.org/jira/browse/HADOOP-9087
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.6.0
Reporter: Mostafa Elhemali
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-9087.patch


 Came across this while looking at the code for the metrics system: the qsize 
 gauge in the MetricsSinkAdapter is not updated/maintained so it will always 
 be zero (there's a compiler warning about it actually).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9087) Queue size metric for metric sinks isn't actually maintained

2015-02-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14310037#comment-14310037
 ] 

Akira AJISAKA commented on HADOOP-9087:
---

Thank you!

 Queue size metric for metric sinks isn't actually maintained
 

 Key: HADOOP-9087
 URL: https://issues.apache.org/jira/browse/HADOOP-9087
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.6.0
Reporter: Mostafa Elhemali
Priority: Minor
 Attachments: HADOOP-9087.patch


 Came across this while looking at the code for the metrics system: the qsize 
 gauge in the MetricsSinkAdapter is not updated/maintained so it will always 
 be zero (there's a compiler warning about it actually).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11551) Let nightly jenkins jobs run the tool of HADOOP-11045 and include the result in the job report

2015-02-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11551?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14310389#comment-14310389
 ] 

Akira AJISAKA commented on HADOOP-11551:


I'm also +1 for this change. I think it would be ideal to append the tool's 
result to the end of the output. The output, which is sent to the dev@ mailing 
list, currently consists of:
# last 60 lines of the console
# failed tests (if any)

It would be nice if the output became:
# last 60 lines of the console
# failed tests (if any)
# flaky tests (if any)

We probably need to ask the Jenkins admins to change this behavior.

 Let nightly jenkins jobs run the tool of HADOOP-11045 and include the result 
 in the job report
 --

 Key: HADOOP-11551
 URL: https://issues.apache.org/jira/browse/HADOOP-11551
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, tools
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang

 This jira proposes running the tool created in HADOOP-11045 at the end of 
 Jenkins test jobs - I am currently thinking about the trunk jobs - and 
 including the results in the job report. This way, when we look at a test 
 failure, we can tell the failure pattern and whether the failed test is 
 likely a flaky test or not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9087) Queue size metric for metric sinks isn't actually maintained

2015-02-06 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14310380#comment-14310380
 ] 

Akira AJISAKA commented on HADOOP-9087:
---

The test failure is not related to the patch. I couldn't reproduce the error 
locally.

 Queue size metric for metric sinks isn't actually maintained
 

 Key: HADOOP-9087
 URL: https://issues.apache.org/jira/browse/HADOOP-9087
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.6.0
Reporter: Mostafa Elhemali
Assignee: Akira AJISAKA
Priority: Minor
 Attachments: HADOOP-9087-002.patch, HADOOP-9087.patch


 Came across this while looking at the code for the metrics system: the qsize 
 gauge in the MetricsSinkAdapter is not updated/maintained so it will always 
 be zero (there's a compiler warning about it actually).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11544) Remove unused configuration keys for tracing

2015-02-04 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-11544:
---
Affects Version/s: 2.7.0

 Remove unused configuration keys for tracing
 

 Key: HADOOP-11544
 URL: https://issues.apache.org/jira/browse/HADOOP-11544
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.7.0
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HADOOP-11544.001.patch, HADOOP-11544.002.patch


 CommonConfigurationKeys.HADOOP_TRACE_SAMPLER* are no longer used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11544) Remove unused configuration keys for tracing

2015-02-04 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11544?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14305009#comment-14305009
 ] 

Akira AJISAKA commented on HADOOP-11544:


Thanks [~iwasakims] for updating the patch. Makes sense, +1.

 Remove unused configuration keys for tracing
 

 Key: HADOOP-11544
 URL: https://issues.apache.org/jira/browse/HADOOP-11544
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Trivial
 Attachments: HADOOP-11544.001.patch, HADOOP-11544.002.patch


 CommonConfigurationKeys.HADOOP_TRACE_SAMPLER* are no longer used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

