[jira] [Created] (HDFS-7751) Fix TestHDFSCLI

2015-02-06 Thread Xiaoyu Yao (JIRA)
Xiaoyu Yao created HDFS-7751:


 Summary: Fix TestHDFSCLI
 Key: HDFS-7751
 URL: https://issues.apache.org/jira/browse/HDFS-7751
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Xiaoyu Yao
Assignee: Xiaoyu Yao


Patch v4 in HDFS-7720 contained the fix, but it was missed from the commit. This 
JIRA is opened to fix TestHDFSCLI on trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-7750) Fix findbugs warnings in hdfs-bkjournal module

2015-02-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-7750.
-
Resolution: Duplicate

> Fix findbugs warnings in hdfs-bkjournal module
> --
>
> Key: HDFS-7750
> URL: https://issues.apache.org/jira/browse/HDFS-7750
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 2.7.0
>Reporter: Akira AJISAKA
>Assignee: Rakesh R
>  Labels: newbie
>
> There are 3 findbugs warnings in the hdfs-bkjournal module. We should fix them.
> {code}
> Found reliance on default encoding: String.getBytes()
> At BookKeeperJournalManager.java:[line 386]
> Found reliance on default encoding: String.getBytes()
> At BookKeeperJournalManager.java:[line 524]
> Found reliance on default encoding: String.getBytes()
> At BookKeeperJournalManager.java:[line 733]
> {code}





[jira] [Created] (HDFS-7750) Fix findbugs warnings in hdfs-bkjournal module

2015-02-06 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-7750:
---

 Summary: Fix findbugs warnings in hdfs-bkjournal module
 Key: HDFS-7750
 URL: https://issues.apache.org/jira/browse/HDFS-7750
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Akira AJISAKA


There are 3 findbugs warnings in the hdfs-bkjournal module. We should fix them.
{code}
Found reliance on default encoding: String.getBytes()
At BookKeeperJournalManager.java:[line 386]
Found reliance on default encoding: String.getBytes()
At BookKeeperJournalManager.java:[line 524]
Found reliance on default encoding: String.getBytes()
At BookKeeperJournalManager.java:[line 733]
{code}
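The warnings above are the standard findbugs DM_DEFAULT_ENCODING complaint, and the usual fix is to pass an explicit charset to {{String.getBytes()}}. A minimal sketch of the pattern (the class and method names here are illustrative, not the actual BookKeeperJournalManager code):

```java
import java.nio.charset.StandardCharsets;

public class CharsetFix {
    // Relies on the JVM default encoding -- this is what findbugs flags.
    static byte[] flagged(String s) {
        return s.getBytes();
    }

    // Explicit charset: deterministic across platforms, no findbugs warning.
    static byte[] fixed(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        byte[] b = fixed("journal");
        System.out.println(b.length); // 7 bytes for this ASCII string in UTF-8
    }
}
```

Passing the charset also avoids surprises when a node's default locale encoding differs from the one the data was written with.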





[jira] [Resolved] (HDFS-2628) Remove Mapred filenames from HDFS findbugsExcludeFile.xml file

2015-02-06 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-2628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HDFS-2628.
-
Resolution: Duplicate

This issue was fixed by HDFS-6025. Closing.

> Remove Mapred filenames from HDFS findbugsExcludeFile.xml file
> --
>
> Key: HDFS-2628
> URL: https://issues.apache.org/jira/browse/HDFS-2628
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: test
>Reporter: Uma Maheswara Rao G
>Priority: Minor
>
> MapReduce filenames are present in 
> hadoop-hdfs-project\hadoop-hdfs\dev-support\findbugsExcludeFile.xml.
> Is this intentional? I think we should remove them from HDFS.
> Example:
> {code}
> (XML Match entries stripped by the mail archive)
> {code}





[jira] [Created] (HDFS-7749) Add striped block support in INodeFile

2015-02-06 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-7749:
---

 Summary: Add striped block support in INodeFile
 Key: HDFS-7749
 URL: https://issues.apache.org/jira/browse/HDFS-7749
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Jing Zhao
Assignee: Jing Zhao


This jira plans to add a new INodeFile feature to store striped block 
information for files that are erasure coded.





[jira] [Created] (HDFS-7748) Separate ECN flags from the Status in the DataTransferPipelineAck

2015-02-06 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-7748:


 Summary: Separate ECN flags from the Status in the 
DataTransferPipelineAck
 Key: HDFS-7748
 URL: https://issues.apache.org/jira/browse/HDFS-7748
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Haohui Mai


Following the discussions on HDFS-7270, old clients might fail to talk to newer 
servers when ECN is turned on. This jira proposes to move the ECN flags into a 
separate protobuf field to keep the ack compatible across versions.
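At the protobuf level, the proposed change would look roughly like the sketch below. This is a guess from the description only: the field names and numbers are assumptions, and the real PipelineAckProto layout lives in datatransfer.proto and may differ.

```protobuf
message PipelineAckProto {
  required sint64 seqno = 1;
  repeated Status reply = 2;                 // pure per-datanode Status, no ECN bits mixed in
  optional uint64 downstreamAckTimeNanos = 3 [default = 0];
  repeated uint32 flag = 4 [packed = true];  // hypothetical: ECN flags carried separately
}
```

Because protobuf readers silently ignore unknown fields, old clients would skip the new flag field entirely, which is what makes the ack wire-compatible in both directions.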





[jira] [Created] (HDFS-7747) Add a truncate test with cached data

2015-02-06 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-7747:
-

 Summary: Add a truncate test with cached data 
 Key: HDFS-7747
 URL: https://issues.apache.org/jira/browse/HDFS-7747
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo Nicholas Sze
Priority: Minor


Let's add a truncate test with cached data to verify that a new client won't 
read beyond the truncated length when reading from the cached data.





[jira] [Created] (HDFS-7746) Add a test randomly mixing append, truncate and snapshot

2015-02-06 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-7746:
-

 Summary: Add a test randomly mixing append, truncate and snapshot
 Key: HDFS-7746
 URL: https://issues.apache.org/jira/browse/HDFS-7746
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze
Priority: Minor


TestFileTruncate.testSnapshotWithAppendTruncate already does a good job of 
covering many test cases.  Let's add a randomized test mixing many append, 
truncate and snapshot operations.





[jira] [Created] (HDFS-7745) HDFS should have its own daemon command and not rely on the one in common

2015-02-06 Thread Sanjay Radia (JIRA)
Sanjay Radia created HDFS-7745:
--

 Summary: HDFS should have its own daemon command  and not rely on 
the one in common
 Key: HDFS-7745
 URL: https://issues.apache.org/jira/browse/HDFS-7745
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Sanjay Radia


HDFS should have its own daemon command and not rely on the one in common. YARN 
split out its own daemon command during the project split. Note that the hdfs 
command already has a --daemon flag, so the daemon script would merely be a 
wrapper.





[jira] [Created] (HDFS-7744) Fix potential NPE in DFSInputStream after setDropBehind or setReadahead is called

2015-02-06 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HDFS-7744:
--

 Summary: Fix potential NPE in DFSInputStream after setDropBehind 
or setReadahead is called
 Key: HDFS-7744
 URL: https://issues.apache.org/jira/browse/HDFS-7744
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: dfsclient
Affects Versions: 2.6.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Fix a potential NPE in DFSInputStream after setDropBehind or setReadahead is 
called.  These functions clear the {{blockReader}} but don't set {{blockEnd}} 
to -1, which could lead to {{DFSInputStream#seek}} attempting to dereference 
{{blockReader}} even though it is {{null}}.
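The failure mode described above can be sketched in isolation: a reader is cleared while the cached block boundary stays stale, so the seek fast path dereferences null. The field and method names below mirror the description but are illustrative stand-ins, not the actual DFSInputStream code.

```java
public class NpeSketch {
    static class Stream {
        Object blockReader = new Object(); // stands in for the per-block reader
        long blockEnd = 1024;              // cached end offset of the open block
        long pos = 0;

        // Mirrors the described setDropBehind/setReadahead behavior:
        // the reader is cleared but the cached block boundary is left stale.
        void clearReaderBuggy() { blockReader = null; }

        // The fix: also invalidate blockEnd so seek takes the slow path.
        void clearReaderFixed() { blockReader = null; blockEnd = -1; }

        void seek(long target) {
            if (target <= blockEnd) {
                // Fast path assumes an open reader for the current block.
                blockReader.toString(); // NPE here if the reader was cleared
                pos = target;
            } else {
                pos = target;           // slow path: a fresh reader would be opened
            }
        }
    }

    public static void main(String[] args) {
        Stream s = new Stream();
        s.clearReaderBuggy();
        try {
            s.seek(10);
        } catch (NullPointerException e) {
            System.out.println("stale blockEnd -> NPE on seek");
        }
        Stream t = new Stream();
        t.clearReaderFixed();
        t.seek(10); // blockEnd is -1, so the slow path is taken; no NPE
    }
}
```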





[jira] [Created] (HDFS-7743) Code cleanup of BlockInfo and rename BlockInfo to BlockReplicationInfo

2015-02-06 Thread Jing Zhao (JIRA)
Jing Zhao created HDFS-7743:
---

 Summary: Code cleanup of BlockInfo and rename BlockInfo to 
BlockReplicationInfo
 Key: HDFS-7743
 URL: https://issues.apache.org/jira/browse/HDFS-7743
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: namenode
Reporter: Jing Zhao
Assignee: Jing Zhao
Priority: Minor


In the erasure coding work (HDFS-7285), we plan to extend the class BlockInfo 
into two subclasses: BlockReplicationInfo and BlockGroupInfo (HDFS-7716). To 
ease syncing the HDFS-EC branch with trunk, this jira plans to rename the 
current BlockInfo to BlockReplicationInfo in trunk.

In the meantime, we can also use this chance to do some minor code cleanup, 
e.g., removing the unnecessary overridden {{hashCode}} and {{equals}} methods, 
since they are identical to those of the superclass {{Block}}.
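The cleanup mentioned in the last paragraph is the usual "delete the redundant override" refactor; a minimal illustration with stand-in classes (these are not the real {{Block}}/{{BlockInfo}} implementations):

```java
public class OverrideCleanup {
    static class Block {
        final long id;
        Block(long id) { this.id = id; }
        @Override public boolean equals(Object o) {
            return o instanceof Block && ((Block) o).id == id;
        }
        @Override public int hashCode() { return Long.hashCode(id); }
    }

    // No equals/hashCode of its own: the subclass inherits the superclass
    // behavior, so overrides that merely duplicate the same logic (or just
    // delegate to super) can be deleted with no semantic change.
    static class BlockInfo extends Block {
        BlockInfo(long id) { super(id); }
    }
}
```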





[jira] [Created] (HDFS-7742) favoring decommissioning node for replication can cause a block to stay underreplicated for long periods

2015-02-06 Thread Nathan Roberts (JIRA)
Nathan Roberts created HDFS-7742:


 Summary: favoring decommissioning node for replication can cause a 
block to stay underreplicated for long periods
 Key: HDFS-7742
 URL: https://issues.apache.org/jira/browse/HDFS-7742
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.6.0
Reporter: Nathan Roberts
Assignee: Nathan Roberts


When choosing a source node to replicate a block from, a decommissioning node 
is favored. The reason for the favoritism is that decommissioning nodes aren't 
servicing any writes, so in theory they are less loaded.

However, the same selection algorithm also tries to make sure it doesn't get 
"stuck" on any particular node:
{noformat}
  // switch to a different node randomly
  // this to prevent from deterministically selecting the same node even
  // if the node failed to replicate the block on previous iterations
{noformat}
Unfortunately, the decommissioning check comes before this randomness, so the 
algorithm can get stuck trying to replicate from a decommissioning node. We've 
seen this in practice, where a decommissioning datanode failed to replicate a 
block for many days even though other viable replicas of the block were available.

Given that we limit the number of streams we'll assign to a given node (default 
soft limit of 2, hard limit of 4), it doesn't seem like favoring a 
decommissioning node has significant benefit: when there is significant 
replication work to do, we'll quickly hit the stream limit of the 
decommissioning nodes and use other nodes in the cluster anyway; when there 
isn't significant replication work, in theory we have plenty of replication 
bandwidth available, so choosing a decommissioning node isn't much of a win.

I see two choices:
1) Change the algorithm to still favor decommissioning nodes but with some 
level of randomness that will avoid always selecting the decommissioning node
2) Remove the favoritism for decommissioning nodes

I prefer #2. It simplifies the algorithm, and given the other throttles we have 
in place, I'm not sure there is a significant benefit to selecting 
decommissioning nodes. 
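Option #1 above could look roughly like the following: keep the favoritism but make it probabilistic, so a failing decommissioning node cannot be chosen deterministically forever. The node type, stream limit, and the 75% bias are all made up for illustration; none of this is the actual BlockManager source-selection code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class SourcePick {
    static final int SOFT_STREAM_LIMIT = 2; // default soft limit noted above

    static class Node {
        final String name;
        final boolean decommissioning;
        final int activeStreams;
        Node(String name, boolean decommissioning, int activeStreams) {
            this.name = name;
            this.decommissioning = decommissioning;
            this.activeStreams = activeStreams;
        }
    }

    // Favor decommissioning replicas, but fall back to any viable replica
    // some of the time so one node is never the deterministic choice.
    static Node chooseSource(List<Node> replicas, Random rng) {
        List<Node> viable = new ArrayList<>();
        List<Node> decom = new ArrayList<>();
        for (Node n : replicas) {
            if (n.activeStreams >= SOFT_STREAM_LIMIT) continue; // over stream limit
            viable.add(n);
            if (n.decommissioning) decom.add(n);
        }
        if (viable.isEmpty()) return null;
        if (!decom.isEmpty() && rng.nextDouble() < 0.75) {   // 75%: keep the favoritism
            return decom.get(rng.nextInt(decom.size()));
        }
        return viable.get(rng.nextInt(viable.size()));       // 25%: any viable replica
    }
}
```

Option #2 is simply the same code with the decommissioning branch deleted, which is why it's the simpler choice.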







Hadoop-Hdfs-trunk-Java8 - Build # 93 - Failure

2015-02-06 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/93/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6940 lines...]
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.12.1:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS  FAILURE [  02:39 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  1.917 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:39 h
[INFO] Finished at: 2015-02-06T14:13:34+00:00
[INFO] Final Memory: 51M/251M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-6964
Updating MAPREDUCE-6233
Updating HADOOP-11543
Updating YARN-1582
Updating YARN-1537
Updating HADOOP-7713
Updating YARN-3101
Updating HDFS-7741
Updating HADOOP-9044
Updating YARN-3145
Updating YARN-1904
Updating HDFS-7270
Updating MAPREDUCE-6186
Updating YARN-3149
Updating HADOOP-11463
Updating HDFS-7655
Updating HADOOP-11506
Updating HADOOP-11526
Updating HDFS-7698
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly

Error Message:
Expected first block report to make some blocks safe.

Stack Trace:
java.lang.AssertionError: Expected first block report to make some blocks safe.
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
org.apache.hadoop.hdfs.TestSafeMode.testInitializeReplQueuesEarly(TestSafeMode.java:222)




Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #93

2015-02-06 Thread Apache Jenkins Server
See 

Changes:

[stevel] HADOOP-11463 Replace method-local TransferManager object with 
S3AFileSystem#transfers. (Ted Yu via stevel)

[aw] HADOOP-7713. dfs -count -q should label output column (Jonathan Allen via 
aw)

[jlowe] Update CHANGES.txt to move MR-6059 to 2.7

[sandy] YARN-3101. In Fair Scheduler, fix canceling of reservations for 
exceeding max share (Anubhav Dhoot via Sandy Ryza)

[aw] HADOOP-9044. add FindClass main class to provide classpath checking of 
installations (Steve Loughran via aw)

[wheat9] HDFS-7270. Add congestion signaling capability to DataNode write 
protocol. Contributed by Haohui Mai.

[jlowe] YARN-1582. Capacity Scheduler: add a maximum-allocation-mb setting per 
queue. Contributed by Thomas Graves

[aw] HADOOP-9044. add FindClass main class to provide classpath checking of 
installations (Steve Loughran via aw)

[jlowe] MAPREDUCE-6186. Redundant call to requireJob() while displaying the 
conf page. Contributed by Rohit Agarwal

[rkanter] MAPREDUCE-6233. 
org.apache.hadoop.mapreduce.TestLargeSort.testLargeSort failed in trunk (zxu 
via rkanter)

[xgong] YARN-3149. Fix typo in message for invalid application id. Contributed

[jianhe] YARN-3145. Fixed ConcurrentModificationException on CapacityScheduler 
ParentQueue#getQueueUserAclInfo. Contributed by Tsuyoshi OZAWA

[yliu] HDFS-7655. Expose truncate API for Web HDFS. (yliu)

[yliu] HDFS-7698. Fix locking on HDFS read statistics and add a method for 
clearing them. (Colin P. McCabe via yliu)

[cnauroth] HADOOP-11526. Memory leak in Bzip2Compressor and Bzip2Decompressor. 
Contributed by Anu Engineer.

[aw] HADOOP-6964. Allow compact property description in xml (Kengo Seki via aw)

[ozawa] Move HADOOP-11543 from BUG-FIX to IMPROVEMENT in CHANGES.txt.

[yliu] HDFS-7741. Remove unnecessary synchronized in FSDataInputStream and 
HdfsDataInputStream. (yliu)

[acmurthy] YARN-1904. Ensure exceptions thrown in ClientRMService & 
ApplicationHistoryClientService are uniform when application-attempt is not 
found. Contributed by Zhijie Shen.

[acmurthy] YARN-1537. Fix race condition in 
TestLocalResourcesTrackerImpl.testLocalResourceCache. Contributed by Xuan Gong.

[gera] HADOOP-11506. Configuration variable expansion regex expensive for long 
values. (Gera Shegalov via gera)

--
[...truncated 6747 lines...]
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 30, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.246 sec - in 
org.apache.hadoop.hdfs.TestDFSUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestGetBlocks
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 49.57 sec - in 
org.apache.hadoop.hdfs.TestGetBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestMultiThreadedHflush
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.402 sec - in 
org.apache.hadoop.hdfs.TestMultiThreadedHflush
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestCyclicIteration
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.069 sec - in 
org.apache.hadoop.hdfs.util.TestCyclicIteration
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.212 sec - in 
org.apache.hadoop.hdfs.util.TestBestEffortLongFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestDiff
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.307 sec - in 
org.apache.hadoop.hdfs.util.TestDiff
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestByteArrayManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.351 sec - in 
org.apache.hadoop.hdfs.util.TestByteArrayManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestXMLUtils
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.076 sec - in 
org.apache.hadoop.hdfs.util.TestXMLUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 sec - in 
org.apache.hadoop.hdfs.util.TestLightWeightHashSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.util.TestMD5F