[jira] [Resolved] (HDFS-8819) Erasure Coding: add test for namenode process over replicated striped block

2015-07-23 Thread Takuya Fukudome (JIRA)

 [ https://issues.apache.org/jira/browse/HDFS-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Takuya Fukudome resolved HDFS-8819.
---
Resolution: Invalid

> Erasure Coding: add test for namenode process over replicated striped block
> ---
>
> Key: HDFS-8819
> URL: https://issues.apache.org/jira/browse/HDFS-8819
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Takuya Fukudome
>Assignee: Takuya Fukudome
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8819) Erasure Coding: add test for namenode process over replicated striped block

2015-07-23 Thread Takuya Fukudome (JIRA)
Takuya Fukudome created HDFS-8819:
-

 Summary: Erasure Coding: add test for namenode process over 
replicated striped block
 Key: HDFS-8819
 URL: https://issues.apache.org/jira/browse/HDFS-8819
 Project: Hadoop HDFS
  Issue Type: Sub-task
  Components: test
Reporter: Takuya Fukudome
Assignee: Takuya Fukudome








[jira] [Created] (HDFS-8818) Allow Balancer to run faster

2015-07-23 Thread Tsz Wo Nicholas Sze (JIRA)
Tsz Wo Nicholas Sze created HDFS-8818:
-

 Summary: Allow Balancer to run faster
 Key: HDFS-8818
 URL: https://issues.apache.org/jira/browse/HDFS-8818
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: balancer & mover
Reporter: Tsz Wo Nicholas Sze
Assignee: Tsz Wo Nicholas Sze


The original design of Balancer intentionally makes it run slowly so that the 
balancing activities won't affect the normal cluster activities and the 
running jobs.

There is a new use case in which a cluster admin may choose to balance the 
cluster when the cluster load is low, or during a maintenance window. We should 
therefore add an option that allows Balancer to run faster.





[jira] [Created] (HDFS-8817) Make StorageType for Volumes in DataNode visible through JMX

2015-07-23 Thread Anu Engineer (JIRA)
Anu Engineer created HDFS-8817:
--

 Summary: Make StorageType for Volumes in DataNode visible through 
JMX
 Key: HDFS-8817
 URL: https://issues.apache.org/jira/browse/HDFS-8817
 Project: Hadoop HDFS
  Issue Type: Improvement
  Components: HDFS
Affects Versions: 2.8.0
Reporter: Anu Engineer
Assignee: Anu Engineer
 Fix For: 2.8.0


StorageTypes are part of Volumes on DataNodes, but right now the StorageType 
info is not included in {{VolumeInfo}}. This JIRA proposes to expose that info 
through the VolumeInfo JSON.
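A minimal sketch of what the extended VolumeInfo JSON might look like, written as a Python mock rather than the actual DataNode code; the `storageType` key and the volume paths are assumptions for illustration, not the field names the patch necessarily uses:

```python
import json

# Hypothetical VolumeInfo entries as a DataNode might report them over JMX.
# The "storageType" key is the proposed addition; all names here are
# illustrative, not taken from the actual patch.
volume_info = {
    "/data/1/dfs/dn/current": {
        "usedSpace": 1073741824,
        "freeSpace": 5368709120,
        "reservedSpace": 0,
        "storageType": "DISK",  # proposed addition
    },
    "/data/ssd/dfs/dn/current": {
        "usedSpace": 536870912,
        "freeSpace": 2147483648,
        "reservedSpace": 0,
        "storageType": "SSD",   # proposed addition
    },
}

rendered = json.dumps(volume_info, indent=2, sort_keys=True)
print(rendered)
```

With this shape, a monitoring tool reading the JMX bean could group capacity by storage type without a separate lookup.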





[jira] [Created] (HDFS-8816) Improve visualization for the Datanode tab in the NN UI

2015-07-23 Thread Haohui Mai (JIRA)
Haohui Mai created HDFS-8816:


 Summary: Improve visualization for the Datanode tab in the NN UI
 Key: HDFS-8816
 URL: https://issues.apache.org/jira/browse/HDFS-8816
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Haohui Mai


The information in the datanode tab of the NN UI is cluttered. This jira 
proposes to improve the visualization of the datanode tab in the UI.





[jira] [Created] (HDFS-8815) DFS getStoragePolicy implementation using single RPC call

2015-07-23 Thread Arpit Agarwal (JIRA)
Arpit Agarwal created HDFS-8815:
---

 Summary: DFS getStoragePolicy implementation using single RPC call
 Key: HDFS-8815
 URL: https://issues.apache.org/jira/browse/HDFS-8815
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: hdfs-client
Affects Versions: 2.8.0
Reporter: Arpit Agarwal


HADOOP-12161 introduced a new {{FileSystem#getStoragePolicy}} call. The DFS 
implementation of the call requires two RPC calls, the first to fetch the 
storage policy ID and the second to fetch the policy suite to map the policy ID 
to a {{BlockStoragePolicySpi}}.

Fix the implementation to require a single RPC call.
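A toy sketch of the difference, in Python rather than the actual Java client code, with hypothetical method names standing in for the real ClientProtocol RPCs; it only illustrates that resolving the policy ID server-side halves the round trips:

```python
class MockNameNode:
    """Stand-in for the NameNode; counts RPC round trips."""
    def __init__(self):
        self.rpc_count = 0
        self._policies = {12: {"name": "HOT"}, 7: {"name": "COLD"}}
        self._files = {"/a": 12, "/b": 7}

    def get_storage_policy_id(self, path):  # hypothetical RPC #1
        self.rpc_count += 1
        return self._files[path]

    def get_policy_suite(self):             # hypothetical RPC #2
        self.rpc_count += 1
        return self._policies

    def get_storage_policy(self, path):     # proposed single RPC
        self.rpc_count += 1
        return self._policies[self._files[path]]

def old_client(nn, path):
    # Two round trips: fetch the policy ID, then the whole suite to map it.
    policy_id = nn.get_storage_policy_id(path)
    return nn.get_policy_suite()[policy_id]

def new_client(nn, path):
    # One round trip: the NameNode resolves the ID to a policy itself.
    return nn.get_storage_policy(path)

nn = MockNameNode()
old_client(nn, "/a")
old_rpcs = nn.rpc_count

nn = MockNameNode()
new_client(nn, "/a")
new_rpcs = nn.rpc_count
```

Both clients return the same policy object; only the number of round trips differs.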





Hadoop-Hdfs-trunk-Java8 - Build # 254 - Still Failing

2015-07-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/254/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 8022 lines...]
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:00 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:46 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.050 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:49 h
[INFO] Finished at: 2015-07-23T14:35:48+00:00
[INFO] Final Memory: 53M/640M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4291343 bytes
Compression is 0.0%
Took 17 sec
Recording test results
Updating YARN-2019
Updating YARN-3954
Updating YARN-3932
Updating HDFS-8797
Updating HADOOP-12184
Updating HADOOP-12239
Updating YARN-3956
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
7 tests failed.
REGRESSION:  org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.testReadTimeout

Error Message:
expected: but was:

Stack Trace:
java.lang.AssertionError: expected: but was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.testReadTimeout(TestWebHdfsTimeouts.java:131)


FAILED:  
org.apache.hadoop.hdfs.TestLeaseRecovery2.testHardLeaseRecoveryWithRenameAfterNameNodeRestart

Error Message:
org.apache.hadoop.util.ExitUtil$ExitException: Could not sync enough journals 
to persistent storage due to No journals available to flush. Unsynced 
transactions: 1
 at org.apache.hadoop.util.ExitUtil.terminate(ExitUtil.java:126)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:637)
 at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.endCurrentLogSegment(FSEditLog.java:1306)
 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.close(FSEditLog.java:362)
 at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.stopActiveServices(FSNamesystem.java:1212)
 at 
org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.stopActiveServices(NameNode.java:1717)
 at 
org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.exitState(ActiveState.java:70)
 at org.apache.hadoop.hdfs.server.namenode.NameNode.stop(NameNode.java:863)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.shutdownNameNode(MiniDFSCluster.java:1906)
 at 
org.apache.hadoop.hdfs.MiniDFSCluster.restartNameNode(MiniDFSCluster.java:1957)
 at 
org.apache.hadoop.h

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #254

2015-07-23 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HADOOP-12239. StorageException complaining " no lease ID" when 
updating FolderLastModifiedTime in WASB. Contributed by Duo Xu.

[cmccabe] HADOOP-12184. Remove unused Linux-specific constants in NativeIO 
(Martin Walsh via Colin P. McCabe)

[wangda] YARN-3932. SchedulerApplicationAttempt#getResourceUsageReport and 
UserInfo should based on total-used-resources. (Bibin A Chundatt via wangda)

[rohithsharmaks] YARN-3954. Fix 
TestYarnConfigurationFields#testCompareConfigurationClassAgainstXml. (varun 
saxena via rohithsharmaks)

[wangda] YARN-3956. Fix TestNodeManagerHardwareUtils fails on Mac (Varun 
Vasudev via wangda)

[jing9] HDFS-8797. WebHdfsFileSystem creates too many connections for pread. 
Contributed by Jing Zhao.

[junping_du] YARN-2019. Retrospect on decision of making RM crashed if any 
exception throw in ZKRMStateStore. Contributed by Jian He.

--
[...truncated 7829 lines...]
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.583 sec - in 
org.apache.hadoop.hdfs.TestListFilesInDFS
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.897 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.257 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdminMiniCluster
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestGetConf
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.934 sec - in 
org.apache.hadoop.hdfs.tools.TestGetConf
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.531 sec - in 
org.apache.hadoop.hdfs.tools.TestStoragePolicyCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.211 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSZKFailoverController
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSAdmin
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.168 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.506 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForXAttr
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.509 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.041 sec - in 
org.apache.hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForAcl
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.642 sec - in 
org.apache.hadoop.hdfs.tools.TestDFSHAAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.93 sec - in 
org.apache.hadoop.hdfs.tools.TestDelegationTokenFetcher
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestDebugAdmin
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.378 sec - in 
org.apache.hadoop.hdfs.tools.TestDebugAdmin
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.tools.TestGetGroups
Tes

Build failed in Jenkins: Hadoop-Hdfs-trunk #2192

2015-07-23 Thread Apache Jenkins Server
See 

Changes:

[cnauroth] HADOOP-12239. StorageException complaining " no lease ID" when 
updating FolderLastModifiedTime in WASB. Contributed by Duo Xu.

[cmccabe] HADOOP-12184. Remove unused Linux-specific constants in NativeIO 
(Martin Walsh via Colin P. McCabe)

[wangda] YARN-3932. SchedulerApplicationAttempt#getResourceUsageReport and 
UserInfo should based on total-used-resources. (Bibin A Chundatt via wangda)

[rohithsharmaks] YARN-3954. Fix 
TestYarnConfigurationFields#testCompareConfigurationClassAgainstXml. (varun 
saxena via rohithsharmaks)

[wangda] YARN-3956. Fix TestNodeManagerHardwareUtils fails on Mac (Varun 
Vasudev via wangda)

[jing9] HDFS-8797. WebHdfsFileSystem creates too many connections for pread. 
Contributed by Jing Zhao.

[junping_du] YARN-2019. Retrospect on decision of making RM crashed if any 
exception throw in ZKRMStateStore. Contributed by Jian He.

--
[...truncated 6690 lines...]
Running org.apache.hadoop.hdfs.server.datanode.TestDataStorage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.224 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataStorage
Running org.apache.hadoop.hdfs.server.datanode.TestDataDirs
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.084 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataDirs
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeStartupOptions
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.854 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDatanodeStartupOptions
Running org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.667 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestFsDatasetCacheRevocation
Running org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.164 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes
Running org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 51.268 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport
Running org.apache.hadoop.hdfs.server.datanode.TestStorageReport
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.051 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestStorageReport
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeFSDataSetSink
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.051 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeFSDataSetSink
Running org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 11.408 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDirectoryScanner
Running org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.833 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestTransferRbw
Running org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Tests run: 12, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 43.73 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockScanner
Running org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.03 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations
Running org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 22.279 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestReadOnlySharedStorage
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeProtocolRetryPolicy
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.169 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDatanodeProtocolRetryPolicy
Running org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.565 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockPoolManager
Running org.apache.hadoop.hdfs.server.datanode.TestDatanodeRegister
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.777 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestDatanodeRegister
Running org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 12.571 sec - 
in org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery
Running org.apache.hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.302 sec - in 
org.apache.hadoop.hdfs.server.datanode.TestBlockPoolSliceStorage
Running 
org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapse

Hadoop-Hdfs-trunk - Build # 2192 - Still Failing

2015-07-23 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2192/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 6883 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:13 min]
[INFO] Apache Hadoop HDFS  FAILURE [  01:25 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.103 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 01:28 h
[INFO] Finished at: 2015-07-23T13:13:31+00:00
[INFO] Final Memory: 78M/1129M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs
 && /home/jenkins/tools/java/jdk1.7.0_55/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5897174978882487072.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire5081073529451731804tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_986963871481195269539tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn  -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2181
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 3838320 bytes
Compression is 0.0%
Took 17 sec
Recording test results
Updating YARN-2019
Updating YARN-3954
Updating YARN-3932
Updating HDFS-8797
Updating HADOOP-12184
Updating HADOOP-12239
Updating YARN-3956
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
2 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.TestNameNodeRespectsBindHostKeys.testHttpsBindHostKey

Error Message:
org/apache/hadoop/security/ssl/SSLFactory$Mode

Stack Trace:
java.lang.NoClassDefFoundError: org/apache/hadoop/security/ssl/SSLFactory$Mode
at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
at 
org.apache.hadoop.security.ssl.KeyStoreTestUtil.createClientSSLConfig(KeyStoreTestUtil.java:291)
at 
org.apache.hadoop.security.ssl.KeyStoreTestUtil.setupSSLConfig(KeyStoreTestUtil.java:264)
at 
org.apache.hadoop.security.ssl.KeyStoreTestUtil.setupSSLConfig(KeyStoreTestUtil.java:208)

[jira] [Created] (HDFS-8813) Erasure Coding: Client no need to decode missing parity blocks

2015-07-23 Thread Walter Su (JIRA)
Walter Su created HDFS-8813:
---

 Summary: Erasure Coding: Client no need to decode missing parity 
blocks
 Key: HDFS-8813
 URL: https://issues.apache.org/jira/browse/HDFS-8813
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su
Priority: Minor


Assume a 6+3 schema.
Assume data block #2 is missing, so the InputStream tries to read parity block #6.
Assume parity block #6 is missing too, and the InputStream successfully reads 
parity block #7.
Then decoding begins.

Currently the InputStream decodes both #2 and #6, but the client (user) only 
needs #2; the decoded parity block #6 is discarded.

The improvement is to decode only #2.
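The selection logic can be sketched as a tiny helper (the name `decode_targets` is hypothetical): pick decode targets as the intersection of erased block indices and the indices the read actually needs, rather than every erased block encountered:

```python
def decode_targets(erased, needed):
    """Return only the erased block indices the client actually needs.

    erased: indices of missing blocks that decoding *could* reconstruct
    needed: indices of blocks the read request actually covers
    """
    return sorted(set(erased) & set(needed))

# 6+3 schema from the description: data block #2 and parity block #6 are
# missing, but the read only covers data block #2.
current = sorted({2, 6})                 # today: decode every erased block
improved = decode_targets({2, 6}, {2})   # proposal: decode only what's needed
```

Decoding cost scales with the number of target blocks, so skipping the unneeded parity block saves a proportional amount of CPU on the client.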





[jira] [Created] (HDFS-8811) Move BlockStoragePolicy name's constants from HdfsServerConstants.java to HdfsConstants.java

2015-07-23 Thread Vinayakumar B (JIRA)
Vinayakumar B created HDFS-8811:
---

 Summary: Move BlockStoragePolicy name's constants from 
HdfsServerConstants.java to HdfsConstants.java
 Key: HDFS-8811
 URL: https://issues.apache.org/jira/browse/HDFS-8811
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: Vinayakumar B
Assignee: Vinayakumar B


Currently {{HdfsServerConstants.java}} has the following constants, 
{code}  String HOT_STORAGE_POLICY_NAME = "HOT";
  String WARM_STORAGE_POLICY_NAME = "WARM";
  String COLD_STORAGE_POLICY_NAME = "COLD";{code}

and {{HdfsConstants.java}} has the following:
{code}  public static final String MEMORY_STORAGE_POLICY_NAME = "LAZY_PERSIST";
  public static final String ALLSSD_STORAGE_POLICY_NAME = "ALL_SSD";
  public static final String ONESSD_STORAGE_POLICY_NAME = "ONE_SSD";{code}

It would be better to move all of these to one place, {{HdfsConstants.java}}, 
which client APIs could also access since it resides in the hdfs-client module.





[jira] [Created] (HDFS-8812) TestDistributedFileSystem#testDFSClientPeerWriteTimeout fails

2015-07-23 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-8812:
---

 Summary: TestDistributedFileSystem#testDFSClientPeerWriteTimeout 
fails
 Key: HDFS-8812
 URL: https://issues.apache.org/jira/browse/HDFS-8812
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Affects Versions: 2.8.0
Reporter: Akira AJISAKA


TestDistributedFileSystem#testDFSClientPeerWriteTimeout fails.
{noformat}
Running org.apache.hadoop.hdfs.TestDistributedFileSystem
Tests run: 18, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 50.038 sec <<< 
FAILURE! - in org.apache.hadoop.hdfs.TestDistributedFileSystem
testDFSClientPeerWriteTimeout(org.apache.hadoop.hdfs.TestDistributedFileSystem) 
 Time elapsed: 0.66 sec  <<< FAILURE!
java.lang.AssertionError: wrong exception:java.lang.AssertionError: write 
should timeout
at org.junit.Assert.fail(Assert.java:88)
at 
org.apache.hadoop.hdfs.TestDistributedFileSystem.testDFSClientPeerWriteTimeout(TestDistributedFileSystem.java:1206)
{noformat}
See 
https://builds.apache.org/job/PreCommit-HDFS-Build/11783/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testDFSClientPeerWriteTimeout/
 and 
https://builds.apache.org/job/PreCommit-HDFS-Build/11786/testReport/org.apache.hadoop.hdfs/TestDistributedFileSystem/testDFSClientPeerWriteTimeout/


