[jira] [Resolved] (HDFS-8847) change TestHDFSContractAppend to not override testRenameFileBeingAppended method.

2015-07-31 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu resolved HDFS-8847.
-
  Resolution: Fixed
Hadoop Flags: Reviewed

 change TestHDFSContractAppend to not override testRenameFileBeingAppended 
 method.
 -

 Key: HDFS-8847
 URL: https://issues.apache.org/jira/browse/HDFS-8847
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.8.0


 change TestHDFSContractAppend to not override testRenameFileBeingAppended 
 method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HDFS-8847) change TestHDFSContractAppend to not override testRenameFileBeingAppended method.

2015-07-31 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu reopened HDFS-8847:
-

 change TestHDFSContractAppend to not override testRenameFileBeingAppended 
 method.
 -

 Key: HDFS-8847
 URL: https://issues.apache.org/jira/browse/HDFS-8847
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.8.0


 change TestHDFSContractAppend to not override testRenameFileBeingAppended 
 method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HDFS-8847) change TestHDFSContractAppend to not override testRenameFileBeingAppended method.

2015-07-31 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu resolved HDFS-8847.
-
Resolution: Fixed

 change TestHDFSContractAppend to not override testRenameFileBeingAppended 
 method.
 -

 Key: HDFS-8847
 URL: https://issues.apache.org/jira/browse/HDFS-8847
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu
 Fix For: 2.8.0


 change TestHDFSContractAppend to not override testRenameFileBeingAppended 
 method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Hadoop-Hdfs-trunk - Build # 2200 - Failure

2015-07-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2200/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7452 lines...]
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Skipping javadoc generation
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:11 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:41 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.061 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:44 h
[INFO] Finished at: 2015-07-31T14:19:11+00:00
[INFO] Final Memory: 54M/767M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: There are test failures.
[ERROR] 
[ERROR] Please refer to 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports
 for the individual test results.
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk #2199
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 3842100 bytes
Compression is 0.0%
Took 14 sec
Recording test results
Updating MAPREDUCE-6433
Updating YARN-3971
Updating HDFS-8821
Updating HDFS-7192
Updating YARN-3963
Updating YARN-433
Updating HADOOP-12271
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
1 tests failed.
REGRESSION:  
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot.testDatanodeRestarts

Error Message:
expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<2>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyIsHot.testDatanodeRestarts(TestStandbyIsHot.java:188)




[jira] [Created] (HDFS-8844) TestHDFSCLI does not cleanup the test directory

2015-07-31 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HDFS-8844:
---

 Summary: TestHDFSCLI does not cleanup the test directory
 Key: HDFS-8844
 URL: https://issues.apache.org/jira/browse/HDFS-8844
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: Akira AJISAKA
Priority: Minor


If TestHDFSCLI is executed twice without {{mvn clean}}, the second try fails. 
Here are the failing test cases:
{noformat}
2015-07-31 21:35:17,654 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(231)) - Failing tests:
2015-07-31 21:35:17,654 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(232)) - --
2015-07-31 21:35:17,654 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(238)) - 226: get: getting non 
existent(absolute path)
2015-07-31 21:35:17,654 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(238)) - 227: get: getting non existent 
file(relative path)
2015-07-31 21:35:17,654 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(238)) - 228: get: Test for hdfs:// path - 
getting non existent
2015-07-31 21:35:17,654 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(238)) - 229: get: Test for Namenode's path - 
getting non existent
2015-07-31 21:35:17,654 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(238)) - 250: copyToLocal: non existent 
relative path
2015-07-31 21:35:17,654 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(238)) - 251: copyToLocal: non existent 
absolute path
2015-07-31 21:35:17,655 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(238)) - 252: copyToLocal: Test for hdfs:// 
path - non existent file/directory
2015-07-31 21:35:17,655 [main] INFO  cli.CLITestHelper 
(CLITestHelper.java:displayResults(238)) - 253: copyToLocal: Test for 
Namenode's path - non existent file/directory
{noformat}
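As a rough illustration of the cleanup this report asks for, here is a minimal
JUnit 4 sketch; the directory path and class name are made up, and the real
TestHDFSCLI setup/teardown differs:

import java.io.File;

import org.junit.After;
import org.junit.Test;

public class TestDirectoryCleanupSketch {
  // Hypothetical location; TestHDFSCLI's actual test data directory differs.
  private final File testDir = new File("target/test/data/cli");

  @Test
  public void exampleThatWritesIntoTestDir() throws Exception {
    // A test that leaves files behind in testDir would go here.
  }

  @After
  public void cleanupTestDir() {
    // Remove the directory recursively so a second run starts from a clean
    // state even when `mvn clean` was not executed in between.
    deleteRecursively(testDir);
  }

  private static void deleteRecursively(File f) {
    File[] children = f.listFiles();
    if (children != null) {
      for (File child : children) {
        deleteRecursively(child);
      }
    }
    f.delete();
  }
}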



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Hadoop-Hdfs-trunk #2200

2015-07-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk/2200/changes

Changes:

[wangda] YARN-3963. AddNodeLabel on duplicate label addition shows success. 
(Bibin A Chundatt via wangda)

[wangda] YARN-3971. Skip 
RMNodeLabelsManager#checkRemoveFromClusterNodeLabelsOfQueue on nodelabel 
recovery. (Bibin A Chundatt via wangda)

[arp] HDFS-7192. DN should ignore lazyPersist hint if the writer is not local. 
(Contributed by Arpit Agarwal)

[harsh] HDFS-8821. Explain message Operation category X is not supported in 
state standby. Contributed by Gautam Gopalakrishnan.

[harsh] HADOOP-12271. Hadoop Jar Error Should Be More Explanatory. Contributed 
by Josh Elser.

[zxu] YARN-433. When RM is catching up with node updates then it should not 
expire acquired containers. Contributed by Xuan Gong

[zxu] MAPREDUCE-6433. launchTime may be negative. Contributed by Zhihai Xu

--
[...truncated 7259 lines...]
Running org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.891 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitLegacyRead
Running org.apache.hadoop.hdfs.TestClose
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.46 sec - in 
org.apache.hadoop.hdfs.TestClose
Running org.apache.hadoop.hdfs.TestDFSMkdirs
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.454 sec - in 
org.apache.hadoop.hdfs.TestDFSMkdirs
Running org.apache.hadoop.hdfs.TestFileCreationEmpty
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.26 sec - in 
org.apache.hadoop.hdfs.TestFileCreationEmpty
Running org.apache.hadoop.hdfs.TestWriteRead
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 25.833 sec - in 
org.apache.hadoop.hdfs.TestWriteRead
Running org.apache.hadoop.hdfs.TestMiniDFSCluster
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.271 sec - in 
org.apache.hadoop.hdfs.TestMiniDFSCluster
Running org.apache.hadoop.hdfs.TestDFSFinalize
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.465 sec - in 
org.apache.hadoop.hdfs.TestDFSFinalize
Running org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Tests run: 44, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 53.409 sec - 
in org.apache.hadoop.hdfs.TestHDFSFileSystemContract
Running org.apache.hadoop.hdfs.TestEncryptionZones
Tests run: 19, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 39.639 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZones
Running org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 8.597 sec - in 
org.apache.hadoop.hdfs.TestRollingUpgradeDowngrade
Running org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.916 sec - in 
org.apache.hadoop.hdfs.TestParallelShortCircuitReadUnCached
Running org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 34.329 sec - in 
org.apache.hadoop.hdfs.TestClientProtocolForPipelineRecovery
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.432 sec - in 
org.apache.hadoop.hdfs.TestEncryptionZonesWithHA
Running org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.329 sec - in 
org.apache.hadoop.hdfs.TestWriteBlockGetsBlockLengthHint
Running org.apache.hadoop.hdfs.TestDFSUpgrade
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 14.425 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgrade
Running org.apache.hadoop.hdfs.TestSnapshotCommands
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.43 sec - in 
org.apache.hadoop.hdfs.TestSnapshotCommands
Running org.apache.hadoop.hdfs.TestDataTransferProtocol
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.23 sec - in 
org.apache.hadoop.hdfs.TestDataTransferProtocol
Running org.apache.hadoop.hdfs.TestFileCreation
Tests run: 23, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 81.432 sec - 
in org.apache.hadoop.hdfs.TestFileCreation
Running org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 17.954 sec - in 
org.apache.hadoop.hdfs.TestDFSInotifyEventInputStream
Running org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.197 sec - in 
org.apache.hadoop.hdfs.TestDFSUpgradeFromImage
Running org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 54.189 sec - 
in org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS
Running org.apache.hadoop.hdfs.TestDecommission
Tests run: 16, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 136.959 sec - 
in org.apache.hadoop.hdfs.TestDecommission
Running org.apache.hadoop.hdfs.TestDFSUtil
Tests run: 31, 

Hadoop-Hdfs-trunk-Java8 - Build # 262 - Still Failing

2015-07-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/262/

###
## LAST 60 LINES OF THE CONSOLE 
###
[...truncated 7879 lines...]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ 
hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable 
package
[INFO] 
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ 
hadoop-hdfs-project ---
[INFO] 
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Hadoop HDFS Client . SUCCESS [03:56 min]
[INFO] Apache Hadoop HDFS  FAILURE [  02:13 h]
[INFO] Apache Hadoop HttpFS .. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal . SKIPPED
[INFO] Apache Hadoop HDFS-NFS  SKIPPED
[INFO] Apache Hadoop HDFS Project  SUCCESS [  0.052 s]
[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 02:16 h
[INFO] Finished at: 2015-07-31T13:53:54+00:00
[INFO] Final Memory: 75M/857M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on 
project hadoop-hdfs: ExecutionException: java.lang.RuntimeException: The forked 
VM terminated without properly saying goodbye. VM crash or System.exit called?
[ERROR] Command was /bin/sh -c cd 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs
  /home/jenkins/tools/java/jdk1.8.0/jre/bin/java -Xmx4096m 
-XX:MaxPermSize=768m -XX:+HeapDumpOnOutOfMemoryError -jar 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefirebooter5409876289527765923.jar
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire7018464538277241512tmp
 
/home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire/surefire_2243427345975889557337tmp
[ERROR] -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Sending artifact delta relative to Hadoop-Hdfs-trunk-Java8 #222
Archived 1 artifacts
Archive block size is 32768
Received 0 blocks and 4328094 bytes
Compression is 0.0%
Took 19 sec
Recording test results
Updating MAPREDUCE-6433
Updating YARN-3971
Updating HDFS-8821
Updating HDFS-7192
Updating YARN-3963
Updating YARN-433
Updating HADOOP-12271
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure
Sending email for trigger: Failure



###
## FAILED TESTS (if any) 
##
13 tests failed.
FAILED:  
org.apache.hadoop.hdfs.server.namenode.TestFileTruncate.testTruncateFailure

Error Message:
java.util.zip.ZipException: invalid code lengths set

Stack Trace:
java.lang.RuntimeException: java.util.zip.ZipException: invalid code lengths set
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:164)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:122)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at 
org.apache.xerces.impl.XMLEntityManager$RewindableInputStream.read(Unknown 
Source)
at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown 
Source)
at 
org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown 

Build failed in Jenkins: Hadoop-Hdfs-trunk-Java8 #262

2015-07-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/262/changes

Changes:

[wangda] YARN-3963. AddNodeLabel on duplicate label addition shows success. 
(Bibin A Chundatt via wangda)

[wangda] YARN-3971. Skip 
RMNodeLabelsManager#checkRemoveFromClusterNodeLabelsOfQueue on nodelabel 
recovery. (Bibin A Chundatt via wangda)

[arp] HDFS-7192. DN should ignore lazyPersist hint if the writer is not local. 
(Contributed by Arpit Agarwal)

[harsh] HDFS-8821. Explain message Operation category X is not supported in 
state standby. Contributed by Gautam Gopalakrishnan.

[harsh] HADOOP-12271. Hadoop Jar Error Should Be More Explanatory. Contributed 
by Josh Elser.

[zxu] YARN-433. When RM is catching up with node updates then it should not 
expire acquired containers. Contributed by Xuan Gong

[zxu] MAPREDUCE-6433. launchTime may be negative. Contributed by Zhihai Xu

--
[...truncated 7686 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.219 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestListCorruptFileBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.512 sec - in 
org.apache.hadoop.hdfs.server.namenode.TestBlockPlacementPolicyRackFaultTolerant
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 28.196 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithEncryptedTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 19.85 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithMultipleNameNodes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 42.945 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.balancer.TestBalancer
Tests run: 26, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 278.709 sec - 
in org.apache.hadoop.hdfs.server.balancer.TestBalancer
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.917 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.023 sec - in 
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithNodeGroup
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 52.707 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlocksWithNotEnoughRacks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 38.935 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 21.674 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.904 sec - in 
org.apache.hadoop.hdfs.server.blockmanagement.TestAvailableSpaceBlockPlacementPolicy
Java HotSpot(TM) 64-Bit 

Re: Planning Hadoop 2.6.1 release

2015-07-31 Thread Akira AJISAKA
Thanks Joep and your team members for creating the list. I really 
appreciate your work. I looked at your 'not yet marked with 
2.6.1-candidate' list and categorized the issues.


1) Now marked as 2.6.1-candidate, and I agree with you to keep them marked.

* HDFS-7213
* HDFS-7788
* HDFS-7884
* HDFS-7930
* YARN-2856
* YARN-3222
* YARN-3238
* YARN-3464
* YARN-3526
* YARN-3850

Thanks Sangjin for creating patches for branch-2.6.

2) Not yet marked as 2.6.1-candidate that I'd like to see in 2.6.1

* HDFS-7182
* HDFS-7314
* HDFS-7704
* HDFS-7894
* HDFS-7929 and HDFS-8480 (they are related)
* HDFS-7980
* HDFS-8245
* HDFS-8270
* HDFS-8404
* HDFS-8486
* MAPREDUCE-5465
* MAPREDUCE-5649
* MAPREDUCE-6166
* MAPREDUCE-6238
* MAPREDUCE-6300
* YARN-2952
* YARN-2997
* YARN-3094
* YARN-3176 (to be fixed soon, I think)
* YARN-3231
* HADOOP-11295
* HADOOP-11812

3) Not yet marked as 2.6.1-candidate. I'd like to drop

* HDFS-7281 (incompatible change)
* HDFS-7446 (this looks to be an improvement)
* HDFS-7916 (cannot apply to branch-2.6 as Sangjin mentioned)

Hi Vinod, could you mark the issues in 2) as 2.6.1-candidate?

I'd like to freeze the candidate list in about 7 days and start 
backporting them. Do you have any thoughts?


Regards,
Akira

On 7/25/15 10:32, Sangjin Lee wrote:

Out of the JIRAs we proposed, please remove HDFS-7916. I don't think it
applies to 2.6.

Thanks,
Sangjin

On Wed, Jul 22, 2015 at 4:02 PM, Vinod Kumar Vavilapalli 
vino...@hortonworks.com wrote:


I’ve added them all to the 2.6.1-candidate list. I included everything
even though some of them are major tickets. The list is getting large, we
will have to cut these down once we get down to the next phase of figuring
out what to include and what not to.

Thanks
+Vinod


On Jul 21, 2015, at 2:15 AM, Akira AJISAKA ajisa...@oss.nttdata.co.jp

wrote:


Thanks Vinod for updating the candidate list.
I'd like to include the following 12 JIRAs:

* YARN-3641
* YARN-3585
* YARN-2910
* HDFS-8431
* HDFS-7830
* HDFS-7763
* HDFS-7742
* HDFS-7235
* HDFS-7225
* MAPREDUCE-6324
* HADOOP-11934
* HADOOP-11491

Thanks,
Akira

On 7/18/15 11:13, Vinod Kumar Vavilapalli wrote:

  - I also have a bunch of patches that I’d like to include, will update

them right away.


I’ve just finished this. The latest 2.6.1-candidate list is up at 64

JIRAs.


Others, please look at the list and post anything else you’d like to

get included for 2.6.1.


Thanks
+Vinod


On Jul 15, 2015, at 6:24 PM, Vinod Kumar Vavilapalli 

vino...@hortonworks.com wrote:


Alright, I’d like to make progress while the issue is hot.

I created a label to discuss on the candidate list of patches:

https://issues.apache.org/jira/issues/?jql=labels%20%3D%202.6.1-candidate


Next steps, I’ll do the following
  - Review 2.7 and 2.8 blocker/critical tickets and see what makes sense

for 2.6.1 and add as candidates

  - I haven’t reviewed the current list yet, the seed list is from this

email thread. Will review them.

  - I also have a bunch of patches that I’d like to include, will update

them right away.


Others, please look at the current list and let me know what else you’d

like to include.


I’d like to keep this ‘candidate-collection’ cycle for a max of a week

and then start the release process. @Akira, let’s sync up offline on how to
take this forward in terms of the release process.


Thanks
+Vinod














Re: Planning Hadoop 2.6.1 release

2015-07-31 Thread Sangjin Lee
Thanks Akira.

I'd like to make one small correction. If we're getting HDFS-7704, then we
should also get HDFS-7916. My earlier comment assumed HDFS-7704 was
not included in the list. But if it is (and I think it should be), then we should
also get HDFS-7916, as it addresses an important issue related to HDFS-7704.
Hope that makes it clear.

Sangjin

On Fri, Jul 31, 2015 at 10:01 AM, Akira AJISAKA ajisa...@oss.nttdata.co.jp
wrote:

 Thanks Joep and your team members for creating the list. I really
 appreciate your work. I looked your 'not yet marked with 2.6.1-candidate'
 list and categorized them.

 1) Now marked as 2.6.1-candidate and I agree with you to keep it marked.

 * HDFS-7213
 * HDFS-7788
 * HDFS-7884
 * HDFS-7930
 * YARN-2856
 * YARN-3222
 * YARN-3238
 * YARN-3464
 * YARN-3526
 * YARN-3850

 Thanks Sangjin for creating patches for branch-2.6.

 2) Not yet marked as 2.6.1-candidate that I'd like to see in 2.6.1

 * HDFS-7182
 * HDFS-7314
 * HDFS-7704
 * HDFS-7894
 * HDFS-7929 and HDFS-8480 (they are related)
 * HDFS-7980
 * HDFS-8245
 * HDFS-8270
 * HDFS-8404
 * HDFS-8486
 * MAPREDUCE-5465
 * MAPREDUCE-5649
 * MAPREDUCE-6166
 * MAPREDUCE-6238
 * MAPREDUCE-6300
 * YARN-2952
 * YARN-2997
 * YARN-3094
 * YARN-3176 (to be fixed soon, I think)
 * YARN-3231
 * HADOOP-11295
 * HADOOP-11812

 3) Not yet marked as 2.6.1-candidate. I'd like to drop

 * HDFS-7281 (incompatible change)
 * HDFS-7446 (this looks to be an improvement)
 * HDFS-7916 (cannot apply to branch-2.6 as Sangjin mentioned)

 Hi Vinod, could you mark the issues in 2) as 2.6.1-candidate?

 I'd like to freeze the candidate list in about 7 days and start
 backporting them. Do you have any thoughts?

 Regards,
 Akira


 On 7/25/15 10:32, Sangjin Lee wrote:

 Out of the JIRAs we proposed, please remove HDFS-7916. I don't think it
 applies to 2.6.

 Thanks,
 Sangjin

 On Wed, Jul 22, 2015 at 4:02 PM, Vinod Kumar Vavilapalli 
 vino...@hortonworks.com wrote:

 I’ve added them all to the 2.6.1-candidate list. I included everything
 even though some of them are major tickets. The list is getting large, we
 will have to cut these down once we get down to the next phase of
 figuring
 out what to include and what not to.

 Thanks
 +Vinod

 On Jul 21, 2015, at 2:15 AM, Akira AJISAKA ajisa...@oss.nttdata.co.jp

 wrote:


 Thanks Vinod for updating the candidate list.
 I'd like to include the following 12 JIRAs:

 * YARN-3641
 * YARN-3585
 * YARN-2910
 * HDFS-8431
 * HDFS-7830
 * HDFS-7763
 * HDFS-7742
 * HDFS-7235
 * HDFS-7225
 * MAPREDUCE-6324
 * HADOOP-11934
 * HADOOP-11491

 Thanks,
 Akira

 On 7/18/15 11:13, Vinod Kumar Vavilapalli wrote:

   - I also have a bunch of patches that I’d like to include, will
 update

 them right away.


 I’ve just finished this. The latest 2.6.1-candidate list is up at 64

 JIRAs.


 Others, please look at the list and post anything else you’d like to

 get included for 2.6.1.


 Thanks
 +Vinod


 On Jul 15, 2015, at 6:24 PM, Vinod Kumar Vavilapalli 

 vino...@hortonworks.com wrote:


 Alright, I’d like to make progress while the issue is hot.

 I created a label to discuss on the candidate list of patches:


 https://issues.apache.org/jira/issues/?jql=labels%20%3D%202.6.1-candidate
 


 Next steps, I’ll do the following
   - Review 2.7 and 2.8 blocker/critical tickets and see what makes
 sense

 for 2.6.1 and add as candidates

   - I haven’t reviewed the current list yet, the seed list is from this

 email thread. Will review them.

   - I also have a bunch of patches that I’d like to include, will update

 them right away.


 Others, please look at the current list and let me know what else you’d

 like to include.


 I’d like to keep this ‘candidate-collection’ cycle for a max of a week

 and then start the release process. @Akira, let’s sync up offline on
 how to
 take this forward in terms of the release process.


 Thanks
 +Vinod











[jira] [Resolved] (HDFS-8202) Improve end to end striping file test to add erasure recovery test

2015-07-31 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8202.
-
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: HDFS-7285
Target Version/s: HDFS-7285

+1 on the latest patch. I just committed to the branch. Thanks Xinwei for the 
contribution!

 Improve end to end striping file test to add erasure recovery test
 -

 Key: HDFS-8202
 URL: https://issues.apache.org/jira/browse/HDFS-8202
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Xinwei Qin 
 Fix For: HDFS-7285

 Attachments: HDFS-8202-HDFS-7285.003.patch, 
 HDFS-8202-HDFS-7285.004.patch, HDFS-8202-HDFS-7285.005.patch, 
 HDFS-8202-HDFS-7285.006.patch, HDFS-8202.001.patch, HDFS-8202.002.patch


 This follows on HDFS-8201 to add an erasure recovery test to the end to end 
 striping file test:
 * After writing certain blocks to the test file, delete some block file;
 * Read the file content back and compare, to check for recovery issues and 
 verify that erasure recovery works.
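As a rough, illustrative sketch of the flow described above — MiniDFSCluster and
the FileSystem API are real HDFS test building blocks, but the class name is made
up, the committed patch may differ, and the block-file deletion step is only
indicated by a comment:

import java.util.Arrays;
import java.util.Random;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class StripedRecoverySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(9).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      Path file = new Path("/striped/testFile");

      // Write known content to the test file.
      byte[] expected = new byte[8 * 1024 * 1024];
      new Random(0xBEEF).nextBytes(expected);
      try (FSDataOutputStream out = fs.create(file)) {
        out.write(expected);
      }

      // The real test would now delete one block/replica file on a DataNode
      // through MiniDFSCluster test hooks; that step is elided in this sketch.

      // Read the content back and compare, so a recovery problem shows up as
      // a mismatch or a read error.
      byte[] actual = new byte[expected.length];
      try (FSDataInputStream in = fs.open(file)) {
        in.readFully(actual);
      }
      if (!Arrays.equals(expected, actual)) {
        throw new AssertionError("read-back content differs after block loss");
      }
    } finally {
      cluster.shutdown();
    }
  }
}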



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8845) DiskChecker should not traverse entire tree

2015-07-31 Thread Chang Li (JIRA)
Chang Li created HDFS-8845:
--

 Summary: DiskChecker should not traverse entire tree
 Key: HDFS-8845
 URL: https://issues.apache.org/jira/browse/HDFS-8845
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Chang Li
Assignee: Chang Li
 Attachments: HDFS-8845.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8846) Create edit log files with old layout version for upgrade testing

2015-07-31 Thread Zhe Zhang (JIRA)
Zhe Zhang created HDFS-8846:
---

 Summary: Create edit log files with old layout version for upgrade 
testing
 Key: HDFS-8846
 URL: https://issues.apache.org/jira/browse/HDFS-8846
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 2.7.1
Reporter: Zhe Zhang
Assignee: Zhe Zhang


Per discussion under HDFS-8480, we should create some edit log files with an old 
layout version, to test whether they can be correctly handled in upgrades.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8840) Inconsistent log level practice

2015-07-31 Thread songwanging (JIRA)
songwanging created HDFS-8840:
-

 Summary: Inconsistent log level practice
 Key: HDFS-8840
 URL: https://issues.apache.org/jira/browse/HDFS-8840
 Project: Hadoop HDFS
  Issue Type: Improvement
Affects Versions: 2.7.1, 2.5.2, 2.5.1, 2.6.0
Reporter: songwanging
Priority: Minor


In method checkLogsAvailableForRead() of class: 
hadoop-2.7.1-src\hadoop-hdfs-project\hadoop-hdfs\src\main\java\org\apache\hadoop\hdfs\server\namenode\ha\BootstrapStandby.java

The log level is inconsistent: inside the LOG.isDebugEnabled() check, the code 
should call LOG.debug(msg, e), but it currently calls LOG.fatal(msg, e).

The source code of this method is:

private boolean checkLogsAvailableForRead(FSImage image, long imageTxId,
    long curTxIdOnOtherNode) {
  ...
  } catch (IOException e) {
    ...
    if (LOG.isDebugEnabled()) {
      LOG.fatal(msg, e);
    } else {
      LOG.fatal(msg);
    }
    return false;
  }
}
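For illustration only, a minimal sketch of the change the report suggests (not
necessarily the right or final fix); the class name and message text are
assumptions:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class GuardedLogLevelSketch {
  private static final Log LOG = LogFactory.getLog(GuardedLogLevelSketch.class);

  // The level used inside the isDebugEnabled() guard matches the guard: the
  // stack trace is attached at DEBUG, while the plain message stays at FATAL.
  static boolean reportMissingLogs(Exception cause) {
    String msg = "Unable to read the required transaction ids"; // wording assumed
    if (LOG.isDebugEnabled()) {
      LOG.debug(msg, cause);
    } else {
      LOG.fatal(msg);
    }
    return false;
  }
}

Whether DEBUG or FATAL is the right level is for the JIRA to decide; the sketch
only shows the guard and the logging call agreeing with each other.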




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8842) Catch throwable

2015-07-31 Thread songwanging (JIRA)
songwanging created HDFS-8842:
-

 Summary: Catch throwable 
 Key: HDFS-8842
 URL: https://issues.apache.org/jira/browse/HDFS-8842
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: songwanging
Priority: Critical


We came across a few instances where the code catches Throwable but fails to 
rethrow anything.
Throwable is the parent type of Exception and Error, so catching Throwable 
means catching both Exceptions and Errors. An Exception is something you can 
recover from (like IOException); an Error is something more serious that you 
usually cannot recover from easily (like NoClassDefFoundError), so it doesn't 
make much sense to catch an Error.
We should convert Throwable to Exception.

For example:

In method tryGetPid(Process p) of class: 
hadoop-2.7.1-src\hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\ha\ShellCommandFencer.java

code:

private static String tryGetPid(Process p) {
  try {
    ...
  } catch (Throwable t) {
    LOG.trace("Unable to determine pid for " + p, t);
    return null;
  }
}

In method uncaughtException(Thread t, Throwable e) of class: 
hadoop-2.7.1-src\hadoop-yarn-project\hadoop-yarn\hadoop-yarn-common\src\main\java\org\apache\hadoop\yarn\YarnUncaughtExceptionHandler.java

code:

public void uncaughtException(Thread t, Throwable e) {
  ...
  try {
    LOG.fatal("Thread " + t + " threw an Error.  Shutting down now...", e);
  } catch (Throwable err) {
    // We don't want to not exit because of an issue with logging
  }
  ...
  try {
    System.err.println("Halting due to Out Of Memory Error...");
  } catch (Throwable err) {
    // Again we don't want to exit because of logging issues.
  }
  ...
}
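For illustration only, a minimal sketch of the narrowing being proposed; the
class name and method body are placeholders, not the actual Hadoop code:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class NarrowCatchSketch {
  private static final Log LOG = LogFactory.getLog(NarrowCatchSketch.class);

  // Catch Exception rather than Throwable, so Errors such as OutOfMemoryError
  // propagate instead of being silently swallowed.
  static String tryGetPidSafely(Process p) {
    try {
      // ... platform-specific inspection of the Process object ...
      return null;
    } catch (Exception e) {              // was: catch (Throwable t)
      LOG.trace("Unable to determine pid for " + p, e);
      return null;
    }
  }
}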



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8841) Catch throwable return null

2015-07-31 Thread songwanging (JIRA)
songwanging created HDFS-8841:
-

 Summary: Catch throwable return null
 Key: HDFS-8841
 URL: https://issues.apache.org/jira/browse/HDFS-8841
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: songwanging
Priority: Minor


In method map of class: 
\hadoop-2.7.1-src\hadoop-tools\hadoop-extras\src\main\java\org\apache\hadoop\tools\DistCpV1.java.

This method has this code:

public void map(LongWritable key,
    FilePair value,
    OutputCollector<WritableComparable<?>, Text> out,
    Reporter reporter) throws IOException {
  ...
  } catch (Throwable ex) {
    // ignore, we are just cleaning up
    LOG.debug("Ignoring cleanup exception", ex);
  }
  ...
}

Throwable is the parent type of Exception and Error, so catching Throwable 
means catching both Exceptions and Errors. An Exception is something you can 
recover from (like IOException); an Error is something more serious that you 
usually cannot recover from easily (like NoClassDefFoundError), so it doesn't 
make much sense to catch an Error.

We should convert this to catch Exception instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8847) change TestHDFSContractAppend to not override testRenameFileBeingAppended method.

2015-07-31 Thread zhihai xu (JIRA)
zhihai xu created HDFS-8847:
---

 Summary: change TestHDFSContractAppend to not override 
testRenameFileBeingAppended method.
 Key: HDFS-8847
 URL: https://issues.apache.org/jira/browse/HDFS-8847
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: test
Reporter: zhihai xu
Assignee: zhihai xu


change TestHDFSContractAppend to not override testRenameFileBeingAppended 
method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: Planning Hadoop 2.6.1 release

2015-07-31 Thread Sangjin Lee
Just for completeness, the following JIRAs have already been committed
to branch-2.6 and thus will be part of 2.6.1:

HADOOP-11307
YARN-2375
HDFS-7425
HDFS-4882
HDFS-7489
HDFS-7503
HDFS-7443
HDFS-3443
HADOOP-11466
HDFS-7733
MAPREDUCE-6237
YARN-3251
HDFS-6153 reverted


On Fri, Jul 31, 2015 at 10:07 AM, Sangjin Lee sj...@apache.org wrote:

 Thanks Akira.

 I'd like to make one small correction. If we're getting HDFS-7704, then we
 should also get HDFS-7916. My earlier comment assumed HDFS-7704 was
 not included in the list. But if it is (and I think it should be), then we should
 also get HDFS-7916, as it addresses an important issue related to HDFS-7704.
 Hope that makes it clear.

 Sangjin

 On Fri, Jul 31, 2015 at 10:01 AM, Akira AJISAKA 
 ajisa...@oss.nttdata.co.jp wrote:

 Thanks Joep and your team members for creating the list. I really
 appreciate your work. I looked your 'not yet marked with 2.6.1-candidate'
 list and categorized them.

 1) Now marked as 2.6.1-candidate and I agree with you to keep it marked.

 * HDFS-7213
 * HDFS-7788
 * HDFS-7884
 * HDFS-7930
 * YARN-2856
 * YARN-3222
 * YARN-3238
 * YARN-3464
 * YARN-3526
 * YARN-3850

 Thanks Sangjin for creating patches for branch-2.6.

 2) Not yet marked as 2.6.1-candidate that I'd like to see in 2.6.1

 * HDFS-7182
 * HDFS-7314
 * HDFS-7704
 * HDFS-7894
 * HDFS-7929 and HDFS-8480 (they are related)
 * HDFS-7980
 * HDFS-8245
 * HDFS-8270
 * HDFS-8404
 * HDFS-8486
 * MAPREDUCE-5465
 * MAPREDUCE-5649
 * MAPREDUCE-6166
 * MAPREDUCE-6238
 * MAPREDUCE-6300
 * YARN-2952
 * YARN-2997
 * YARN-3094
 * YARN-3176 (to be fixed soon, I think)
 * YARN-3231
 * HADOOP-11295
 * HADOOP-11812

 3) Not yet marked as 2.6.1-candidate. I'd like to drop

 * HDFS-7281 (incompatible change)
 * HDFS-7446 (this looks to be an improvement)
 * HDFS-7916 (cannot apply to branch-2.6 as Sangjin mentioned)

 Hi Vinod, could you mark the issues in 2) as 2.6.1-candidate?

 I'd like to freeze the candidate list in about 7 days and start
 backporting them. Do you have any thoughts?

 Regards,
 Akira


 On 7/25/15 10:32, Sangjin Lee wrote:

 Out of the JIRAs we proposed, please remove HDFS-7916. I don't think it
 applies to 2.6.

 Thanks,
 Sangjin

 On Wed, Jul 22, 2015 at 4:02 PM, Vinod Kumar Vavilapalli 
 vino...@hortonworks.com wrote:

 I’ve added them all to the 2.6.1-candidate list. I included everything
 even though some of them are major tickets. The list is getting large,
 we
 will have to cut these down once we get down to the next phase of
 figuring
 out what to include and what not to.

 Thanks
 +Vinod

 On Jul 21, 2015, at 2:15 AM, Akira AJISAKA ajisa...@oss.nttdata.co.jp

 wrote:


 Thanks Vinod for updating the candidate list.
 I'd like to include the following 12 JIRAs:

 * YARN-3641
 * YARN-3585
 * YARN-2910
 * HDFS-8431
 * HDFS-7830
 * HDFS-7763
 * HDFS-7742
 * HDFS-7235
 * HDFS-7225
 * MAPREDUCE-6324
 * HADOOP-11934
 * HADOOP-11491

 Thanks,
 Akira

 On 7/18/15 11:13, Vinod Kumar Vavilapalli wrote:

   - I also have a bunch of patches that I’d like to include, will
 update

 them right away.


 I’ve just finished this. The latest 2.6.1-candidate list is up at 64

 JIRAs.


 Others, please look at the list and post anything else you’d like to

 get included for 2.6.1.


 Thanks
 +Vinod


 On Jul 15, 2015, at 6:24 PM, Vinod Kumar Vavilapalli 

 vino...@hortonworks.com wrote:


 Alright, I’d like to make progress while the issue is hot.

 I created a label to discuss on the candidate list of patches:


 https://issues.apache.org/jira/issues/?jql=labels%20%3D%202.6.1-candidate
 
 


 Next steps, I’ll do the following
   - Review 2.7 and 2.8 blocker/critical tickets and see what makes
 sense

 for 2.6.1 and add as candidates

   - I haven’t reviewed the current list yet, the seed list is from this

 email thread. Will review them.

   - I also have a bunch of patches that I’d like to include, will
 update

 them right away.


 Others, please look at the current list and let me know what else
 you’d

 like to include.


 I’d like to keep this ‘candidate-collection’ cycle for a max of a
 week

 and then start the release process. @Akira, let’s sync up offline on
 how to
 take this forward in terms of the release process.


 Thanks
 +Vinod












[jira] [Resolved] (HDFS-8839) Erasure Coding: client occasionally gets fewer block locations when some datanodes fail

2015-07-31 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-8839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HDFS-8839.
-
Resolution: Duplicate

Thanks Bo for identifying this. I think this is a duplicate of HDFS-8220. 

 Erasure Coding: client occasionally gets fewer block locations when some 
 datanodes fail 
 ---

 Key: HDFS-8839
 URL: https://issues.apache.org/jira/browse/HDFS-8839
 Project: Hadoop HDFS
  Issue Type: Sub-task
Reporter: Li Bo
Assignee: Li Bo

 9 datanodes, write two block groups. A datanode dies while writing the first 
 block group. When the client retrieves the second block group from the 
 namenode, the returned block group occasionally contains only 8 locations.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HDFS-8843) Hadoop dfs command with --verbose option

2015-07-31 Thread Neill Lima (JIRA)
Neill Lima created HDFS-8843:


 Summary: Hadoop dfs command with --verbose option
 Key: HDFS-8843
 URL: https://issues.apache.org/jira/browse/HDFS-8843
 Project: Hadoop HDFS
  Issue Type: Wish
  Components: fs
Affects Versions: 2.7.1
Reporter: Neill Lima
Priority: Minor


Generally, when copying large files from/to HDFS using 
get/put/copyFromLocal/copyToLocal, there is a lot going on under the hood that 
we are not aware of. 

It would be handy to have a --verbose flag to show the status of the 
files/folders being copied at the moment, so we can have a rough ETA on 
completion. 

A good example is the curl -O command.

Another option would be a recursive tree of files showing the progress of each 
as completed/total (%). 
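For illustration only, a rough sketch of how such progress reporting could be
layered on the public FileSystem API; this is not an existing hadoop fs option,
and the class name and argument handling are made up:

import java.io.FileOutputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class VerboseGetSketch {
  public static void main(String[] args) throws Exception {
    Path src = new Path(args[0]);        // HDFS source path
    String dst = args[1];                // local destination file

    Configuration conf = new Configuration();
    FileSystem fs = src.getFileSystem(conf);
    long total = fs.getFileStatus(src).getLen();
    long copied = 0;
    byte[] buf = new byte[64 * 1024];

    try (FSDataInputStream in = fs.open(src);
         OutputStream out = new FileOutputStream(dst)) {
      int n;
      while ((n = in.read(buf)) > 0) {
        out.write(buf, 0, n);
        copied += n;
        // Print a rough completed/total percentage, similar to curl -O.
        System.err.printf("%s: %d/%d bytes (%.1f%%)%n",
            src, copied, total, 100.0 * copied / total);
      }
    }
  }
}

A real implementation would presumably hook into the existing shell copy
commands and handle directories recursively, rather than being a standalone
tool like this sketch.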



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)