[jira] [Commented] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195944#comment-16195944 ] Hadoop QA commented on HDFS-12542: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} hadoop-hdfs in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} hadoop-hdfs-project_hadoop-hdfs-httpfs generated 0 new + 51 unchanged - 1 fixed = 51 total (was 52) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 87m 34s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 33s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}146m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | |
[jira] [Commented] (HDFS-12556) [SPS] : Block movement analysis should be done in read lock.
[ https://issues.apache.org/jira/browse/HDFS-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195911#comment-16195911 ] Hadoop QA commented on HDFS-12556: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-10285 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 47s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} HDFS-10285 passed {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 11m 57s{color} | {color:red} branch has errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} HDFS-10285 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 10m 21s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}161m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestEncryptedTransfer | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithStripedFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.tools.offlineImageViewer.TestOfflineImageViewerForContentSummary | | | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure210 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestReadStripedFileWithDecoding | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestReplaceDatanodeOnFailure | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:14b5c93 | | JIRA Issue | HDFS-12556 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890875/HDFS-12556-HDFS-10285-02.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit
[jira] [Updated] (HDFS-12542) Update javadoc and documentation for listStatus
[ https://issues.apache.org/jira/browse/HDFS-12542?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12542: -- Attachment: HDFS-12542.02.patch > Update javadoc and documentation for listStatus > > > Key: HDFS-12542 > URL: https://issues.apache.org/jira/browse/HDFS-12542 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12542.01.patch, HDFS-12542.02.patch > > > Follow up jira to update javadoc and documentation for listStatus. > [HDFS-12162|https://issues.apache.org/jira/browse/HDFS-12162?focusedCommentId=16130910=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16130910] -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12570) [SPS]: Refactor Co-ordinator datanode logic to track the block storage movements
[ https://issues.apache.org/jira/browse/HDFS-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195883#comment-16195883 ] Hadoop QA commented on HDFS-12570: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 15 new or modified test files. {color} | || || || || {color:brown} HDFS-10285 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 42s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 55s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 27s{color} | {color:green} HDFS-10285 passed {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 12m 27s{color} | {color:red} branch has errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} HDFS-10285 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 51s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 11 new + 1142 unchanged - 2 fixed = 1153 total (was 1144) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:red}-1{color} | {color:red} shadedclient {color} | {color:red} 1m 56s{color} | {color:red} patch has errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}107m 55s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}151m 41s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithDecoding | | | hadoop.hdfs.server.namenode.TestPersistentStoragePolicySatisfier | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure090 | | | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy | | | hadoop.hdfs.TestFileAppendRestart | | | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010 | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithStripedFile | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure080 | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure | | | hadoop.hdfs.TestLeaseRecoveryStriped | | |
[jira] [Commented] (HDFS-12519) Ozone: Add a Lease Manager to SCM
[ https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195850#comment-16195850 ] Hadoop QA commented on HDFS-12519: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 21s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 37s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 10s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}104m 15s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}154m 49s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.qjournal.server.TestJournalNodeSync | | | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation | | | hadoop.hdfs.server.namenode.ha.TestFailureToReadEdits | | | hadoop.hdfs.server.blockmanagement.TestReconstructStripedBlocksWithRackAwareness | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12519 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890870/HDFS-12519-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4298876e9336 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / e75393c | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Commented] (HDFS-12556) [SPS] : Block movement analysis should be done in read lock.
[ https://issues.apache.org/jira/browse/HDFS-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195845#comment-16195845 ] Surendra Singh Lilhore commented on HDFS-12556: --- Attached v2 patch. This patch fixes two test cases: # TestPersistentStoragePolicySatisfier#testWithRestarts() # TestPersistentStoragePolicySatisfier#testWithCheckpoint() The first test fails with an {{ArrayIndexOutOfBoundsException}}, which occurs because the block analysis is done without the read lock. The second test fails as a consequence of the first: the first test times out without shutting down the cluster, which leaves the namedir in an inconsistent state for the second test. > [SPS] : Block movement analysis should be done in read lock. > > > Key: HDFS-12556 > URL: https://issues.apache.org/jira/browse/HDFS-12556 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-12556-HDFS-10285-01.patch, > HDFS-12556-HDFS-10285-02.patch > > > {noformat} > 2017-09-27 15:58:32,852 [StoragePolicySatisfier] ERROR > namenode.StoragePolicySatisfier > (StoragePolicySatisfier.java:handleException(308)) - StoragePolicySatisfier > thread received runtime exception. Stopping Storage policy satisfier work > java.lang.ArrayIndexOutOfBoundsException: 1 > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.getStorages(BlockManager.java:4130) > at > org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.analyseBlocksStorageMovementsAndAssignToDN(StoragePolicySatisfier.java:362) > at > org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.run(StoragePolicySatisfier.java:236) > at java.lang.Thread.run(Thread.java:745) > {noformat}
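The fix described in the comment above — performing the block movement analysis under the namesystem read lock — can be illustrated with a plain {{ReentrantReadWriteLock}}. This is a simplified, hypothetical stand-in (the class and field names are invented for illustration, not the actual FSNamesystem/SPS code): while the read lock is held, writers cannot mutate the storage array, which is what prevents the {{ArrayIndexOutOfBoundsException}} seen in the stack trace.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Simplified stand-in for the SPS analysis step: the read lock keeps the
// storage array stable while it is inspected. In real HDFS this would be
// FSNamesystem#readLock()/readUnlock() around the analysis call.
public class ReadLockedAnalysis {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private String[] storages = {"DISK", "ARCHIVE"};

    int analyseUnderReadLock() {
        lock.readLock().lock();
        try {
            // Under the read lock, writers (e.g. concurrent replica removals)
            // are blocked, so every index into 'storages' stays valid here.
            int count = 0;
            for (String s : storages) {
                if (s != null) count++;
            }
            return count;
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadLockedAnalysis a = new ReadLockedAnalysis();
        System.out.println(a.analyseUnderReadLock()); // prints 2
    }
}
```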
[jira] [Updated] (HDFS-12556) [SPS] : Block movement analysis should be done in read lock.
[ https://issues.apache.org/jira/browse/HDFS-12556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Surendra Singh Lilhore updated HDFS-12556: -- Attachment: HDFS-12556-HDFS-10285-02.patch > [SPS] : Block movement analysis should be done in read lock. > > > Key: HDFS-12556 > URL: https://issues.apache.org/jira/browse/HDFS-12556 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Surendra Singh Lilhore >Assignee: Surendra Singh Lilhore > Attachments: HDFS-12556-HDFS-10285-01.patch, > HDFS-12556-HDFS-10285-02.patch > > > {noformat} > 2017-09-27 15:58:32,852 [StoragePolicySatisfier] ERROR > namenode.StoragePolicySatisfier > (StoragePolicySatisfier.java:handleException(308)) - StoragePolicySatisfier > thread received runtime exception. Stopping Storage policy satisfier work > java.lang.ArrayIndexOutOfBoundsException: 1 > at > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.getStorages(BlockManager.java:4130) > at > org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.analyseBlocksStorageMovementsAndAssignToDN(StoragePolicySatisfier.java:362) > at > org.apache.hadoop.hdfs.server.namenode.StoragePolicySatisfier.run(StoragePolicySatisfier.java:236) > at java.lang.Thread.run(Thread.java:745) > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12570) [SPS]: Refactor Co-ordinator datanode logic to track the block storage movements
[ https://issues.apache.org/jira/browse/HDFS-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-12570: Attachment: HDFS-12570-HDFS-10285-02.patch Attached another patch addressing the case of sharing 'maxTransferStreams'; it introduces a new flag, {{dfs.storage.policy.satisfier.share.equal.replication.max-streams}}. When this flag is enabled, SPS block movement shares the max transfer streams equally with the pending replication/erasure-coded tasks. Otherwise, the pending replication/erasure-coded tasks get priority and only the remaining (delta) streams are used for block move tasks. > [SPS]: Refactor Co-ordinator datanode logic to track the block storage > movements > > > Key: HDFS-12570 > URL: https://issues.apache.org/jira/browse/HDFS-12570 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: datanode, namenode >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-12570-HDFS-10285-00.patch, > HDFS-12570-HDFS-10285-01.patch, HDFS-12570-HDFS-10285-02.patch > > > This task is to refactor the C-DN block storage movements. Basically, the > idea is to move the scheduling and tracking logic to the Namenode rather than > the special C-DN. Please refer to the discussion with [~andrew.wang] to > understand the [background and the necessity of > refactoring|https://issues.apache.org/jira/browse/HDFS-10285?focusedCommentId=16141060=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16141060].
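To make the two modes the flag toggles concrete, here is a hypothetical sketch of the stream split. The method name, the 50/50 split, and the numbers are assumptions for illustration only, not the actual HDFS-12570 implementation:

```java
// Illustrative arithmetic only: with equal sharing enabled, SPS block moves
// get an equal share of maxStreams; otherwise replication/EC tasks have
// priority and SPS only uses whatever streams they leave over (the delta).
public class SpsStreamShare {
    static int spsStreams(int maxStreams, int pendingReplicationTasks,
                          boolean shareEqually) {
        if (shareEqually) {
            return maxStreams / 2;           // equal split with replication work
        }
        // delta mode: SPS gets only the streams replication is not using
        return Math.max(0, maxStreams - pendingReplicationTasks);
    }

    public static void main(String[] args) {
        System.out.println(spsStreams(10, 8, true));  // equal share: 5
        System.out.println(spsStreams(10, 8, false)); // delta only: 2
    }
}
```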
[jira] [Commented] (HDFS-12598) Ozone: Fix 3 node ratis replication in Ozone
[ https://issues.apache.org/jira/browse/HDFS-12598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195797#comment-16195797 ] Hadoop QA commented on HDFS-12598: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 14s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 36s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 46s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 50s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 43s{color} | {color:orange} hadoop-hdfs-project: The patch generated 1 new + 4 unchanged - 0 fixed = 5 total (was 4) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 36s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 32s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 91m 9s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}152m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.qjournal.server.TestJournalNodeSync | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12598 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890865/HDFS-12598-HDFS-7240.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2e4a5c0939bb 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / e75393c | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle |
[jira] [Commented] (HDFS-12616) Ozone: SCM: Open containers are not reused for block allocation after restart
[ https://issues.apache.org/jira/browse/HDFS-12616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195791#comment-16195791 ] Nandakumar commented on HDFS-12616: --- My understanding was that HDFS-12521 is for improving the performance by caching container info in {{ContainerMapping}} during startup, my bad. I'm ok with closing either one of them. > Ozone: SCM: Open containers are not reused for block allocation after restart > - > > Key: HDFS-12616 > URL: https://issues.apache.org/jira/browse/HDFS-12616 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > > When SCM is restarted, previously opened containers are not loaded by > {{ContainerStateManager}}. This causes creation of a new container for each > {{BlockManangerImpl#allocateBlock}} call.
[jira] [Commented] (HDFS-12598) Ozone: Fix 3 node ratis replication in Ozone
[ https://issues.apache.org/jira/browse/HDFS-12598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195787#comment-16195787 ] Tsz Wo Nicholas Sze commented on HDFS-12598: The new patch looks good. Some minor comments: - Let's use try-with-resources for the following: {code} //RatisManagerImpl XceiverClientRatis client = XceiverClientRatis.newXceiverClientRatis(pipeline, conf); client.createPipeline(pipeline.getPipelineName(), pipeline.getMachines()); client.close(); {code} i.e. {code} try(XceiverClientRatis client = XceiverClientRatis.newXceiverClientRatis(pipeline, conf)) { client.createPipeline(pipeline.getPipelineName(), pipeline.getMachines()); } {code} - We should not print info messages in client code. How about changing the below to debug or trace? {code} //XceiverClientRatis LOG.info("initializing pipeline:{} with nodes:{}", clusterId, newPeers); {code} > Ozone: Fix 3 node ratis replication in Ozone > > > Key: HDFS-12598 > URL: https://issues.apache.org/jira/browse/HDFS-12598 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12598-HDFS-7240.001.patch, > HDFS-12598-HDFS-7240.002.patch, HDFS-12598-HDFS-7240.003.patch > > > Enabling ratis 3 node replication on ozone currently fails because > XceiverServerRatis doesn't initialize correctly because the server id is not > initialized correctly. Uploading an initial patch for early feedback. Will > upload an updated patch after testing on the cluster.
[jira] [Commented] (HDFS-12519) Ozone: Add a Lease Manager to SCM
[ https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195786#comment-16195786 ] Nandakumar commented on HDFS-12519: --- Thanks for the review [~linyiqun]. bq. line64 and line107: Can use {{Time.now()}} to replace of {{System.currentTimeMillis()}}? Used {{Time.monotonicNow()}} instead of {{Time.now()}}, according to the {{Time.now()}} javadoc {code} * Current system time. Do not use this to calculate a duration or interval * to sleep, because it will be broken by settimeofday. Instead, use * monotonicNow. {code} bq. line32: Log instance seems get incorrect. This was done intentionally, so that the logging will be done as {{Lease}}. Please let me know if it's misleading, I'm ok with changing it too. bq. line65, line70 line154, line193: should add the check {{LOG.isDebug()}} before invoking {{LOG.debug("")}}; done. bq. line152: the running flag {{isRunning}} doesn't set to false when shutting down lease manager. Thanks for the catch, fixed. bq. Why not use ScheduledExecutorService and make this as a periodical task? Periodic execution will not help us here. Consider the following cases: 1. Service 'A' wants to use LeaseManager with a timeout of 60 seconds 2. Service 'B' wants to use LeaseManager with a timeout of 10 minutes We can't pick a single interval that suits all cases. We could add a parameter to the LeaseManager constructor for the lease check interval, but this would make the API more complex for its users. Using ScheduledExecutorService would also bring in the complexity of two threads executing {{LeaseMonitor#run}}: one triggered whenever a lease expires and the other running as the periodic task. Also, with ScheduledExecutorService we would be unnecessarily executing the monitor periodically even if the LeaseManager is started but never used at all, i.e. no lease is ever acquired. 
> Ozone: Add a Lease Manager to SCM > - > > Key: HDFS-12519 > URL: https://issues.apache.org/jira/browse/HDFS-12519 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Nandakumar > Labels: OzonePostMerge > Attachments: HDFS-12519-HDFS-7240.000.patch, > HDFS-12519-HDFS-7240.001.patch > > > Many objects, including Containers and pipelines can time out during creating > process. We need a way to track these timeouts. This lease Manager allows SCM > to hold a lease on these objects and helps SCM timeout waiting for creating > of these objects.
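The wait/notify design argued for above (sleep exactly until the earliest lease expiry instead of polling on a fixed period) can be sketched roughly as follows. This is a hypothetical illustration, not code from the HDFS-12519 patch; all class and method names here are invented.

```java
import java.util.TreeMap;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: leases with very different timeouts (60s vs 10min)
// share one monitor with no polling interval to tune, and no periodic
// wake-ups occur while no lease is held.
public class LeaseMonitorSketch {
    // monotonic expiry time (nanos) -> callback fired on expiry
    private final TreeMap<Long, Runnable> leases = new TreeMap<>();
    private boolean running = true;

    public synchronized void acquire(long timeoutMillis, Runnable onExpiry) {
        leases.put(System.nanoTime() + TimeUnit.MILLISECONDS.toNanos(timeoutMillis), onExpiry);
        notifyAll(); // wake the monitor: the new lease may now be the earliest
    }

    public synchronized void stop() {
        running = false;
        notifyAll();
    }

    public synchronized void monitor() throws InterruptedException {
        while (running) {
            if (leases.isEmpty()) {
                wait(); // nothing to track: sleep until a lease is acquired
                continue;
            }
            long waitMs = TimeUnit.NANOSECONDS.toMillis(leases.firstKey() - System.nanoTime());
            if (waitMs > 0) {
                wait(waitMs); // re-checked on timeout or on a new acquire()
            } else {
                leases.pollFirstEntry().getValue().run(); // lease expired
            }
        }
    }

    // Demo: a 50ms lease acquired *after* a 500ms one still expires first.
    public static String demo() throws InterruptedException {
        LeaseMonitorSketch m = new LeaseMonitorSketch();
        StringBuilder order = new StringBuilder();
        Thread t = new Thread(() -> {
            try {
                m.monitor();
            } catch (InterruptedException ignored) {
            }
        });
        t.start();
        m.acquire(500, () -> { order.append("long"); m.stop(); });
        m.acquire(50, () -> order.append("short-"));
        t.join();
        return order.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // prints "short-long"
    }
}
```

The monitor loop re-checks its condition on every wake-up, so spurious wakeups and shorter leases arriving mid-wait are both handled by the same path.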
[jira] [Updated] (HDFS-12519) Ozone: Add a Lease Manager to SCM
[ https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12519: -- Attachment: HDFS-12519-HDFS-7240.001.patch > Ozone: Add a Lease Manager to SCM > - > > Key: HDFS-12519 > URL: https://issues.apache.org/jira/browse/HDFS-12519 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Nandakumar > Labels: OzonePostMerge > Attachments: HDFS-12519-HDFS-7240.000.patch, > HDFS-12519-HDFS-7240.001.patch > > > Many objects, including Containers and pipelines can time out during creating > process. We need a way to track these timeouts. This lease Manager allows SCM > to hold a lease on these objects and helps SCM timeout waiting for creating > of these objects.
[jira] [Updated] (HDFS-12593) Ozone: update Ratis to the latest snapshot
[ https://issues.apache.org/jira/browse/HDFS-12593?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz Wo Nicholas Sze updated HDFS-12593: --- Attachment: HDFS-12593-HDFS-7240.20171008.patch [~msingh], thanks for testing the patch. The empty group probably should not use a null id. Here is a new patch. Could you test it? HDFS-12593-HDFS-7240.20171008.patch > Ozone: update Ratis to the latest snapshot > -- > > Key: HDFS-12593 > URL: https://issues.apache.org/jira/browse/HDFS-12593 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-12593-HDFS-7240.20171005.patch, > HDFS-12593-HDFS-7240.20171006.patch, HDFS-12593-HDFS-7240.20171008.patch > > > Apache Ratis has quite a few bug fixes in the latest snapshot (7a5c3ea). Let's > update to it.
[jira] [Comment Edited] (HDFS-12593) Ozone: update Ratis to the latest snapshot
[ https://issues.apache.org/jira/browse/HDFS-12593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195772#comment-16195772 ] Mukul Kumar Singh edited comment on HDFS-12593 at 10/7/17 4:02 PM: --- Thanks for the patch [~szetszwo], I tried running TestCorona locally and it fails with the following stack trace. This might be related to initialization of XceiverServerRatis changes in the patch. {code} 2017-10-07 21:22:15,119 [BP-1749301638-192.168.43.157-1507391532467 heartbeating to localhost/127.0.0.1:56736] WARN impl.RaftServerImpl (JavaUtils.java:attempt(129)) - FAILED newRaftServer attempt #1/5: java.lang.NullPointerException, sleep 500ms and then retry. java.lang.NullPointerException at org.apache.ratis.server.impl.ServerState.(ServerState.java:90) at org.apache.ratis.server.impl.RaftServerImpl.(RaftServerImpl.java:103) at org.apache.ratis.server.impl.RaftServerProxy.initImpl(RaftServerProxy.java:69) at org.apache.ratis.server.impl.RaftServerProxy.(RaftServerProxy.java:65) at org.apache.ratis.server.impl.ServerImplUtils.lambda$newRaftServer$0(ServerImplUtils.java:39) at org.apache.ratis.util.JavaUtils.attempt(JavaUtils.java:123) at org.apache.ratis.server.impl.ServerImplUtils.newRaftServer(ServerImplUtils.java:38) at org.apache.ratis.server.RaftServer$Builder.build(RaftServer.java:70) at org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.(XceiverServerRatis.java:68) at org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.newXceiverServerRatis(XceiverServerRatis.java:132) at org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.(OzoneContainer.java:119) at org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.(DatanodeStateMachine.java:77) at org.apache.hadoop.hdfs.server.datanode.DataNode.bpRegistrationSucceeded(DataNode.java:1592) at org.apache.hadoop.hdfs.server.datanode.BPOfferService.registrationSucceeded(BPOfferService.java:409) at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:783) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:286) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816) at java.lang.Thread.run(Thread.java:745) {code} was (Author: msingh): Thanks for the patch [~szetszwo], I tried running TestCorona locally and it fails with the following stack trace. {code} 2017-10-07 21:22:15,119 [BP-1749301638-192.168.43.157-1507391532467 heartbeating to localhost/127.0.0.1:56736] WARN impl.RaftServerImpl (JavaUtils.java:attempt(129)) - FAILED newRaftServer attempt #1/5: java.lang.NullPointerException, sleep 500ms and then retry. java.lang.NullPointerException at org.apache.ratis.server.impl.ServerState.(ServerState.java:90) at org.apache.ratis.server.impl.RaftServerImpl.(RaftServerImpl.java:103) at org.apache.ratis.server.impl.RaftServerProxy.initImpl(RaftServerProxy.java:69) at org.apache.ratis.server.impl.RaftServerProxy.(RaftServerProxy.java:65) at org.apache.ratis.server.impl.ServerImplUtils.lambda$newRaftServer$0(ServerImplUtils.java:39) at org.apache.ratis.util.JavaUtils.attempt(JavaUtils.java:123) at org.apache.ratis.server.impl.ServerImplUtils.newRaftServer(ServerImplUtils.java:38) at org.apache.ratis.server.RaftServer$Builder.build(RaftServer.java:70) at org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.(XceiverServerRatis.java:68) at org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.newXceiverServerRatis(XceiverServerRatis.java:132) at org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.(OzoneContainer.java:119) at org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.(DatanodeStateMachine.java:77) at org.apache.hadoop.hdfs.server.datanode.DataNode.bpRegistrationSucceeded(DataNode.java:1592) at 
org.apache.hadoop.hdfs.server.datanode.BPOfferService.registrationSucceeded(BPOfferService.java:409) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:783) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:286) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816) at java.lang.Thread.run(Thread.java:745) {code} > Ozone: update Ratis to the latest snapshot > -- > > Key: HDFS-12593 > URL: https://issues.apache.org/jira/browse/HDFS-12593 > Project: Hadoop HDFS > Issue
[jira] [Commented] (HDFS-12593) Ozone: update Ratis to the latest snapshot
[ https://issues.apache.org/jira/browse/HDFS-12593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195772#comment-16195772 ] Mukul Kumar Singh commented on HDFS-12593: -- Thanks for the patch [~szetszwo], I tried running TestCorona locally and it fails with the following stack trace. {code} 2017-10-07 21:22:15,119 [BP-1749301638-192.168.43.157-1507391532467 heartbeating to localhost/127.0.0.1:56736] WARN impl.RaftServerImpl (JavaUtils.java:attempt(129)) - FAILED newRaftServer attempt #1/5: java.lang.NullPointerException, sleep 500ms and then retry. java.lang.NullPointerException at org.apache.ratis.server.impl.ServerState.(ServerState.java:90) at org.apache.ratis.server.impl.RaftServerImpl.(RaftServerImpl.java:103) at org.apache.ratis.server.impl.RaftServerProxy.initImpl(RaftServerProxy.java:69) at org.apache.ratis.server.impl.RaftServerProxy.(RaftServerProxy.java:65) at org.apache.ratis.server.impl.ServerImplUtils.lambda$newRaftServer$0(ServerImplUtils.java:39) at org.apache.ratis.util.JavaUtils.attempt(JavaUtils.java:123) at org.apache.ratis.server.impl.ServerImplUtils.newRaftServer(ServerImplUtils.java:38) at org.apache.ratis.server.RaftServer$Builder.build(RaftServer.java:70) at org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.(XceiverServerRatis.java:68) at org.apache.hadoop.ozone.container.common.transport.server.ratis.XceiverServerRatis.newXceiverServerRatis(XceiverServerRatis.java:132) at org.apache.hadoop.ozone.container.ozoneimpl.OzoneContainer.(OzoneContainer.java:119) at org.apache.hadoop.ozone.container.common.statemachine.DatanodeStateMachine.(DatanodeStateMachine.java:77) at org.apache.hadoop.hdfs.server.datanode.DataNode.bpRegistrationSucceeded(DataNode.java:1592) at org.apache.hadoop.hdfs.server.datanode.BPOfferService.registrationSucceeded(BPOfferService.java:409) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.register(BPServiceActor.java:783) at 
org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:286) at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816) at java.lang.Thread.run(Thread.java:745) {code} > Ozone: update Ratis to the latest snapshot > -- > > Key: HDFS-12593 > URL: https://issues.apache.org/jira/browse/HDFS-12593 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze > Attachments: HDFS-12593-HDFS-7240.20171005.patch, > HDFS-12593-HDFS-7240.20171006.patch > > > Apache Ratis has quite a few bug fixes in the latest snapshot (7a5c3ea). Let's > update to it.
[jira] [Created] (HDFS-12617) Ozone: fix and reenable TestKeysRatis#testPutAndGetKeyWithDnRestart
Mukul Kumar Singh created HDFS-12617: Summary: Ozone: fix and reenable TestKeysRatis#testPutAndGetKeyWithDnRestart Key: HDFS-12617 URL: https://issues.apache.org/jira/browse/HDFS-12617 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Affects Versions: HDFS-7240 Reporter: Mukul Kumar Singh Assignee: Mukul Kumar Singh Fix For: HDFS-7240 As part of HDFS-12598, pipeline allocation and re-initialization with ratis were fixed. However, TestKeysRatis#testPutAndGetKeyWithDnRestart currently fails occasionally. This jira will be used to track the issue and re-enable the test.
[jira] [Updated] (HDFS-12598) Ozone: Fix 3 node ratis replication in Ozone
[ https://issues.apache.org/jira/browse/HDFS-12598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-12598: - Attachment: HDFS-12598-HDFS-7240.003.patch [~anu] [~szetszwo] Patch v3 addresses all the review comments. I have disabled the datanode restart test in TestKeysRatis as the test currently fails with datanode restarts; I will raise a new jira to track that and re-enable the test later. > Ozone: Fix 3 node ratis replication in Ozone > > > Key: HDFS-12598 > URL: https://issues.apache.org/jira/browse/HDFS-12598 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-12598-HDFS-7240.001.patch, > HDFS-12598-HDFS-7240.002.patch, HDFS-12598-HDFS-7240.003.patch > > > Enabling ratis 3 node replication on ozone currently fails because > XceiverServerRatis doesn't initialize correctly because the server id is not > initialized correctly. Uploading an initial patch for early feedback. Will > upload an updated patch after testing on the cluster.
[jira] [Commented] (HDFS-12616) Ozone: SCM: Open containers are not reused for block allocation after restart
[ https://issues.apache.org/jira/browse/HDFS-12616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195718#comment-16195718 ] Anu Engineer commented on HDFS-12616: - I feel that this is a duplicate of HDFS-12521, but this is worded better. Perhaps close the other JIRA? > Ozone: SCM: Open containers are not reused for block allocation after restart > - > > Key: HDFS-12616 > URL: https://issues.apache.org/jira/browse/HDFS-12616 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar > > When SCM is restarted, previously opened containers are not loaded by > {{ContainerStateManager}}. This causes creation of new container for > {{BlockManangerImpl#allocateBlock}} call.
[jira] [Commented] (HDFS-12519) Ozone: Add a Lease Manager to SCM
[ https://issues.apache.org/jira/browse/HDFS-12519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195710#comment-16195710 ] Yiqun Lin commented on HDFS-12519: -- Thanks for working on this, [~nandakumar131]. The patch looks good overall. The review comments for some part of code: *Lease.java* line64 and line107: Can use {{Time.now()}} to replace of {{System.currentTimeMillis()}}? *LeaseCallbackExecutor.java* line32: Log instance seems get incorrect. *LeaseManager.java* line65, line70 line154, line193: should add the check {{LOG.isDebug()}} before invoking {{LOG.debug("");}} line152: the running flag {{isRunning}} doesn't set to false when shutting down lease manager. line218: Here we use the InterruptedException to detect the adding lease operation and then re-trigger run(). This logic looks tricky. Why not use ScheduledExecutorService and make this as a periodical task? > Ozone: Add a Lease Manager to SCM > - > > Key: HDFS-12519 > URL: https://issues.apache.org/jira/browse/HDFS-12519 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Nandakumar > Labels: OzonePostMerge > Attachments: HDFS-12519-HDFS-7240.000.patch > > > Many objects, including Containers and pipelines can time out during creating > process. We need a way to track these timeouts. This lease Manager allows SCM > to hold a lease on these objects and helps SCM timeout waiting for creating > of these objects.
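The {{LOG.isDebugEnabled()}} guard requested in the line65/line70 comment above matters when a log argument is expensive to build: the guard skips that work entirely when debug logging is off. The sketch below is a self-contained illustration with a stand-in logger and invented names, not code from the patch.

```java
// Hypothetical illustration of the isDebugEnabled guard idiom. A counter
// tracks how many times the "expensive" argument is actually computed.
public class DebugGuardSketch {
    static int expensiveCalls = 0;
    static boolean debugEnabled = false; // e.g. the production setting

    // Stand-in for an expensive log argument, e.g. serializing lease state.
    static String expensiveDescription() {
        expensiveCalls++;
        return "lease state ...";
    }

    static void debug(String msg) { /* no-op stand-in logger */ }

    public static int callsWithoutGuard() {
        expensiveCalls = 0;
        debug("state: " + expensiveDescription()); // argument built regardless
        return expensiveCalls;                     // 1: wasted work
    }

    public static int callsWithGuard() {
        expensiveCalls = 0;
        if (debugEnabled) {                        // the guard from the review
            debug("state: " + expensiveDescription());
        }
        return expensiveCalls;                     // 0: skipped when disabled
    }

    public static void main(String[] args) {
        System.out.println(callsWithoutGuard() + " vs " + callsWithGuard()); // prints "1 vs 0"
    }
}
```

With SLF4J-style parameterized logging ({{LOG.debug("state: {}", x)}}) the guard is only needed when computing {{x}} itself is costly; plain string concatenation always pays the cost without the guard.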
[jira] [Created] (HDFS-12616) Ozone: SCM: Open containers are not reused for block allocation after restart
Nandakumar created HDFS-12616: - Summary: Ozone: SCM: Open containers are not reused for block allocation after restart Key: HDFS-12616 URL: https://issues.apache.org/jira/browse/HDFS-12616 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Nandakumar Assignee: Nandakumar When SCM is restarted, previously opened containers are not loaded by {{ContainerStateManager}}. This causes creation of a new container for every {{BlockManangerImpl#allocateBlock}} call.
[jira] [Updated] (HDFS-12537) Ozone: Reduce key creation overhead in Corona
[ https://issues.apache.org/jira/browse/HDFS-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nandakumar updated HDFS-12537: -- Resolution: Fixed Status: Resolved (was: Patch Available) > Ozone: Reduce key creation overhead in Corona > - > > Key: HDFS-12537 > URL: https://issues.apache.org/jira/browse/HDFS-12537 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Lokesh Jain >Assignee: Lokesh Jain > Attachments: HDFS-12537-HDFS-7240.001.patch, > HDFS-12537-HDFS-7240.002.patch, HDFS-12537-HDFS-7240.003.patch, > HDFS-12537-HDFS-7240.004.patch, HDFS-12537-HDFS-7240.005.patch, > HDFS-12537-HDFS-7240.006.patch > > > Currently Corona creates random key values for each key. This creates a lot > of overhead. An option should be provided to use a single key value.
[jira] [Commented] (HDFS-12537) Ozone: Reduce key creation overhead in Corona
[ https://issues.apache.org/jira/browse/HDFS-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195627#comment-16195627 ] Nandakumar commented on HDFS-12537: --- Thanks [~ljain] for the contribution and [~anu] for the review. I have committed the patch to the feature branch. > Ozone: Reduce key creation overhead in Corona > - > > Key: HDFS-12537 > URL: https://issues.apache.org/jira/browse/HDFS-12537 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Lokesh Jain >Assignee: Lokesh Jain > Attachments: HDFS-12537-HDFS-7240.001.patch, > HDFS-12537-HDFS-7240.002.patch, HDFS-12537-HDFS-7240.003.patch, > HDFS-12537-HDFS-7240.004.patch, HDFS-12537-HDFS-7240.005.patch, > HDFS-12537-HDFS-7240.006.patch > > > Currently Corona creates random key values for each key. This creates a lot > of overhead. An option should be provided to use a single key value.
[jira] [Commented] (HDFS-12537) Ozone: Reduce key creation overhead in Corona
[ https://issues.apache.org/jira/browse/HDFS-12537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195625#comment-16195625 ] Nandakumar commented on HDFS-12537: --- +1 on the latest patch, I will commit this shortly. > Ozone: Reduce key creation overhead in Corona > - > > Key: HDFS-12537 > URL: https://issues.apache.org/jira/browse/HDFS-12537 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Lokesh Jain >Assignee: Lokesh Jain > Attachments: HDFS-12537-HDFS-7240.001.patch, > HDFS-12537-HDFS-7240.002.patch, HDFS-12537-HDFS-7240.003.patch, > HDFS-12537-HDFS-7240.004.patch, HDFS-12537-HDFS-7240.005.patch, > HDFS-12537-HDFS-7240.006.patch > > > Currently Corona creates random key values for each key. This creates a lot > of overhead. An option should be provided to use a single key value.
[jira] [Commented] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195610#comment-16195610 ] Hadoop QA commented on HDFS-12553: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 39s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 16 new + 296 unchanged - 16 fixed = 312 total (was 312) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 52s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 88m 10s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}133m 13s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestPipelinesFailover | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12553 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12890852/HDFS-12553.07.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 9c443b4ce9be 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5d63a38 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21582/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21582/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results |
[jira] [Commented] (HDFS-12494) libhdfs SIGSEGV in setTLSExceptionStrings
[ https://issues.apache.org/jira/browse/HDFS-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195598#comment-16195598 ] Hudson commented on HDFS-12494: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13046 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13046/]) HDFS-12494. libhdfs SIGSEGV in setTLSExceptionStrings. Contributed by (jzhuge: rev 2856eb207bfb206f22a6266f42cad0257083ab94) * (edit) hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfs/jni_helper.c > libhdfs SIGSEGV in setTLSExceptionStrings > - > > Key: HDFS-12494 > URL: https://issues.apache.org/jira/browse/HDFS-12494 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs >Affects Versions: 3.0.0-alpha4 >Reporter: John Zhuge >Assignee: John Zhuge > Fix For: 3.0.0, 3.1.0 > > Attachments: HDFS-12494.001.patch > > > libhdfs application crashes when CLASSPATH is set but not set properly. It > uses wildcard in this case. > {noformat} > $ export CLASSPATH=$(hadoop classpath) > $ pwd > /Users/jzhuge/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/native > $ ./test_libhdfs_ops > # > # A fatal error has been detected by the Java Runtime Environment: > # > # SIGSEGV (0xb) at pc=0x0001052968f7, pid=14147, tid=775 > # > # JRE version: Java(TM) SE Runtime Environment (7.0_79-b15) (build > 1.7.0_79-b15) > # Java VM: Java HotSpot(TM) 64-Bit Server VM (24.79-b02 mixed mode bsd-amd64 > compressed oops) > # Problematic frame: > # C [libhdfs.0.0.0.dylib+0x38f7] setTLSExceptionStrings+0x47 > # > # Core dump written. Default location: /cores/core or core.14147 > # > # An error report file with more information is saved as: > # > /Users/jzhuge/hadoop2/hadoop-hdfs-project/hadoop-hdfs/target/native/hs_err_pid14147.log > # > # > # If you would like to submit a bug report, please visit: > # http://bugreport.java.com/bugreport/crash.jsp > # The crash happened outside the Java Virtual Machine in native code. 
> # See problematic frame for where to report the bug. > # > Abort trap: 6 (core dumped) > [jzhuge@jzhuge-MBP native]((be32925fff5...) *+)$ lldb -c /cores/core.14147 > (lldb) target create --core "/cores/core.14147" > warning: (x86_64) /cores/core.14147 load command 549 LC_SEGMENT_64 has a > fileoff + filesize (0x14627f000) that extends beyond the end of the file > (0x14627e000), the segment will be truncated to match > warning: (x86_64) /cores/core.14147 load command 550 LC_SEGMENT_64 has a > fileoff (0x14627f000) that extends beyond the end of the file (0x14627e000), > ignoring this section > Core file '/cores/core.14147' (x86_64) was loaded. > (lldb) bt > * thread #1, stop reason = signal SIGSTOP > * frame #0: 0x7fffcf89ad42 libsystem_kernel.dylib`__pthread_kill + 10 > frame #1: 0x7fffcf988457 libsystem_pthread.dylib`pthread_kill + 90 > frame #2: 0x7fffcf800420 libsystem_c.dylib`abort + 129 > frame #3: 0x0001056cd5fb libjvm.dylib`os::abort(bool) + 25 > frame #4: 0x0001057d98fc libjvm.dylib`VMError::report_and_die() + 2308 > frame #5: 0x0001056cefb5 libjvm.dylib`JVM_handle_bsd_signal + 1083 > frame #6: 0x7fffcf97bb3a libsystem_platform.dylib`_sigtramp + 26 > frame #7: 0x0001052968f8 > libhdfs.0.0.0.dylib`setTLSExceptionStrings(rootCause=0x, > stackTrace=0x) at jni_helper.c:589 [opt] > frame #8: 0x0001052954f0 > libhdfs.0.0.0.dylib`printExceptionAndFreeV(env=0x7ffaff0019e8, > exc=0x7ffafec04140, noPrintFlags=, fmt="loadFileSystems", > ap=) at exception.c:183 [opt] > frame #9: 0x0001052956bb > libhdfs.0.0.0.dylib`printExceptionAndFree(env=, > exc=, noPrintFlags=, fmt=) at > exception.c:213 [opt] > frame #10: 0x0001052967f4 libhdfs.0.0.0.dylib`getJNIEnv [inlined] > getGlobalJNIEnv at jni_helper.c:463 [opt] > frame #11: 0x00010529664f libhdfs.0.0.0.dylib`getJNIEnv at > jni_helper.c:528 [opt] > frame #12: 0x0001052975eb > libhdfs.0.0.0.dylib`hdfsBuilderConnect(bld=0x7ffafed0) at hdfs.c:693 > [opt] > frame #13: 0x00010528be30 test_libhdfs_ops`main(argc=, > argv=) at 
test_libhdfs_ops.c:91 [opt] > frame #14: 0x7fffcf76c235 libdyld.dylib`start + 1 > (lldb) f 10 > libhdfs.0.0.0.dylib was compiled with optimization - stepping may behave > oddly; variables may not be available. > frame #10: 0x0001052967f4 libhdfs.0.0.0.dylib`getJNIEnv [inlined] > getGlobalJNIEnv at jni_helper.c:463 [opt] > 460            "org/apache/hadoop/fs/FileSystem", > 461            "loadFileSystems", "()V"); > 462        if (jthr) { > -> 463            printExceptionAndFree(env, jthr, PRINT_EXC_ALL, > "loadFileSystems"); > 464        } > 465    } >
[jira] [Commented] (HDFS-12596) Add TestFsck#testFsckCorruptWhenOneReplicaIsCorrupt back to branch-2.7
[ https://issues.apache.org/jira/browse/HDFS-12596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16195597#comment-16195597 ] Hadoop QA commented on HDFS-12596: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} branch-2.7 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 1s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 15s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 22s{color} | {color:green} branch-2.7 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} branch-2.7 passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s{color} | {color:green} branch-2.7 passed with JDK v1.7.0_151 {color} | || || || || 
{color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 60 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 58s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}656m 42s{color} | {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_151. 
{color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 18m 12s{color} | {color:red} The patch generated 167 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}762m 45s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | JDK v1.8.0_144 Failed junit tests | hadoop.hdfs.server.namenode.TestDeadDatanode | | | hadoop.hdfs.server.namenode.TestCacheDirectives | | | hadoop.hdfs.server.datanode.TestBlockScanner | | | hadoop.hdfs.web.TestWebHdfsTokens | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | JDK v1.7.0_151 Failed junit tests | hadoop.hdfs.server.namenode.TestLeaseManager | | | hadoop.hdfs.server.datanode.TestDataNodeInitStorage | | | hadoop.hdfs.security.TestDelegationTokenForProxyUser | | | hadoop.hdfs.server.namenode.ha.TestStandbyBlockManagement | | | hadoop.hdfs.tools.TestDFSAdminWithHA | | | hadoop.hdfs.server.datanode.TestBPOfferService | | |
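The -1 whitespace vote above tells the contributor to re-apply the patch with `git apply --whitespace=fix`. A minimal sketch of that workflow (the repository and file names here are invented for illustration; it assumes git is on PATH):

```shell
# Demo: apply a patch whose added lines carry trailing whitespace,
# letting git clean it up on the way in.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email qa@example.com
git config user.name "QA Bot"
printf 'hello\n' > file.txt
git add file.txt
git commit -qm init
# Introduce a change whose new first line ends in trailing whitespace,
# and capture it as a patch.
printf 'hello   \nworld\n' > file.txt
git diff > ../whitespace.patch
git checkout -q -- file.txt
# --whitespace=fix applies the patch and strips the trailing whitespace
# (git warns about each fixed line on stderr).
git apply --whitespace=fix ../whitespace.patch
grep -q 'hello$' file.txt && echo 'whitespace fixed'
```

Without `--whitespace=fix`, `git apply` would keep the trailing blanks and only warn, which is exactly what trips the QA bot's whitespace check.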
[jira] [Updated] (HDFS-12494) libhdfs SIGSEGV in setTLSExceptionStrings
[ https://issues.apache.org/jira/browse/HDFS-12494?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HDFS-12494: -- Resolution: Fixed Fix Version/s: 3.1.0 3.0.0 Status: Resolved (was: Patch Available) Committed to trunk and branch-3.0. Thanks [~sailesh] for the review! > libhdfs SIGSEGV in setTLSExceptionStrings > - > > Key: HDFS-12494 > URL: https://issues.apache.org/jira/browse/HDFS-12494 > Project: Hadoop HDFS > Issue Type: Bug > Components: libhdfs > Affects Versions: 3.0.0-alpha4 > Reporter: John Zhuge > Assignee: John Zhuge > Fix For: 3.0.0, 3.1.0 > > Attachments: HDFS-12494.001.patch > > > libhdfs application crashes when CLASSPATH is set but not set properly. It > uses wildcard in this case. (The full crash log and lldb backtrace are quoted in the Hudson comment on this issue above.)
[jira] [Updated] (HDFS-12553) Add nameServiceId to QJournalProtocol
[ https://issues.apache.org/jira/browse/HDFS-12553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12553: -- Attachment: HDFS-12553.07.patch > Add nameServiceId to QJournalProtocol > - > > Key: HDFS-12553 > URL: https://issues.apache.org/jira/browse/HDFS-12553 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham > Attachments: HDFS-12553.01.patch, HDFS-12553.02.patch, > HDFS-12553.03.patch, HDFS-12553.04.patch, HDFS-12553.05.patch, > HDFS-12553.06.patch, HDFS-12553.07.patch > > > Add nameServiceId to QJournalProtocol. > This is used during a federated + HA setup to find the JournalNodes belonging to a > nameservice. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
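The motivation quoted above — one JournalNode serving several federated nameservices, so journal RPCs must say which nameservice they target — can be sketched as follows. This is a hypothetical illustration, not the actual QJournalProtocol API; the interface, class, and method names are invented.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of why an RPC needs a nameServiceId parameter:
// a shared JournalNode may host journals for several federated
// nameservices, so the caller must identify which one it means.
interface JournalStateProtocol {
    String getJournalState(String nameServiceId, String journalId);
}

class FederatedJournalNode implements JournalStateProtocol {
    // nameServiceId -> (journalId -> state)
    private final Map<String, Map<String, String>> journals = new HashMap<>();

    void register(String nameServiceId, String journalId, String state) {
        journals.computeIfAbsent(nameServiceId, k -> new HashMap<>())
                .put(journalId, state);
    }

    @Override
    public String getJournalState(String nameServiceId, String journalId) {
        // Without nameServiceId, two nameservices reusing the same
        // journalId could not be told apart on a shared JournalNode.
        return journals.getOrDefault(nameServiceId, Map.of())
                       .getOrDefault(journalId, "unknown");
    }
}
```

The design point is that the nameservice id becomes part of the RPC signature itself, so routing happens at the protocol layer rather than being inferred from connection state.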