[jira] [Updated] (HDFS-11873) Ozone: Object store handler cannot serve multiple requests from single http client
[ https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Xiaoyu Yao updated HDFS-11873:
------------------------------
    Attachment: HDFS-11873-HDFS-7240.003.patch

> Ozone: Object store handler cannot serve multiple requests from single http
> client
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-11873
>                 URL: https://issues.apache.org/jira/browse/HDFS-11873
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Weiwei Yang
>            Assignee: Xiaoyu Yao
>            Priority: Critical
>              Labels: ozoneMerge
>         Attachments: HDFS-11873-HDFS-7240.001.patch,
> HDFS-11873-HDFS-7240.002.patch, HDFS-11873-HDFS-7240.003.patch,
> HDFS-11873-HDFS-7240.testcase.patch
>
> This issue was found while working on HDFS-11846. Instead of creating a new
> http client instance per request, I tried to reuse {{CloseableHttpClient}} in
> the {{OzoneClient}} class via a {{PoolingHttpClientConnectionManager}}.
> However, every second request from the http client hangs and never gets
> dispatched to {{ObjectStoreJerseyContainer}}. There seems to be something
> wrong in the netty pipeline; this jira aims to 1) fix the problem on the
> server side and 2) pool the http clients on the client side to reduce the
> resource overhead.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11873) Ozone: Object store handler cannot serve multiple requests from single http client
[ https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16168816#comment-16168816 ]

Xiaoyu Yao commented on HDFS-11873:
-----------------------------------
I think it is caused by a stale cluster-configuration setting for the local storage root, {{conf.setBoolean(OzoneConfigKeys.OZONE_LOCALSTORAGE_ROOT, true);}}, used for the local handler. Updated patch v3 to fix the checkstyle and ASF license issues caused by the wrong test root being created and left over after the test.
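The cause described above can be sketched without Hadoop on the classpath. This is a minimal, hypothetical stand-in for Hadoop's Configuration (the key string and default path are made up for illustration), showing why calling setBoolean on a path-valued key such as OZONE_LOCALSTORAGE_ROOT leaves a directory literally named true/ behind:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the suspected bug: setBoolean stores the string "true" under a
 * key that the storage layer treats as a path, so the test creates a
 * relative directory named true/ in the working directory.
 */
public class MisusedBooleanConfig {
  private final Map<String, String> props = new HashMap<>();

  // Hadoop-style setters: both store plain strings under the hood.
  public void set(String key, String value) {
    props.put(key, value);
  }

  public void setBoolean(String key, boolean value) {
    props.put(key, Boolean.toString(value));
  }

  /** What a local storage handler would use as its root directory. */
  public static String storageRoot(MisusedBooleanConfig conf) {
    // Key name is a stand-in for the real constant in OzoneConfigKeys.
    return conf.props.getOrDefault("ozone.localstorage.root", "/tmp/ozone");
  }

  public static void main(String[] args) {
    MisusedBooleanConfig conf = new MisusedBooleanConfig();
    conf.setBoolean("ozone.localstorage.root", true);       // the suspect call
    System.out.println(storageRoot(conf));                  // prints "true"
    conf.set("ozone.localstorage.root", "/tmp/ozone-test"); // the intended call
    System.out.println(storageRoot(conf));                  // prints "/tmp/ozone-test"
  }
}
```

With the string setter the root resolves to a real path; with the boolean setter it resolves to the literal directory name "true", which matches the leftover true/ tree reported below.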
[jira] [Updated] (HDFS-12375) Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh.
[ https://issues.apache.org/jira/browse/HDFS-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Wenxin He updated HDFS-12375:
-----------------------------
    Attachment: hdfs-site.xml

bq. dfs.namenode.shared.edits.dir.mycluster or dfs.namenode.shared.edits.dir=qjournal://y124.l42scl.hortonworks.com:8485/mycluster ?

The former. In my situation, [^hdfs-site.xml]:
{noformat}
<property>
  <name>dfs.namenode.shared.edits.dir.ns1</name>
  <value>qjournal://hadoop3-0-0-01:8485;hadoop3-0-0-02:8485;hadoop3-0-0-03:8485;/ns1</value>
</property>
{noformat}
{noformat}
root@hadoop3-0-0-01:/opt/hadoop# sbin/stop-dfs.sh
root@hadoop3-0-0-01:/opt/hadoop# jps
11648 Jps
2625 NodeManager
2299 ResourceManager
583 JournalNode
{noformat}
The JournalNode is still alive after sbin/stop-dfs.sh. I did some investigation; it may be that line 103 in sbin/stop-dfs.sh does not get the value in this situation. sbin/start-dfs.sh has the same problem.
{noformat}
100 #---------------------------------------------------------
101 # quorumjournal nodes (if any)
102
103 SHARED_EDITS_DIR=$("${HADOOP_HDFS_HOME}/bin/hdfs" getconf -confKey dfs.namenode.shared.edits.dir 2>&-)
104
105 case "${SHARED_EDITS_DIR}" in
106 qjournal://*)
{noformat}

> Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh.
> ---------------------------------------------------------------
>
>                 Key: HDFS-12375
>                 URL: https://issues.apache.org/jira/browse/HDFS-12375
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: federation, scripts
>    Affects Versions: 3.0.0-beta1
>            Reporter: Wenxin He
>            Assignee: Bharat Viswanadham
>         Attachments: hdfs-site.xml
>
> When 'dfs.namenode.shared.edits.dir' is suffixed with the corresponding
> NameServiceID, we cannot start/stop journalnodes using
> start-dfs.sh/stop-dfs.sh.
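The gap can be illustrated with a minimal sketch (a plain map standing in for Hadoop's Configuration; the key names are real HDFS keys, but the fallback logic is an assumed fix, not the committed one): when only the NameServiceID-suffixed key is defined, a plain lookup of the base key returns nothing, so the scripts would need to fall back to the per-nameservice variants:

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of the lookup the scripts would need: resolve the plain key first,
 * then each "key.nsId" variant derived from dfs.nameservices.
 */
public class SuffixedKeyLookup {

  static String resolveSharedEditsDir(Map<String, String> conf) {
    String value = conf.get("dfs.namenode.shared.edits.dir");
    if (value != null) {
      return value; // non-federated case: base key is set
    }
    String nameservices = conf.getOrDefault("dfs.nameservices", "");
    for (String ns : nameservices.split(",")) {
      value = conf.get("dfs.namenode.shared.edits.dir." + ns.trim());
      if (value != null) {
        return value; // first suffixed match wins in this sketch
      }
    }
    return null; // what the current script effectively sees
  }

  /** Conf mirroring the attached hdfs-site.xml (hostnames from the report). */
  static Map<String, String> sampleConf() {
    Map<String, String> conf = new HashMap<>();
    conf.put("dfs.nameservices", "ns1");
    conf.put("dfs.namenode.shared.edits.dir.ns1",
        "qjournal://hadoop3-0-0-01:8485;hadoop3-0-0-02:8485;hadoop3-0-0-03:8485/ns1");
    return conf;
  }

  public static void main(String[] args) {
    System.out.println(resolveSharedEditsDir(sampleConf()));
  }
}
```

Because only `dfs.namenode.shared.edits.dir.ns1` is set, the plain `getconf -confKey dfs.namenode.shared.edits.dir` in line 103 yields an empty string, the `qjournal://*` case never matches, and the journalnodes are never contacted.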
[jira] [Commented] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
[ https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16168809#comment-16168809 ]

Weiwei Yang commented on HDFS-12454:
------------------------------------
Hi [~vagarychen]

RESTful clients don't talk to KSM directly. When a client runs the command {{hdfs oz -listBucket http://localhost:9864/hive}}, the call is routed to the datanode http server and eventually handled by the {{ObjectStoreJerseyContainer}} running on {{DatanodeHttpServer}}. That's why clients talk to port {{9864}}, which is {{DFS_DATANODE_HTTP_DEFAULT_PORT}}. So this part is correct.

The reason I suggested ensuring {{ozone.ksm.address}} is present is that this is where the user specifies which host runs the KSM service. If it is not properly configured, it will cause problems when the user wants to start KSM via the {{start-ozone.sh}} script in a multi-node environment. I agree with you that it is not necessary to force the user to configure the KSM RPC port; perhaps this property could allow configuring just the host name (if the port is not explicitly set, the default is used)? Maybe we should make the SCM and KSM configuration similar: SCM supports setting {{ozone.scm.names}}, so perhaps we should let KSM support {{ozone.ksm.names}} as well?

Thank you for following up on this. We need this ticket to sort all these things out and expose only the necessary, clean configs to end users. Appreciated.

> Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
> ----------------------------------------------------------------------
>
>                 Key: HDFS-12454
>                 URL: https://issues.apache.org/jira/browse/HDFS-12454
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Chen Liang
>            Assignee: Chen Liang
>            Priority: Blocker
>              Labels: ozoneMerge
>         Attachments: HDFS-12454-HDFS-7240.001.patch
>
> In OzoneGettingStarted.md there is a sample ozone-site.xml file. But there
> are a few issues with it.
> 1.
> {code}
> <property>
>   <name>ozone.scm.block.client.address</name>
>   <value>scm.hadoop.apache.org</value>
> </property>
> <property>
>   <name>ozone.ksm.address</name>
>   <value>ksm.hadoop.apache.org</value>
> </property>
> {code}
> The value should be an address instead.
> 2.
> {{datanode.ObjectStoreHandler.(ObjectStoreHandler.java:103)}} requires
> {{ozone.scm.client.address}} to be set, which is missing from this sample
> file. Missing this config will seem to cause failure on starting datanode.
> 3.
> {code}
> <property>
>   <name>ozone.scm.names</name>
>   <value>scm.hadoop.apache.org</value>
> </property>
> {code}
> This value did not make much sense to me; I found the comment in
> {{ScmConfigKeys}} that says
> {code}
> // ozone.scm.names key is a set of DNS | DNS:PORT | IP Address | IP:PORT.
> // Written as a comma separated string. e.g. scm1, scm2:8020, 7.7.7.7:
> {code}
> So maybe we should write something like scm1 as the value here.
> 4. I'm not entirely sure about this, but
> [here|https://wiki.apache.org/hadoop/Ozone#Configuration] it says
> {code}
> <property>
>   <name>ozone.handler.type</name>
>   <value>local</value>
> </property>
> {code}
> is also part of the minimum setting; do we need to add this [~anu]?
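The "host name only, default port" idea discussed above can be sketched as follows. This is an illustration, not the Ozone implementation; the default port constant is hypothetical, and the real parsing lives in the Ozone/Hadoop net utilities:

```java
/**
 * Sketch of resolving a KSM address that may be given as HOST or HOST:PORT,
 * falling back to a default port when none is specified.
 */
public class HostPortConfig {
  // Hypothetical default; the real value belongs to the Ozone config keys.
  static final int DEFAULT_KSM_PORT = 9862;

  static String host(String address) {
    int idx = address.indexOf(':');
    return idx < 0 ? address : address.substring(0, idx);
  }

  static int port(String address) {
    int idx = address.indexOf(':');
    return idx < 0 ? DEFAULT_KSM_PORT : Integer.parseInt(address.substring(idx + 1));
  }

  public static void main(String[] args) {
    // Host-only value: the default port is filled in.
    System.out.println(host("ksm.hadoop.apache.org") + ":" + port("ksm.hadoop.apache.org"));
    // Explicit port wins.
    System.out.println(host("ksm1:9999") + ":" + port("ksm1:9999"));
  }
}
```

This is the behavior the comment asks for: users list only host names in the config, and ports are enforced only when they deliberately override the default.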
[jira] [Assigned] (HDFS-11909) Ozone: KSM : Support for simulated file system operations
[ https://issues.apache.org/jira/browse/HDFS-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mukul Kumar Singh reassigned HDFS-11909:
----------------------------------------
    Assignee: Mukul Kumar Singh

> Ozone: KSM : Support for simulated file system operations
> ---------------------------------------------------------
>
>                 Key: HDFS-11909
>                 URL: https://issues.apache.org/jira/browse/HDFS-11909
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Anu Engineer
>            Assignee: Mukul Kumar Singh
>              Labels: OzonePostMerge
>         Attachments: simulation-file-system.pdf
>
> This JIRA adds a proposal that makes it easy to implement OzoneFileSystem.
> This makes the directory and file list operations simpler.
[jira] [Created] (HDFS-12476) TestCopyFromLocal.testCopyFromLocalWithThreads fails intermittently
Mukul Kumar Singh created HDFS-12476:
----------------------------------------

             Summary: TestCopyFromLocal.testCopyFromLocalWithThreads fails intermittently
                 Key: HDFS-12476
                 URL: https://issues.apache.org/jira/browse/HDFS-12476
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: tools
            Reporter: Mukul Kumar Singh
            Assignee: Mukul Kumar Singh

TestCopyFromLocal.testCopyFromLocalWithThreads fails intermittently.
{code}
java.lang.AssertionError: expected:<0> but was:<8>
	at org.junit.Assert.fail(Assert.java:88)
	at org.junit.Assert.failNotEquals(Assert.java:743)
	at org.junit.Assert.assertEquals(Assert.java:118)
	at org.junit.Assert.assertEquals(Assert.java:555)
	at org.junit.Assert.assertEquals(Assert.java:542)
	at org.apache.hadoop.fs.shell.TestCopyFromLocal$TestMultiThreadedCopy.processArguments(TestCopyFromLocal.java:167)
	at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:119)
	at org.apache.hadoop.fs.shell.Command.run(Command.java:176)
	at org.apache.hadoop.fs.shell.TestCopyFromLocal.run(TestCopyFromLocal.java:101)
	at org.apache.hadoop.fs.shell.TestCopyFromLocal.testCopyFromLocalWithThreads(TestCopyFromLocal.java:121)
{code}
[jira] [Commented] (HDFS-11873) Ozone: Object store handler cannot serve multiple requests from single http client
[ https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16168801#comment-16168801 ]

Weiwei Yang commented on HDFS-11873:
------------------------------------
Hi [~xyao]

Thanks for fixing this; the patch should fix the issue. I am not familiar with the netty internals, so it might be good to have someone else review this patch as well.

One small thing: when I run the test case, it seems to create the following dirs/files under {{~/hadoop/hadoop-hdfs-project/hadoop-hdfs}}:
{noformat}
ls -R true/
_objects/  metadata.db/  user.db/

true//_objects:

true//metadata.db:
03.log  CURRENT  IDENTITY  LOCK  LOG  MANIFEST-01  OPTIONS-05

true//user.db:
03.log  CURRENT  IDENTITY  LOCK  LOG  MANIFEST-01  OPTIONS-05
{noformat}
Any idea why?
[jira] [Commented] (HDFS-12467) Ozone: SCM: NodeManager should log when it comes out of chill mode
[ https://issues.apache.org/jira/browse/HDFS-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16168800#comment-16168800 ]

Nandakumar commented on HDFS-12467:
-----------------------------------
Thanks [~vagarychen] for the review.

The idea here is to log when SCM comes out of chill mode during normal startup, not through a manual exit of chill mode. The reason for having the two flags {{chillMode}} and {{inManualChillMode}} is to differentiate the two kinds of chill mode (chill mode during startup vs. manual chill mode). I have handled the scenario you explained in patch v001; please review and let me know if any case/scenario is missed.

> Ozone: SCM: NodeManager should log when it comes out of chill mode
> ------------------------------------------------------------------
>
>                 Key: HDFS-12467
>                 URL: https://issues.apache.org/jira/browse/HDFS-12467
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>            Reporter: Nandakumar
>            Assignee: Nandakumar
>            Priority: Minor
>         Attachments: HDFS-12467-HDFS-7240.000.patch,
> HDFS-12467-HDFS-7240.001.patch
>
> {{NodeManager}} should add a log message when it comes out of chill mode.
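The two-flag scheme described in the comment can be sketched as follows. This is a simplified stand-in, not the actual SCM NodeManager code; method names are invented for illustration:

```java
/**
 * Sketch of tracking startup chill mode and manual chill mode separately,
 * so leaving startup chill mode can be detected and logged exactly once.
 */
public class ChillModeState {
  private boolean chillMode = true;           // startup chill mode
  private boolean inManualChillMode = false;  // operator-forced chill mode

  /** Called once enough datanodes have reported in. */
  public void exitStartupChillMode() {
    if (chillMode) {
      chillMode = false;
      // The log message HDFS-12467 asks for, emitted only on the transition.
      System.out.println("NodeManager is out of startup chill mode.");
    }
  }

  public void enterManualChillMode() { inManualChillMode = true; }

  public void exitManualChillMode() { inManualChillMode = false; }

  /** In chill mode if either the startup flag or the manual flag is set. */
  public boolean isInChillMode() {
    return chillMode || inManualChillMode;
  }

  public static void main(String[] args) {
    ChillModeState state = new ChillModeState();
    state.exitStartupChillMode(); // logs once
    state.exitStartupChillMode(); // no duplicate log
    state.enterManualChillMode();
    System.out.println(state.isInChillMode()); // still in (manual) chill mode
  }
}
```

Keeping the flags separate means a manual chill-mode exit never suppresses or duplicates the one-time startup log line.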
[jira] [Updated] (HDFS-12467) Ozone: SCM: NodeManager should log when it comes out of chill mode
[ https://issues.apache.org/jira/browse/HDFS-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Nandakumar updated HDFS-12467:
------------------------------
    Attachment: HDFS-12467-HDFS-7240.001.patch
[jira] [Commented] (HDFS-12473) Change hosts JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16168795#comment-16168795 ]

Hadoop QA commented on HDFS-12473:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-hdfs-project: The patch generated 3 new + 7 unchanged - 0 fixed = 10 total (was 7) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}118m 33s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 15s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
| | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery |
| | hadoop.hdfs.TestDecommissionWithStriped |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.TestLeaseRecoveryStriped |
| | hadoop.hdfs.server.namenode.TestReconstructStripedBlocks |
| | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA |
| | hadoop.hdfs.server.namenode.TestNamenodeRetryCache |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12473 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887466/HDFS-12473-2.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 4509f7b5d268 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ef8cd5d |
| Default Java | 1.8.0_144 |
|
[jira] [Commented] (HDFS-12471) Ozone: Reduce some KSM/SCM deletion log messages from INFO to DEBUG
[ https://issues.apache.org/jira/browse/HDFS-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16168791#comment-16168791 ]

Weiwei Yang commented on HDFS-12471:
------------------------------------
Hi [~xyao], thanks for creating this jira; this was already on my to-do list, so I will use this one to track it. Thank you.

> Ozone: Reduce some KSM/SCM deletion log messages from INFO to DEBUG
> -------------------------------------------------------------------
>
>                 Key: HDFS-12471
>                 URL: https://issues.apache.org/jira/browse/HDFS-12471
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Xiaoyu Yao
>            Assignee: Weiwei Yang
>
> Looks like we are logging a few no-op messages every minute in the KSM/SCM log.
> Should we reduce the log level to DEBUG or TRACE? cc: [~anu], [~cheersyang],
> [~yuanbo].
> {code}
> 2017-09-14 23:42:15,022 [SCMBlockDeletingService#0] INFO (SCMBlockDeletingService.java:103) - Running DeletedBlockTransactionScanner
> 2017-09-14 23:42:15,024 [SCMBlockDeletingService#0] INFO (SCMBlockDeletingService.java:136) - Scanned deleted blocks log and got 0 delTX to process
> 2017-09-14 23:42:24,139 [KeyDeletingService#1] INFO (KeyDeletingService.java:123) - No pending deletion key found in KSM
> 2017-09-14 23:43:09,377 [BlockDeletingService#2] INFO (BlockDeletingService.java:109) - Plan to choose 10 containers for block deletion, actually returns 0 valid containers.
> 2017-09-14 23:43:15,027 [SCMBlockDeletingService#0] INFO (SCMBlockDeletingService.java:103) - Running DeletedBlockTransactionScanner
> 2017-09-14 23:43:15,027 [SCMBlockDeletingService#0] INFO (SCMBlockDeletingService.java:136) - Scanned deleted blocks log and got 0 delTX to process
> 2017-09-14 23:43:24,146 [KeyDeletingService#1] INFO (KeyDeletingService.java:123) - No pending deletion key found in KSM
> {code}
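The proposed change amounts to moving the per-iteration "nothing to do" messages below the default log level. A sketch using java.util.logging for self-containment (the real services use a different logging facade; FINE stands in for DEBUG here, and the method name is invented for illustration):

```java
import java.util.logging.Logger;

/**
 * Sketch: routine no-op scan results go to debug level so they disappear
 * from default (INFO-level) logs; actionable results stay at INFO.
 */
public class DeletionLogLevel {
  private static final Logger LOG = Logger.getLogger("SCMBlockDeletingService");

  /** Chooses the log level for a scan result. */
  static String levelFor(int delTxCount) {
    return delTxCount == 0 ? "DEBUG" : "INFO";
  }

  static void scanCompleted(int delTxCount) {
    String msg = "Scanned deleted blocks log and got " + delTxCount + " delTX to process";
    if (delTxCount == 0) {
      LOG.fine(msg); // java.util.logging's closest analog to DEBUG
    } else {
      LOG.info(msg);
    }
  }

  public static void main(String[] args) {
    scanCompleted(0); // suppressed at the default (INFO) logger level
    scanCompleted(5); // still visible
  }
}
```

With this split the once-a-minute "got 0 delTX to process" and "No pending deletion key found" lines vanish from normal logs, while non-empty scans remain visible.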
[jira] [Assigned] (HDFS-12471) Ozone: Reduce some KSM/SCM deletion log messages from INFO to DEBUG
[ https://issues.apache.org/jira/browse/HDFS-12471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Weiwei Yang reassigned HDFS-12471:
----------------------------------
    Assignee: Weiwei Yang
[jira] [Commented] (HDFS-12460) make addErasureCodingPolicy an idempotent operation
[ https://issues.apache.org/jira/browse/HDFS-12460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16168766#comment-16168766 ]

Hadoop QA commented on HDFS-12460:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 45s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 3 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 3s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}133m 22s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics |
| | hadoop.hdfs.TestDecommission |
| | hadoop.hdfs.TestLeaseRecoveryStriped |
| | hadoop.hdfs.TestFileCorruption |
| | hadoop.hdfs.server.namenode.ha.TestHAAppend |
| | hadoop.hdfs.TestReconstructStripedFile |
| | hadoop.hdfs.TestLease |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
| | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12460 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887284/HDFS-12460.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 846dd094b3c3 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / ef8cd5d |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21180/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21180/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21180/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> make addErasureCodingPolicy an idempotent operation
>
[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router
[ https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16168762#comment-16168762 ]

Hadoop QA commented on HDFS-12381:
----------------------------------
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 2m 56s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} HDFS-10467 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 13s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} HDFS-10467 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} HDFS-10467 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 96m 10s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}120m 54s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
| | hadoop.hdfs.TestLeaseRecoveryStriped |
| Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:71bbb86 |
| JIRA Issue | HDFS-12381 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887455/HDFS-12381-HDFS-10467.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml |
| uname | Linux a02b1f7279a6 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-10467 / 2d490d3 |
| Default Java | 1.8.0_144 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21179/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21179/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21179/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> [Documentation] Adding configuration keys for the Router
> --------------------------------------------------------
>
>                 Key: HDFS-12381
>                 URL: https://issues.apache.org/jira/browse/HDFS-12381
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: fs
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Minor
>             Fix For: HDFS-10467
>
>         Attachments: HDFS-12381-HDFS-10467.000.patch,
> HDFS-12381-HDFS-10467.001.patch, HDFS-12381-HDFS-10467.002.patch
>
>
[jira] [Commented] (HDFS-12437) TestLeaseRecoveryStriped fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168755#comment-16168755 ] Hadoop QA commented on HDFS-12437: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 
46s{color} | {color:red} hadoop-hdfs-project_hadoop-hdfs generated 3 new + 408 unchanged - 0 fixed = 411 total (was 408) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 6 unchanged - 1 fixed = 6 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}101m 37s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}129m 0s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure180 | | | hadoop.hdfs.server.namenode.TestNamenodeRetryCache | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestReconstructStripedFile | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12437 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887452/HDFS-12437.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cc15c8b9502c 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ef8cd5d | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | javac | https://builds.apache.org/job/PreCommit-HDFS-Build/21178/artifact/patchprocess/diff-compile-javac-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21178/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21178/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U:
[jira] [Commented] (HDFS-12273) Federation UI
[ https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168757#comment-16168757 ] Chris Douglas commented on HDFS-12273: -- bq. one thing is missing is the WebHDFS interface. Do you guys think it should go in HDFS-10467? That could go in after the merge pretty easily. Certainly not a prerequisite. [~raviprak], if it's not too traumatic for you to review/relive the front-end and security bits, then we can wait on your review. Otherwise we make a concerted effort to harden this code later, after HDFS-12284 is in. > Federation UI > - > > Key: HDFS-12273 > URL: https://issues.apache.org/jira/browse/HDFS-12273 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: federationUI-1.png, federationUI-2.png, > federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, > HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, > HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch > > > Add the Web UI to the Router to expose the status of the federated cluster. > It includes the federation metrics. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12473) Change hosts JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HDFS-12473: --- Attachment: HDFS-12473-2.patch Thanks Manoj. Here is the updated patch to address your comments. bq. What happens when the hosts file has improper json format? I was hoping we could get this in before the 3.0 beta release and thus avoid compatibility issues. But it looks like the upgrade domain feature has been backported to 2.8.2. Unfortunately, that means we have to support the old format. bq. #readFile can now return null object The updated patch returns an empty array instead. bq. If MAPPER is no more used, can be removed. It was removed; maybe you were referring to the existing file. bq. CombinedHostsFileReader.readFile() can return null if the input hosts file has no entries. This is covered by the test case testEmptyCombinedHostsFileReader. > Change hosts JSON file format > - > > Key: HDFS-12473 > URL: https://issues.apache.org/jira/browse/HDFS-12473 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-12473-2.patch, HDFS-12473.patch > > > The existing hosts JSON file format doesn't have a top-level token.
> {noformat} > {"hostName": "host1"} > {"hostName": "host2", "upgradeDomain": "ud0"} > {"hostName": "host3", "adminState": "DECOMMISSIONED"} > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"} > {"hostName": "host5", "port": 8090} > {"hostName": "host6", "adminState": "IN_MAINTENANCE"} > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > {noformat} > Instead, to conform with the JSON standard, it should be: > {noformat} > [ > {"hostName": "host1"}, > {"hostName": "host2", "upgradeDomain": "ud0"}, > {"hostName": "host3", "adminState": "DECOMMISSIONED"}, > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"}, > {"hostName": "host5", "port": 8090}, > {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > ] > {noformat}
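The backward-compatibility requirement discussed above — continuing to accept the legacy line-delimited format alongside the new JSON array — can be sketched in shell. This is only an illustration, not the actual patch: the function name `normalize_hosts_json` is mine, and it assumes the legacy file holds exactly one JSON object per line, as in the examples.

```shell
#!/usr/bin/env bash
# Hedged sketch: normalize a hosts file so a strict JSON parser can read it.
# If the file already starts with '[', it is the new array format and is
# passed through unchanged; otherwise the legacy line-delimited objects are
# wrapped into a JSON array.
normalize_hosts_json() {
  local f="$1"
  if grep -q '^[[:space:]]*\[' "$f"; then
    # Already the new array format.
    cat "$f"
  else
    # Wrap legacy objects in brackets, comma-terminating all but the last line.
    printf '[\n'
    sed -e '$!s/$/,/' "$f"
    printf ']\n'
  fi
}
```

A reader that must support both formats could normalize its input this way before handing it to a JSON library, rather than maintaining two parsers.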
[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router
[ https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168724#comment-16168724 ] Chris Douglas commented on HDFS-12381: -- bq. I added a note on security being work in progress. For user docs, might as well spell it out instead of linking into JIRA. I'd also move the notice closer to the top of the page, rather than the bottom. Something like: {noformat} Secure authentication and authorization are not supported yet, so the Router will not proxy to Hadoop clusters with security enabled. {noformat} > [Documentation] Adding configuration keys for the Router > > > Key: HDFS-12381 > URL: https://issues.apache.org/jira/browse/HDFS-12381 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: HDFS-10467 > > Attachments: HDFS-12381-HDFS-10467.000.patch, > HDFS-12381-HDFS-10467.001.patch, HDFS-12381-HDFS-10467.002.patch > > > Adding configuration options in tabular format.
[jira] [Commented] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168705#comment-16168705 ] Hadoop QA commented on HDFS-12472: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 16s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 94m 46s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.server.namenode.TestNamenodeRetryCache | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure020 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12472 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887435/HDFS-12472.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 18ede19b9076 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 90894c7 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21176/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21176/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21176/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was
[jira] [Commented] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168701#comment-16168701 ] Hadoop QA commented on HDFS-12472: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 
53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}114m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}144m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.TestNamenodeRetryCache | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.server.blockmanagement.TestRBWBlockInvalidation | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestAclsEndToEnd | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12472 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887432/HDFS-12472.00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux fed6e77ce54e 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 90894c7 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21175/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21175/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x
[ https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168686#comment-16168686 ] Allen Wittenauer commented on HDFS-11096: - bq. how do you feel about set -v? It's usually overkill. It typically means that the code isn't giving enough hints about what it's doing, or that it lacks a --debug option to tell me about important events if something is going wrong and I need in-depth help. bq. And did you mean set +e? No, I meant setting 'set -e' as was in the code. The default for bash is set +e. bq. I feel like if it's okay for a command to sometimes fail, you can deal with that return code explicitly, otherwise I'd like that failure to bubble up. Am I missing something? set -e will exit bash with no chance to deal with the failure. That's disastrous if the code actually knows how to exit gracefully, has workarounds or multiple ways to try something, or wants to use boolean result codes. bq. Not sure I understand exactly what hadoop_actual_ssh was supposed to be doing before, but it's not used elsewhere and is marked as private hadoop_actual_ssh is called from hadoop_connect_to_hosts_without_pdsh. That code was originally written with xargs, but it wasn't done in a POSIX-compatible way, so the xargs implementation got removed. When it was written with xargs, the problem was that one can't pass a function as a parameter. So ... other ways were invented. You can look at the original xargs implementation in hadoop-user-functions.sh.example to see how I worked around it. That said... I have a hunch that your changes probably break the hadoop/hdfs/... --workers flag, either completely or just its output when pdsh isn't installed. An easier entry point is likely to be hadoop_connect_to_hosts, which can take advantage of pdsh and do things in parallel. You just need to set the appropriate HADOOP_WORKERS var. bq. $(dirname ${0}) That's shellcheck hinting that something may be wrong...
;) I generally go for something like: {code} this="${BASH_SOURCE-$0}" BINDIR=$(cd -P -- "$(dirname -- "${this}")" >/dev/null && pwd -P) {code} This fixes some potential issues: * deals with most of the ridiculous aliasing that people tend to do with cd * leading dashes in directory names aren't a problem * gives an absolute, symlink-resolved path * and one of my absolute favorite obscure bugs: bash foo.sh now works if foo.sh is in the path {code} $ cat ~/bin/foo.sh echo "0: $0" echo "BS0: ${BASH_SOURCE-$0}" $ foo.sh 0: /Users/aw/bin/foo.sh BS0: /Users/aw/bin/foo.sh $ bash foo.sh 0: foo.sh BS0: /Users/aw/bin/foo.sh {code} bq. Switch to using create-release - --native wasn't working because the Docker image doesn't have a high enough version of cmake Rebase required? Or is this some other Dockerfile? Because otherwise precommit would be failing... > Support rolling upgrade between 2.x and 3.x > --- > > Key: HDFS-11096 > URL: https://issues.apache.org/jira/browse/HDFS-11096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rolling upgrades >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Sean Mackrory >Priority: Blocker > Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch, > HDFS-11096.003.patch > > > trunk has a minimum software version of 3.0.0-alpha1. This means we can't > do a rolling upgrade between branch-2 and trunk. > This is a showstopper for large deployments. Unless there are very compelling > reasons to break compatibility, let's restore the ability to do rolling upgrades > to 3.x releases.
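The 'set -e' hazard described in the comment above can be demonstrated in a few lines. This is only an illustrative sketch — the function names are mine, not from the Hadoop shell scripts:

```shell
#!/usr/bin/env bash
# With 'set -e', the first failing command aborts the subshell outright,
# so the fallback echo after 'false' never runs.
run_with_set_e() {
  bash -c 'set -e; false; echo "fallback ran"'
}

# Handling the return code explicitly keeps control in the script,
# so a workaround can be attempted after the failure.
run_with_explicit_check() {
  bash -c 'if ! false; then echo "failure handled, trying workaround"; fi'
}

run_with_set_e || echo "exited before any fallback could run"
run_with_explicit_check
```

Running this prints "exited before any fallback could run" followed by "failure handled, trying workaround" — the set -e variant produces no output of its own because bash exits at `false`, which is the "no chance to deal with the failure" point being made above.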
[jira] [Commented] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
[ https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168684#comment-16168684 ] Hudson commented on HDFS-10701: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12891 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12891/]) HDFS-10701. TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired (wang: rev ef8cd5dc565f901b4954befe784675e130e84c3c) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/TestDFSStripedOutputStreamWithFailure.java > TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails > -- > > Key: HDFS-10701 > URL: https://issues.apache.org/jira/browse/HDFS-10701 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Wei-Chiu Chuang >Assignee: SammiChen > Labels: flaky-test > Fix For: 3.0.0-beta1 > > Attachments: HDFS-10701.000.patch, HDFS-10701.001.patch > > > I noticed this test failure in a recent precommit build, and I also found > this test had failed a few times in the Hadoop-Hdfs-trunk build in the past. > But I do not have sufficient knowledge to tell if it's a flaky test or a bug > in the code.
[jira] [Updated] (HDFS-12371) "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX
[ https://issues.apache.org/jira/browse/HDFS-12371?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-12371: -- Attachment: HDFS-12371.001.patch > "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX > - > > Key: HDFS-12371 > URL: https://issues.apache.org/jira/browse/HDFS-12371 > Project: Hadoop HDFS > Issue Type: Bug > Components: metrics >Affects Versions: 2.7.1 >Reporter: Sai Nukavarapu >Assignee: Hanisha Koneru > Attachments: HDFS-12371.001.patch > > > "BlockVerificationFailures" and "BlocksVerified" show up as 0 in Datanode JMX. > Looking at the code, I see the description below. > {noformat} > `BlockVerificationFailures` | Total number of verifications failures | > `BlocksVerified` | Total number of blocks verified | > {noformat}
[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router
[ https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168682#comment-16168682 ] Íñigo Goiri commented on HDFS-12381: Thanks for the comments [~brahmareddy] and [~manojg]. From your comments, I realized that I wasn't talking much about the client, so I added a full section on that. At the same time, I think I covered most of your comments. Let me know if there is anything else you think should be documented. [~chris.douglas] I added a note on WIP for security. Does that seem reasonable as a placeholder until HDFS-12284 is complete? > [Documentation] Adding configuration keys for the Router > > > Key: HDFS-12381 > URL: https://issues.apache.org/jira/browse/HDFS-12381 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: HDFS-10467 > > Attachments: HDFS-12381-HDFS-10467.000.patch, > HDFS-12381-HDFS-10467.001.patch, HDFS-12381-HDFS-10467.002.patch > > > Adding configuration options in tabular format.
[jira] [Comment Edited] (HDFS-12381) [Documentation] Adding configuration keys for the Router
[ https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168682#comment-16168682 ] Íñigo Goiri edited comment on HDFS-12381 at 9/15/17 11:38 PM: -- Thanks for the comments [~brahmareddy] and [~manojg]. From your comments, I realized that I wasn't talking much about the client, so I added a full section on that. At the same time, I think I covered most of your comments. Let me know if there is anything else you think should be documented. [~chris.douglas] I added a note on security being work in progress. Does that seem reasonable as a placeholder until HDFS-12284 is complete? was (Author: elgoiri): Thanks for the comments [~brahmareddy] and [~manojg]. From your comments, I realized that I wasn't talking much about the client, I added a full section on that. At the same time, I think I covered most of your comments. Let me know if there is anything else you think should be documented. [~chris.douglas] I added a not on WIP for security. Does that seems reasonable as a placeholder until HDFS-12284 is complete? > [Documentation] Adding configuration keys for the Router > > > Key: HDFS-12381 > URL: https://issues.apache.org/jira/browse/HDFS-12381 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: HDFS-10467 > > Attachments: HDFS-12381-HDFS-10467.000.patch, > HDFS-12381-HDFS-10467.001.patch, HDFS-12381-HDFS-10467.002.patch > > > Adding configuration options in tabular format.
[jira] [Updated] (HDFS-12381) [Documentation] Adding configuration keys for the Router
[ https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12381: --- Attachment: HDFS-12381-HDFS-10467.002.patch > [Documentation] Adding configuration keys for the Router > > > Key: HDFS-12381 > URL: https://issues.apache.org/jira/browse/HDFS-12381 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: HDFS-10467 > > Attachments: HDFS-12381-HDFS-10467.000.patch, > HDFS-12381-HDFS-10467.001.patch, HDFS-12381-HDFS-10467.002.patch > > > Adding configuration options in tabular format.
[jira] [Commented] (HDFS-11873) Ozone: Object store handler cannot serve multiple requests from single http client
[ https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168677#comment-16168677 ] Hadoop QA commented on HDFS-11873: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 16s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 12s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac 
{color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 41s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 13s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s{color} | {color:red} The patch generated 10 ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}133m 30s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestFileCorruption | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure110 | | | hadoop.ozone.web.client.TestKeys | | | hadoop.tools.TestHdfsConfigFields | | | hadoop.ozone.scm.TestContainerSQLCli | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure100 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure140 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure170 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure060 | | | hadoop.hdfs.TestFileAppendRestart | | | hadoop.fs.viewfs.TestViewFileSystemHdfs | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure040 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure050 | | | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure070 | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-11873 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887426/HDFS-11873-HDFS-7240.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7ba4c28efa01 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 8dbd035 | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | |
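The QA run above is for HDFS-11873, where reusing a single pooled {{CloseableHttpClient}} caused every second request to hang. A toy Python sketch (not the actual Ozone/Netty or Apache HttpClient code) of the pooling discipline involved: a fixed-size pool only works if each request releases its connection back, so a response that is never completed on the server side starves the next acquire in exactly this "every second request hangs" pattern:

```python
import queue

class ConnectionPool:
    """Toy fixed-size connection pool; illustrative only, not the Ozone client."""
    def __init__(self, size):
        self._free = queue.Queue()
        for i in range(size):
            self._free.put("conn-%d" % i)

    def acquire(self, timeout=0.1):
        # Blocks until a connection is released back to the pool; raises
        # queue.Empty on timeout, which models the hanging second request.
        return self._free.get(timeout=timeout)

    def release(self, conn):
        self._free.put(conn)

pool = ConnectionPool(size=1)

# Correct usage: release after each request, so the pool can hand the
# same connection to the next request.
c = pool.acquire()
pool.release(c)
c2 = pool.acquire()   # succeeds: the connection was returned
pool.release(c2)

# Buggy usage: the first request never releases its connection (e.g. the
# server never finishes dispatching the response), so the next acquire
# times out instead of completing.
leaked = pool.acquire()
try:
    pool.acquire()
    hung = False
except queue.Empty:
    hung = True
```

This is why the fix needed both halves listed in the description: repair the server-side pipeline so responses complete, and only then pool clients to cut resource overhead.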
[jira] [Updated] (HDFS-12460) make addErasureCodingPolicy an idempotent operation
[ https://issues.apache.org/jira/browse/HDFS-12460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12460: --- Component/s: (was: caching) erasure-coding > make addErasureCodingPolicy an idempotent operation > --- > > Key: HDFS-12460 > URL: https://issues.apache.org/jira/browse/HDFS-12460 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Reporter: SammiChen >Assignee: SammiChen > Attachments: HDFS-12460.001.patch > > > Make addErasureCodingPolicy an idempotent operation to guarantee after HA > switch, addErasureCodingPolicy edit log can be applied smoothly. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
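The idempotency being requested can be sketched as follows. This is an illustrative Python registry, not Hadoop's actual ErasureCodingPolicyManager API; the policy name and schema shapes are assumptions. The point is that replaying the same add (as happens when an edit-log entry is re-applied after an HA switch) must be a no-op rather than an error:

```python
class PolicyExistsConflict(Exception):
    """Raised only when a policy name is reused with a different schema."""

class ECPolicyRegistry:
    """Toy sketch of an idempotent add operation."""
    def __init__(self):
        self._policies = {}

    def add_policy(self, name, schema):
        existing = self._policies.get(name)
        if existing is None:
            self._policies[name] = schema
            return schema
        if existing == schema:
            # Idempotent: re-applying the identical operation (e.g. an
            # edit-log replay after HA failover) changes nothing and succeeds.
            return existing
        # Same name but a genuinely different definition is still an error.
        raise PolicyExistsConflict(name)

reg = ECPolicyRegistry()
first = reg.add_policy("RS-3-2-1024k", ("RS", 3, 2))
# Replaying the identical operation succeeds and leaves the registry unchanged.
replayed = reg.add_policy("RS-3-2-1024k", ("RS", 3, 2))
```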
[jira] [Updated] (HDFS-12460) make addErasureCodingPolicy an idempotent operation
[ https://issues.apache.org/jira/browse/HDFS-12460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12460: --- Status: Patch Available (was: Open)
[jira] [Commented] (HDFS-11096) Support rolling upgrade between 2.x and 3.x
[ https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168674#comment-16168674 ] Hadoop QA commented on HDFS-11096: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue} 0m 6s{color} | {color:blue} Shelldocs was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} pylint {color} | {color:orange} 0m 8s{color} | {color:orange} The patch generated 422 new + 0 unchanged - 0 fixed = 422 total (was 0) {color} | | {color:red}-1{color} | {color:red} shellcheck {color} | {color:red} 0m 4s{color} | {color:red} The patch generated 20 new + 20 unchanged - 0 fixed = 40 total (was 20) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 9s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-11096 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887450/HDFS-11096.003.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs pylint | | uname | Linux 9700c0d44d30 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 1a84c24 | | shellcheck | v0.4.6 | | pylint | v1.7.2 | | pylint | https://builds.apache.org/job/PreCommit-HDFS-Build/21177/artifact/patchprocess/diff-patch-pylint.txt | | shellcheck | https://builds.apache.org/job/PreCommit-HDFS-Build/21177/artifact/patchprocess/diff-patch-shellcheck.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21177/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21177/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Support rolling upgrade between 2.x and 3.x > --- > > Key: HDFS-11096 > URL: https://issues.apache.org/jira/browse/HDFS-11096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rolling upgrades >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Sean Mackrory >Priority: Blocker > Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch, > HDFS-11096.003.patch > > > trunk has a minimum software version of 3.0.0-alpha1. This means we can't > rolling upgrade between branch-2 and trunk. > This is a showstopper for large deployments. 
Unless there are very compelling > reasons to break compatibility, let's restore the ability to rolling upgrade > to 3.x releases.
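The mechanism that blocks the upgrade can be sketched as a minimum-version gate. The helper below is purely illustrative (Hadoop's real compatibility check happens in its RPC and layout-version handshake, not in a function like this), but it shows why pinning the minimum software version to 3.0.0-alpha1 rejects every 2.x peer:

```python
def can_rolling_upgrade(peer_version, minimum_version):
    """Toy version gate: compare dotted release versions numerically.
    Pre-release suffixes like "-alpha1" are stripped before comparing,
    which is a simplification of real version ordering."""
    def key(v):
        core = v.split("-")[0]
        return tuple(int(p) for p in core.split("."))
    return key(peer_version) >= key(minimum_version)

# With the minimum pinned to 3.0.0-alpha1, any branch-2 peer is rejected,
# which is why branch-2 -> trunk rolling upgrades currently fail.
old_peer_ok = can_rolling_upgrade("2.8.2", "3.0.0-alpha1")
new_peer_ok = can_rolling_upgrade("3.0.0", "3.0.0-alpha1")
```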
[jira] [Commented] (HDFS-12473) Change hosts JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168672#comment-16168672 ] Hadoop QA commented on HDFS-12473: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 11s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 15s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 45s{color} | {color:orange} hadoop-hdfs-project: The patch generated 7 new + 7 unchanged - 0 fixed = 14 total (was 7) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 25s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 6s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}137m 28s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.TestClose | | | hadoop.hdfs.server.namenode.TestNamenodeRetryCache | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestParallelUnixDomainRead | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestReconstructStripedFile | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | | | org.apache.hadoop.hdfs.server.blockmanagement.TestBlockStatsMXBean | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12473 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887424/HDFS-12473.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b9d9b8a3674b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b9b607d | |
[jira] [Assigned] (HDFS-12444) Reduce runtime of TestWriteReadStripedFile
[ https://issues.apache.org/jira/browse/HDFS-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reassigned HDFS-12444: -- Assignee: Huafeng Wang (was: Andrew Wang) > Reduce runtime of TestWriteReadStripedFile > -- > > Key: HDFS-12444 > URL: https://issues.apache.org/jira/browse/HDFS-12444 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding, test >Affects Versions: 3.0.0-alpha4 >Reporter: Andrew Wang >Assignee: Huafeng Wang > Attachments: HDFS-12444.001.patch, HDFS-12444.002.patch, > HDFS-12444.003.patch > > > This test takes a long time to run since it writes a lot of data, and > frequently times out during precommit testing. If we change the EC policy > from RS(6,3) to RS(3,2) then it will run a lot faster.
[jira] [Commented] (HDFS-12444) Reduce runtime of TestWriteReadStripedFile
[ https://issues.apache.org/jira/browse/HDFS-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168670#comment-16168670 ] Andrew Wang commented on HDFS-12444: Assigning this to Huafeng since he seems to be working on it.
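The speedup argument behind the RS(6,3) to RS(3,2) switch is simple arithmetic: a full erasure-coded stripe needs dataUnits × cellSize bytes of user data, so halving the data units halves what the test must write to exercise full stripes. A sketch, assuming HDFS's default 1 MB EC cell size (the default is an assumption to verify against the cluster config):

```python
def full_stripe_bytes(data_units, cell_size=1024 * 1024):
    """Bytes of user data needed to fill one full EC stripe."""
    return data_units * cell_size

rs_6_3 = full_stripe_bytes(6)  # RS(6,3): 6 MB of data per full stripe
rs_3_2 = full_stripe_bytes(3)  # RS(3,2): 3 MB of data per full stripe
```

Since TestWriteReadStripedFile sizes its files in stripes, the smaller policy roughly halves the I/O per test case while still covering the same partial/full-stripe code paths.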
[jira] [Updated] (HDFS-10701) TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails
[ https://issues.apache.org/jira/browse/HDFS-10701?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-10701: --- Resolution: Fixed Fix Version/s: 3.0.0-beta1 Status: Resolved (was: Patch Available) Thanks for working on this Sammi! I ran testBlockTokenExpired a few times locally and it passed. Committed to trunk and branch-3.0. > TestDFSStripedOutputStreamWithFailure#testBlockTokenExpired occasionally fails > -- > > Key: HDFS-10701 > URL: https://issues.apache.org/jira/browse/HDFS-10701 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding >Affects Versions: 3.0.0-alpha2 >Reporter: Wei-Chiu Chuang >Assignee: SammiChen > Labels: flaky-test > Fix For: 3.0.0-beta1 > > Attachments: HDFS-10701.000.patch, HDFS-10701.001.patch > > > I noticed this test failure in a recent precommit build, and I also found > this test had failed for a few times in Hadoop-Hdfs-trunk build in the past. > But I do not have sufficient knowledge to tell if it's a flaky test or a bug > in the code.
[jira] [Updated] (HDFS-12437) TestLeaseRecoveryStriped fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12437: --- Labels: flaky-test (was: ) > TestLeaseRecoveryStriped fails in trunk > --- > > Key: HDFS-12437 > URL: https://issues.apache.org/jira/browse/HDFS-12437 > Project: Hadoop HDFS > Issue Type: Bug > Components: erasure-coding, test >Affects Versions: 3.0.0-beta1 >Reporter: Arpit Agarwal >Assignee: Andrew Wang > Labels: flaky-test > Attachments: HDFS-12437.001.patch, HDFS-12437.002.patch > > > Fails consistently for me in trunk with the following call stack. > {code} > TestLeaseRecoveryStriped.testLeaseRecovery:152 failed testCase at i=0, > blockLengths=[5242880, 7340032, 5242880, 8388608, 7340032, 3145728, 9437184, > 10485760, 11534336] > java.io.IOException: Failed: the number of failed blocks = 4 > the number of > parity blocks = 3 > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:394) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.handleStreamerFailure(DFSStripedOutputStream.java:412) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.flushAllInternals(DFSStripedOutputStream.java:1264) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:629) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:565) > at > org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217) > at > org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164) > at > org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145) > at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48) > at java.io.DataOutputStream.write(DataOutputStream.java:88) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.writePartialBlocks(TestLeaseRecoveryStriped.java:182) > 
at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:158) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:147) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) > at > 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) > {code}
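The IOException in the stack trace above comes from the invariant that a striped write can tolerate at most as many failed block streamers as the policy has parity blocks; with RS(6,3), the 4th streamer failure is fatal. A minimal Python sketch of that check (the function name and argument shapes are illustrative, not the Java signature of DFSStripedOutputStream.checkStreamers):

```python
def check_streamers(failed_streamers, num_parity_blocks):
    """Abort the striped write once failures exceed what parity can repair."""
    if len(failed_streamers) > num_parity_blocks:
        raise IOError(
            "Failed: the number of failed blocks = %d > the number of "
            "parity blocks = %d" % (len(failed_streamers), num_parity_blocks))

# The failure in this report: 4 failed streamers against RS(6,3)'s
# 3 parity blocks, so the write must abort.
try:
    check_streamers(["s1", "s2", "s3", "s4"], num_parity_blocks=3)
    aborted = False
except IOError:
    aborted = True
```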
[jira] [Updated] (HDFS-12437) TestLeaseRecoveryStriped fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12437: --- Component/s: test erasure-coding
[jira] [Updated] (HDFS-12437) TestLeaseRecoveryStriped fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HDFS-12437: --- Attachment: HDFS-12437.002.patch I think I tracked down the issue to not flushing out the remaining block streams at the end. I ran this 5 times locally and it worked. Arpit, could you do a review?
[jira] [Assigned] (HDFS-12437) TestLeaseRecoveryStriped fails in trunk
[ https://issues.apache.org/jira/browse/HDFS-12437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang reassigned HDFS-12437: -- Assignee: Andrew Wang > TestLeaseRecoveryStriped fails in trunk > --- > > Key: HDFS-12437 > URL: https://issues.apache.org/jira/browse/HDFS-12437 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 3.0.0-beta1 >Reporter: Arpit Agarwal >Assignee: Andrew Wang > Attachments: HDFS-12437.001.patch > > > Fails consistently for me in trunk with the following call stack. > {code} > TestLeaseRecoveryStriped.testLeaseRecovery:152 failed testCase at i=0, > blockLengths=[5242880, 7340032, 5242880, 8388608, 7340032, 3145728, 9437184, > 10485760, 11534336] > java.io.IOException: Failed: the number of failed blocks = 4 > the number of > parity blocks = 3 > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamers(DFSStripedOutputStream.java:394) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.handleStreamerFailure(DFSStripedOutputStream.java:412) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.flushAllInternals(DFSStripedOutputStream.java:1264) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.checkStreamerFailures(DFSStripedOutputStream.java:629) > at > org.apache.hadoop.hdfs.DFSStripedOutputStream.writeChunk(DFSStripedOutputStream.java:565) > at > org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunks(FSOutputSummer.java:217) > at > org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:164) > at > org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:145) > at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:79) > at > org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:48) > at java.io.DataOutputStream.write(DataOutputStream.java:88) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.writePartialBlocks(TestLeaseRecoveryStriped.java:182) > at > 
org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.runTest(TestLeaseRecoveryStriped.java:158) > at > org.apache.hadoop.hdfs.TestLeaseRecoveryStriped.testLeaseRecovery(TestLeaseRecoveryStriped.java:147) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at > org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264) > at > org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153) > at > org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124) > at > 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200) > at > org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153) > at > org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103) > {code}
[jira] [Updated] (HDFS-11096) Support rolling upgrade between 2.x and 3.x
[ https://issues.apache.org/jira/browse/HDFS-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sean Mackrory updated HDFS-11096: - Attachment: HDFS-11096.003.patch So attaching a 3rd patch. YARN appears to be failing, so I need to debug this, but this should include a lot of changes based on the feedback here and I doubt the fix will be significant. Most notably: * Augmented documentation to clear up any confusion about how to run the tests and what their main components are. * Removed any use of deprecated commands, and switched to 3.1.0-SNAPSHOT * Removed set -x. [~aw] - how do you feel about set -v? And did you mean set +e? I feel like if it's okay for a command to sometimes fail, you can deal with that return code explicitly, otherwise I'd like that failure to bubble up. Am I missing something? * Switched to using hadoop-functions and added what I needed there. Not sure I understand exactly what hadoop_actual_ssh was supposed to be doing before, but it's not used elsewhere and is marked as private, so I hope my change to it is okay. I redid the join / split functions to make shellcheck much happier (and I'm also much happier with the outcome) In addition to fixing whatever is going wrong with YARN, I may still: * Have a couple of shellcheck issues to fix. Like $(dirname ${0}) seems tricky to quote correctly to shellcheck's satisfaction.
* Add parameter checking as suggested by Ray * Eliminate the need for a git checkout or installing expect with apt-get * Switch to using create-release - --native wasn't working because the Docker image doesn't have a high enough version of cmake > Support rolling upgrade between 2.x and 3.x > --- > > Key: HDFS-11096 > URL: https://issues.apache.org/jira/browse/HDFS-11096 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rolling upgrades >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Sean Mackrory >Priority: Blocker > Attachments: HDFS-11096.001.patch, HDFS-11096.002.patch, > HDFS-11096.003.patch > > > trunk has a minimum software version of 3.0.0-alpha1. This means we can't > rolling upgrade between branch-2 and trunk. > This is a showstopper for large deployments. Unless there are very compelling > reasons to break compatibility, let's restore the ability to rolling upgrade > to 3.x releases.
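The $(dirname ${0}) quoting issue mentioned above can be worked out along these lines. This is only a sketch, not the actual patch; the variable name is illustrative:

```shell
#!/usr/bin/env bash
# Sketch: quoting $(dirname ${0}) to shellcheck's satisfaction.
# Double-quoting both the command substitution and the expansion of ${0}
# avoids SC2046 (unquoted command substitution) and SC2086 (unquoted variable).
script_dir=$(cd -P -- "$(dirname -- "${0}")" && pwd -P)
echo "${script_dir}"
```

The -P flags resolve symlinks so the result is an absolute physical path even when the script is invoked through a relative one.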
[jira] [Commented] (HDFS-12420) Disable Namenode format for prod clusters when data already exists
[ https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168658#comment-16168658 ] Hadoop QA commented on HDFS-12420: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 
0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 97m 47s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}127m 8s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | | | hadoop.hdfs.server.namenode.TestNamenodeRetryCache | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.server.namenode.TestUpgradeDomainBlockPlacementPolicy | | | hadoop.hdfs.server.datanode.TestDirectoryScanner | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestLease | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12420 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887422/HDFS-12420.07.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 0d5f3a0f8e6f 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / b9b607d | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21172/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21172/testReport/ | | modules | C:
[jira] [Updated] (HDFS-12473) Change hosts JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Manoj Govindassamy updated HDFS-12473: -- Summary: Change hosts JSON file format (was: Change host JSON file format) > Change hosts JSON file format > - > > Key: HDFS-12473 > URL: https://issues.apache.org/jira/browse/HDFS-12473 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-12473.patch > > > The existing host JSON file format doesn't have a top-level token. > {noformat} > {"hostName": "host1"} > {"hostName": "host2", "upgradeDomain": "ud0"} > {"hostName": "host3", "adminState": "DECOMMISSIONED"} > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"} > {"hostName": "host5", "port": 8090} > {"hostName": "host6", "adminState": "IN_MAINTENANCE"} > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > {noformat} > Instead, to conform with the JSON standard it should be like > {noformat} > [ > {"hostName": "host1"}, > {"hostName": "host2", "upgradeDomain": "ud0"}, > {"hostName": "host3", "adminState": "DECOMMISSIONED"}, > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"}, > {"hostName": "host5", "port": 8090}, > {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > ] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12450) Fixing TestNamenodeHeartbeat and support non-HA
[ https://issues.apache.org/jira/browse/HDFS-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168656#comment-16168656 ] Íñigo Goiri commented on HDFS-12450: Thanks [~brahmareddy] and [~chris.douglas] for the review! Committed to HDFS-10467. > Fixing TestNamenodeHeartbeat and support non-HA > --- > > Key: HDFS-12450 > URL: https://issues.apache.org/jira/browse/HDFS-12450 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12450-HDFS-10467.000.patch, > HDFS-12450-HDFS-10467.001.patch, HDFS-12450-HDFS-10467.002.patch > > > The way the service RPC address is obtained changed and showed a problem with > {{TestNamenodeHeartbeat}} where the address wasn't properly set for the unit > tests. > In addition, the {{NamenodeHeartbeatService}} did not provide a good > experience for non-HA nameservices. This also covers a better logging for > those. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12450) Fixing TestNamenodeHeartbeat and support non-HA
[ https://issues.apache.org/jira/browse/HDFS-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12450: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) > Fixing TestNamenodeHeartbeat and support non-HA > --- > > Key: HDFS-12450 > URL: https://issues.apache.org/jira/browse/HDFS-12450 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12450-HDFS-10467.000.patch, > HDFS-12450-HDFS-10467.001.patch, HDFS-12450-HDFS-10467.002.patch > > > The way the service RPC address is obtained changed and showed a problem with > {{TestNamenodeHeartbeat}} where the address wasn't properly set for the unit > tests. > In addition, the {{NamenodeHeartbeatService}} did not provide a good > experience for non-HA nameservices. This also covers a better logging for > those. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12473) Change host JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168654#comment-16168654 ] Manoj Govindassamy commented on HDFS-12473: --- Thanks for making the hosts file a proper JSON format. Looks good overall, +1 with the following nits. 1. What happens when the hosts file has an improper JSON format, say the old format with the new code? Will it be backward compatible or throw JsonMappingException? The upgrade scenarios need to handle this. 2. {{CombinedHostsFileReader}} * If {{READER}} and {{JSON_FACTORY}} are no longer used, they can be removed * {{#readFile}} can now return a _null_ object when there are no hosts defined. Previously it always returned an empty HashSet. Hopefully you verified the callers; I see the test {{#testLoadExistingJsonFile}} assuming the return is always non-null. 3. {{CombinedHostsFileWriter}} * If {{MAPPER}} is no longer used, it can be removed. 4. {{TestCombinedHostsFileReader}} * line 64: CombinedHostsFileReader.readFile() can return null if the input hosts file has no entries. Maybe we should cover this case as well? > Change host JSON file format > > > Key: HDFS-12473 > URL: https://issues.apache.org/jira/browse/HDFS-12473 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-12473.patch > > > The existing host JSON file format doesn't have a top-level token. 
> {noformat} > {"hostName": "host1"} > {"hostName": "host2", "upgradeDomain": "ud0"} > {"hostName": "host3", "adminState": "DECOMMISSIONED"} > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"} > {"hostName": "host5", "port": 8090} > {"hostName": "host6", "adminState": "IN_MAINTENANCE"} > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > {noformat} > Instead, to conform with the JSON standard it should be like > {noformat} > [ > {"hostName": "host1"}, > {"hostName": "host2", "upgradeDomain": "ud0"}, > {"hostName": "host3", "adminState": "DECOMMISSIONED"}, > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"}, > {"hostName": "host5", "port": 8090}, > {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > ] > {noformat}
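To make the backward-compatibility question in nit 1 concrete, here is a small sketch (in Python rather than the Java of the actual patch, purely for illustration) of why the old one-object-per-line format is not parseable as a single JSON document, while the new top-level-array format is:

```python
import json

# New format: one top-level JSON array, parseable in a single call.
new_format = """
[
  {"hostName": "host1"},
  {"hostName": "host2", "upgradeDomain": "ud0"},
  {"hostName": "host5", "port": 8090}
]
"""
hosts = json.loads(new_format)
assert len(hosts) == 3 and hosts[1]["upgradeDomain"] == "ud0"

# Old format: one JSON object per line. The file as a whole is not a valid
# JSON document, so a strict whole-file parse fails -- which is exactly the
# upgrade scenario the reader would have to handle explicitly.
old_format = '{"hostName": "host1"}\n{"hostName": "host2", "upgradeDomain": "ud0"}'
try:
    json.loads(old_format)
except json.JSONDecodeError:
    print("old line-per-object format is not a single JSON document")
```

A Jackson-based reader would hit the analogous failure (e.g. a parse exception on the second object) unless it deliberately falls back to line-by-line parsing for old files.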
[jira] [Updated] (HDFS-12475) Ozone : add document for port sharing with WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-12475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-12475: Labels: ozoneMerge (was: OzonePostMerge tocheck) > Ozone : add document for port sharing with WebHDFS > -- > > Key: HDFS-12475 > URL: https://issues.apache.org/jira/browse/HDFS-12475 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Chen Liang > Labels: ozoneMerge > > Currently Ozone's REST API uses the port 9864, all commands mentioned in > OzoneCommandShell.md use the address localhost:9864. > This port was used by WebHDFS and is now shared by Ozone. The value is > controlled by the config key {{dfs.datanode.http.address}}. We should > document this information in {{OzoneCommandShell.md}}. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12467) Ozone: SCM: NodeManager should log when it comes out of chill mode
[ https://issues.apache.org/jira/browse/HDFS-12467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168646#comment-16168646 ] Chen Liang commented on HDFS-12467: --- I wonder why we can't just use the {{inManualChillMode}} variable. Do we really need the new {{chillMode}} flag? It also seems to change the chill mode semantics. For example, {{clearChillModeFlag()}}, {{forceExitChillMode}} and {{forceEnterChillMode}} don't affect the new boolean {{chillMode}} flag. So if someone calls {{getChillModeStatus}} after a {{forceEnterChillMode}}, it will return true with the current code but may still return false with the patch. If the goal is to log all chill mode status changes, then how about just adding a log statement at the places where {{inManualChillMode}}'s value is changed? > Ozone: SCM: NodeManager should log when it comes out of chill mode > -- > > Key: HDFS-12467 > URL: https://issues.apache.org/jira/browse/HDFS-12467 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nandakumar >Assignee: Nandakumar >Priority: Minor > Attachments: HDFS-12467-HDFS-7240.000.patch > > > {{NodeManager}} should add a log message when it comes out of chill mode.
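The suggestion in the last sentence, logging at the single point where the existing flag changes instead of introducing a second flag, can be sketched like this. The names are illustrative only, not the actual SCM NodeManager API, and the sketch is in Python rather than Java purely for brevity:

```python
import logging

logging.basicConfig(level=logging.INFO)

class ChillModeTracker:
    """Illustrative only: every mutation of the single chill mode flag goes
    through one setter, so each transition is logged exactly once and
    getChillModeStatus-style reads stay consistent with forced transitions."""

    def __init__(self):
        self._in_chill_mode = True  # starts in chill mode
        self._log = logging.getLogger("ChillModeTracker")

    def _set(self, value, reason):
        # Log only on actual transitions, not on redundant sets.
        if value != self._in_chill_mode:
            self._log.info("%s chill mode (%s)",
                           "Entering" if value else "Exiting", reason)
        self._in_chill_mode = value

    def force_exit_chill_mode(self):
        self._set(False, "forced by admin")

    def force_enter_chill_mode(self):
        self._set(True, "forced by admin")

    def get_chill_mode_status(self):
        return self._in_chill_mode

tracker = ChillModeTracker()
tracker.force_exit_chill_mode()
print(tracker.get_chill_mode_status())  # False
```

Because forced transitions go through the same setter as everything else, a status read after a forced enter cannot disagree with the logged state, which is the inconsistency the comment points out in the patch.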
[jira] [Commented] (HDFS-12450) Fixing TestNamenodeHeartbeat and support non-HA
[ https://issues.apache.org/jira/browse/HDFS-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168641#comment-16168641 ] Hadoop QA commented on HDFS-12450: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-10467 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 15s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} HDFS-10467 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}108m 24s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}142m 52s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestReadStripedFileWithMissingBlocks | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestReconstructStripedFile | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12450 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887416/HDFS-12450-HDFS-10467.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux f582f719bed7 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10467 / 679e31a | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21171/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21171/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/21171/console | | Powered by | Apache Yetus 0.6.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Fixing TestNamenodeHeartbeat and support non-HA > --- > >
[jira] [Updated] (HDFS-12470) DiskBalancer: Some tests create plan files under system directory
[ https://issues.apache.org/jira/browse/HDFS-12470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hanisha Koneru updated HDFS-12470: -- Attachment: HDFS-12470.001.patch Thanks [~ajisakaa] for reporting this. I found that only TestDiskBalancerCommand#testPlanJsonNode() creates the plan files under system directory. Attached a patch to fix this. > DiskBalancer: Some tests create plan files under system directory > - > > Key: HDFS-12470 > URL: https://issues.apache.org/jira/browse/HDFS-12470 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: diskbalancer, test >Reporter: Akira Ajisaka >Assignee: Hanisha Koneru > Fix For: 2.9.0 > > Attachments: HDFS-12470.001.patch > > > When I ran HDFS tests, plan files are created under system directory. > {noformat} > $ ls -R hadoop-hdfs-project/hadoop-hdfs/system > diskbalancer > hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer: > 2017-Sep-15-19-37-34 > hadoop-hdfs-project/hadoop-hdfs/system/diskbalancer/2017-Sep-15-19-37-34: > a87654a9-54c7-4693-8dd9-c9c7021dc340.before.json > a87654a9-54c7-4693-8dd9-c9c7021dc340.plan.json > {noformat} > All the files created by tests should be in target directory. That way the > files are ignored by git. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router
[ https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168620#comment-16168620 ] Manoj Govindassamy commented on HDFS-12381: --- Overall LGTM. +1. bq. For example, if we want a federated address called `/data/wl1`, it is recommended to have that same name in the destination namespace. * 'Federated address' is not consistent with the wording 'federated folder' used in the previous line, and the same goes for 'destination namespace'. Either we can call it an address by including the scheme, authority, etc., or maybe just call it a federated address? Your thoughts? * It would be really helpful if we could touch upon the non-existing mount points as well. > [Documentation] Adding configuration keys for the Router > > > Key: HDFS-12381 > URL: https://issues.apache.org/jira/browse/HDFS-12381 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: HDFS-10467 > > Attachments: HDFS-12381-HDFS-10467.000.patch, > HDFS-12381-HDFS-10467.001.patch > > > Adding configuration options in tabular format.
[jira] [Created] (HDFS-12475) Ozone : add document for port sharing with WebHDFS
Chen Liang created HDFS-12475: - Summary: Ozone : add document for port sharing with WebHDFS Key: HDFS-12475 URL: https://issues.apache.org/jira/browse/HDFS-12475 Project: Hadoop HDFS Issue Type: Sub-task Components: ozone Reporter: Chen Liang Currently Ozone's REST API uses port 9864; all commands mentioned in OzoneCommandShell.md use the address localhost:9864. This port was used by WebHDFS and is now shared by Ozone. The value is controlled by the config key {{dfs.datanode.http.address}}. We should document this information in {{OzoneCommandShell.md}}.
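For the documentation itself, the shared port could be shown as an hdfs-site.xml fragment along these lines. The key is the real one named above; 9864 is the Hadoop 3 default DataNode HTTP port, but the value shown is only an example:

```xml
<!-- hdfs-site.xml: the DataNode HTTP address. WebHDFS listens here, and
     Ozone's REST API now shares the same port. -->
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:9864</value>
</property>
```

Changing this value would change the port that both WebHDFS and the Ozone REST commands in OzoneCommandShell.md must target.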
[jira] [Updated] (HDFS-12375) Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh.
[ https://issues.apache.org/jira/browse/HDFS-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12375: -- Attachment: (was: hdfs-site.xml) > Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh. > --- > > Key: HDFS-12375 > URL: https://issues.apache.org/jira/browse/HDFS-12375 > Project: Hadoop HDFS > Issue Type: Bug > Components: federation, scripts >Affects Versions: 3.0.0-beta1 >Reporter: Wenxin He >Assignee: Bharat Viswanadham > > When 'dfs.namenode.checkpoint.edits.dir' suffixed with the corresponding > NameServiceID, we can not start/stop journalnodes using > start-dfs.sh/stop-dfs.sh. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12375) Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh.
[ https://issues.apache.org/jira/browse/HDFS-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12375: -- Attachment: hdfs-site.xml > Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh. > --- > > Key: HDFS-12375 > URL: https://issues.apache.org/jira/browse/HDFS-12375 > Project: Hadoop HDFS > Issue Type: Bug > Components: federation, scripts >Affects Versions: 3.0.0-beta1 >Reporter: Wenxin He >Assignee: Bharat Viswanadham > Attachments: hdfs-site.xml > > > When 'dfs.namenode.checkpoint.edits.dir' suffixed with the corresponding > NameServiceID, we can not start/stop journalnodes using > start-dfs.sh/stop-dfs.sh. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDFS-12375) Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh.
[ https://issues.apache.org/jira/browse/HDFS-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-12375 started by Bharat Viswanadham. - > Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh. > --- > > Key: HDFS-12375 > URL: https://issues.apache.org/jira/browse/HDFS-12375 > Project: Hadoop HDFS > Issue Type: Bug > Components: federation, scripts >Affects Versions: 3.0.0-beta1 >Reporter: Wenxin He >Assignee: Bharat Viswanadham > Attachments: hdfs-site.xml > > > When 'dfs.namenode.checkpoint.edits.dir' suffixed with the corresponding > NameServiceID, we can not start/stop journalnodes using > start-dfs.sh/stop-dfs.sh. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12375) Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh.
[ https://issues.apache.org/jira/browse/HDFS-12375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168595#comment-16168595 ] Bharat Viswanadham commented on HDFS-12375: --- Hi [~vincent he], I have tried to set up an HA cluster and I am able to successfully start/stop journal nodes using start-dfs.sh/stop-dfs.sh. Could you please provide more info on this, and the steps to reproduce the issue? > Fail to start/stop journalnodes using start-dfs.sh/stop-dfs.sh. > --- > > Key: HDFS-12375 > URL: https://issues.apache.org/jira/browse/HDFS-12375 > Project: Hadoop HDFS > Issue Type: Bug > Components: federation, scripts >Affects Versions: 3.0.0-beta1 >Reporter: Wenxin He >Assignee: Bharat Viswanadham > > When 'dfs.namenode.checkpoint.edits.dir' suffixed with the corresponding > NameServiceID, we can not start/stop journalnodes using > start-dfs.sh/stop-dfs.sh.
[jira] [Commented] (HDFS-12273) Federation UI
[ https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168594#comment-16168594 ] Íñigo Goiri commented on HDFS-12273: Thanks [~raviprak]; actually, one thing that is missing is the WebHDFS interface. Do you guys think it should go in HDFS-10467? In any case, that would require the web app from this JIRA. > Federation UI > - > > Key: HDFS-12273 > URL: https://issues.apache.org/jira/browse/HDFS-12273 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: federationUI-1.png, federationUI-2.png, > federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, > HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, > HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch > > > Add the Web UI to the Router to expose the status of the federated cluster. > It includes the federation metrics. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12273) Federation UI
[ https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168593#comment-16168593 ] Chris Douglas commented on HDFS-12273: -- bq. Regarding the note in the doc, I could add it here or HDFS-12381 as a general comment on security and not only about the Web UI. Adding it to HDFS-12381 is simplest. bq. XSS isn't quite related to HDFS-12284, so if at all you want to postpone the analysis, would it make sense to file a different JIRA? Sure, fair enough. Even if "harden the federation UI" is closed without requiring any code, it'd be useful for tracking. If we defer hardening the UI to after the merge, the current patch seems fine to me. > Federation UI > - > > Key: HDFS-12273 > URL: https://issues.apache.org/jira/browse/HDFS-12273 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: federationUI-1.png, federationUI-2.png, > federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, > HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, > HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch > > > Add the Web UI to the Router to expose the status of the federated cluster. > It includes the federation metrics. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
[ https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168575#comment-16168575 ] Chen Liang commented on HDFS-12454: --- Actually, my previous comment about the 9874 port thing is not quite right. The commands in the current doc using port 9864 are actually correct; changing it to any other value won't affect this. So it seems the value of {{OZONE_KSM_HTTP_BIND_PORT_DEFAULT}} is not used the way I thought; instead, it is the RPC port the client uses to talk to KSM. But then do we really need to have this ksm address setting in the sample config? Its value is used only internally, and Ozone works without explicitly setting it, just as the HDFS docs do not seem to include the namenode RPC port used by the client. [~cheersyang] > Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work > -- > > Key: HDFS-12454 > URL: https://issues.apache.org/jira/browse/HDFS-12454 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Blocker > Labels: ozoneMerge > Attachments: HDFS-12454-HDFS-7240.001.patch > > > In OzoneGettingStarted.md there is a sample ozone-site.xml file. But there > are a few issues with it. > 1. > {code} > > ozone.scm.block.client.address > scm.hadoop.apache.org > > > ozone.ksm.address > ksm.hadoop.apache.org > > {code} > The value should be an address instead. > 2. > {{datanode.ObjectStoreHandler.(ObjectStoreHandler.java:103)}} requires > {{ozone.scm.client.address}} to be set, which is missing from this sample > file. Missing this config seems to cause a failure when starting the datanode. > 3. > {code} > > ozone.scm.names > scm.hadoop.apache.org > > {code} > This value did not make much sense to me; I found the comment in > {{ScmConfigKeys}} that says > {code} > // ozone.scm.names key is a set of DNS | DNS:PORT | IP Address | IP:PORT. > // Written as a comma separated string. e.g. scm1, scm2:8020, 7.7.7.7: > {code} > So maybe we should write something like scm1 as the value here. > 4. I'm not entirely sure about this, but > [here|https://wiki.apache.org/hadoop/Ozone#Configuration] it says > {code} > > ozone.handler.type > local > > {code} > is also part of the minimum settings; do we need to add this [~anu]? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
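Pulling the points above together, a corrected fragment of the sample ozone-site.xml might look like the sketch below. The host name is a placeholder, and this is not a verified minimal configuration; per the {{ScmConfigKeys}} comment quoted above, {{ozone.scm.names}} accepts a plain DNS name, and {{ozone.handler.type}} is taken from the wiki page cited in point 4.

```xml
<!-- Sketch only: "scm1.example.com" is a placeholder host name. -->
<property>
  <name>ozone.scm.names</name>
  <value>scm1.example.com</value>
</property>
<property>
  <name>ozone.handler.type</name>
  <value>local</value>
</property>
```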
[jira] [Comment Edited] (HDFS-12454) Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work
[ https://issues.apache.org/jira/browse/HDFS-12454?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168575#comment-16168575 ] Chen Liang edited comment on HDFS-12454 at 9/15/17 9:49 PM: Actually, my previous comment about the 9874 port thing is not quite right. The commands in the current doc using port 9864 are actually correct. So it seems the value of {{OZONE_KSM_HTTP_BIND_PORT_DEFAULT}} is not used the way I thought; instead, it is the RPC port the client uses to talk to KSM. But then do we really need to have this ksm address setting in the sample config? Its value is used only internally, and Ozone works without explicitly setting it, just as the HDFS docs do not seem to include the namenode RPC port used by the client. [~cheersyang] was (Author: vagarychen): Actually, my previous comment about the 9874 port thing is not quite right. The commands in the current doc using port 9864 are actually correct; changing it to any other value won't affect this. So it seems the value of {{OZONE_KSM_HTTP_BIND_PORT_DEFAULT}} is not used the way I thought; instead, it is the RPC port the client uses to talk to KSM. But then do we really need to have this ksm address setting in the sample config? Its value is used only internally, and Ozone works without explicitly setting it, just as the HDFS docs do not seem to include the namenode RPC port used by the client. [~cheersyang] > Ozone : the sample ozone-site.xml in OzoneGettingStarted does not work > -- > > Key: HDFS-12454 > URL: https://issues.apache.org/jira/browse/HDFS-12454 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Chen Liang >Assignee: Chen Liang >Priority: Blocker > Labels: ozoneMerge > Attachments: HDFS-12454-HDFS-7240.001.patch > > > In OzoneGettingStarted.md there is a sample ozone-site.xml file. But there > are a few issues with it. > 1. > {code} > > ozone.scm.block.client.address > scm.hadoop.apache.org > > > ozone.ksm.address > ksm.hadoop.apache.org > > {code} > The value should be an address instead. > 2. > {{datanode.ObjectStoreHandler.(ObjectStoreHandler.java:103)}} requires > {{ozone.scm.client.address}} to be set, which is missing from this sample > file. Missing this config seems to cause a failure when starting the datanode. > 3. > {code} > > ozone.scm.names > scm.hadoop.apache.org > > {code} > This value did not make much sense to me; I found the comment in > {{ScmConfigKeys}} that says > {code} > // ozone.scm.names key is a set of DNS | DNS:PORT | IP Address | IP:PORT. > // Written as a comma separated string. e.g. scm1, scm2:8020, 7.7.7.7: > {code} > So maybe we should write something like scm1 as the value here. > 4. I'm not entirely sure about this, but > [here|https://wiki.apache.org/hadoop/Ozone#Configuration] it says > {code} > > ozone.handler.type > local > > {code} > is also part of the minimum settings; do we need to add this [~anu]? -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168573#comment-16168573 ] Arpit Agarwal commented on HDFS-12472: -- +1 pending Jenkins. > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12472.00.patch, HDFS-12472.01.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168570#comment-16168570 ] Bharat Viswanadham commented on HDFS-12472: --- Hi [~arpitagarwal] Thanks for reviewing. Updated the patch to remove getGlobalTimeout. > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12472.00.patch, HDFS-12472.01.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12472: -- Attachment: HDFS-12472.01.patch > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12472.00.patch, HDFS-12472.01.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168566#comment-16168566 ] Arpit Agarwal commented on HDFS-12472: -- Hi [~bharatviswa], minor suggestion. We can get rid of getGlobalTimeout. lgtm otherwise. > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12472.00.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12474) Ozone: Adding key count and container usage to container report
Xiaoyu Yao created HDFS-12474: - Summary: Ozone: Adding key count and container usage to container report Key: HDFS-12474 URL: https://issues.apache.org/jira/browse/HDFS-12474 Project: Hadoop HDFS Issue Type: Sub-task Affects Versions: HDFS-7240 Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao Currently, the container report only contains the # of reports sent to SCM. We will need to provide the key count and the usage of each individual container to update the SCM container state maintained by ContainerStateManager. This has a dependency on HDFS-12387. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12273) Federation UI
[ https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168557#comment-16168557 ] Ravi Prakash commented on HDFS-12273: - Sorry, I've since swapped out all front-end and security knowledge from my brain. But for the new UI, I had to add https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/web/webhdfs/WebHdfsHandler.java#L315 . My guide was the things done for {{onOpen}} / {{onAppend}} etc. XSS isn't quite related to HDFS-12284, so if at all you want to postpone the analysis, would it make sense to file a different JIRA? > Federation UI > - > > Key: HDFS-12273 > URL: https://issues.apache.org/jira/browse/HDFS-12273 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: federationUI-1.png, federationUI-2.png, > federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, > HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, > HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch > > > Add the Web UI to the Router to expose the status of the federated cluster. > It includes the federation metrics. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout
[ https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168553#comment-16168553 ] Hudson commented on HDFS-12323: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12889 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/12889/]) HDFS-12323. NameNode terminates after full GC thinking QJM unresponsive (shv: rev 90894c7262df0243e795b675f3ac9f7b322ccd11) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/client/TestQuorumCall.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/client/QuorumCall.java > NameNode terminates after full GC thinking QJM unresponsive if full GC is > much longer than timeout > -- > > Key: HDFS-12323 > URL: https://issues.apache.org/jira/browse/HDFS-12323 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, qjm >Affects Versions: 2.7.4 >Reporter: Erik Krogen >Assignee: Erik Krogen > Fix For: 2.9.0, 2.8.3, 2.7.5, 3.1.0 > > Attachments: HDFS-12323.000.patch, HDFS-12323.001.patch, > HDFS-12323.002.patch, HDFS-12323.003.patch, HDFS-12323.004.patch > > > HDFS-10733 attempted to fix the issue where the Namenode process would > terminate itself if it had a GC pause which lasted longer than the QJM > timeout, since it would think that the QJM had taken too long to respond. > However, it only bumps up the timeout expiration by one timeout length, so if > the GC pause was e.g. 2x the length of the timeout, a TimeoutException will > be thrown and the NN will still terminate itself. > Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we > have also seen this issue on a real cluster even after HDFS-10733 is applied. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
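The fix described in this issue can be sketched as follows. This is an illustrative reconstruction, not the actual QuorumCall patch; the class and method names are invented. The idea is that after a long GC pause the wait deadline should be extended by as many whole timeout intervals as were missed, instead of by the single interval that HDFS-10733 added.

```java
// Sketch (not the committed Hadoop code): extend a wait deadline past a GC
// pause of arbitrary length, so a pause of 2x (or more) the timeout no
// longer leaves the deadline in the past.
public class QuorumWaitSketch {
  /** Returns a deadline strictly in the future of 'now' (all times in ms). */
  static long extendDeadline(long deadline, long now, long timeoutMs) {
    if (now < deadline) {
      return deadline; // no pause detected, keep the original deadline
    }
    // number of whole timeout intervals that elapsed past the old deadline
    long missed = (now - deadline) / timeoutMs + 1;
    return deadline + missed * timeoutMs;
  }

  public static void main(String[] args) {
    // A pause of 2.5x the timeout: a single-interval bump (HDFS-10733)
    // would yield 2000, still in the past; this yields a future deadline.
    System.out.println(extendDeadline(1000, 3500, 1000)); // prints 4000
  }
}
```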
[jira] [Comment Edited] (HDFS-12420) Disable Namenode format for prod clusters when data already exists
[ https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168500#comment-16168500 ] Ajay Kumar edited comment on HDFS-12420 at 9/15/17 9:30 PM: fixed jenkins issues. was (Author: ajayydv): fixing jenkins issues. > Disable Namenode format for prod clusters when data already exists > -- > > Key: HDFS-12420 > URL: https://issues.apache.org/jira/browse/HDFS-12420 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, > HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, > HDFS-12420.06.patch, HDFS-12420.07.patch > > > Disable NameNode format to avoid accidental formatting of Namenode in > production cluster. If someone really wants to delete the complete fsImage, > they can first delete the metadata dir and then run {code} hdfs namenode > -format{code} manually. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12472: -- Attachment: HDFS-12472.00.patch > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12472.00.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12472: -- Attachment: (was: HADOOP-12472.patch) > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12472.00.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12472: -- Attachment: HADOOP-12472.00.patch > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12472.00.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12472: -- Attachment: (was: HADOOP-12472.00.patch) > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HDFS-12472.00.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168546#comment-16168546 ] Bharat Viswanadham commented on HDFS-12472: --- [~eddyxu] Could you help in reviewing the changes. > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HADOOP-12472.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout
[ https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168545#comment-16168545 ] Andrew Wang commented on HDFS-12323: Given that this is going into maintenance releases, any reason not to put this into branch-3.0 for beta1 also? > NameNode terminates after full GC thinking QJM unresponsive if full GC is > much longer than timeout > -- > > Key: HDFS-12323 > URL: https://issues.apache.org/jira/browse/HDFS-12323 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, qjm >Affects Versions: 2.7.4 >Reporter: Erik Krogen >Assignee: Erik Krogen > Fix For: 2.9.0, 2.8.3, 2.7.5, 3.1.0 > > Attachments: HDFS-12323.000.patch, HDFS-12323.001.patch, > HDFS-12323.002.patch, HDFS-12323.003.patch, HDFS-12323.004.patch > > > HDFS-10733 attempted to fix the issue where the Namenode process would > terminate itself if it had a GC pause which lasted longer than the QJM > timeout, since it would think that the QJM had taken too long to respond. > However, it only bumps up the timeout expiration by one timeout length, so if > the GC pause was e.g. 2x the length of the timeout, a TimeoutException will > be thrown and the NN will still terminate itself. > Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we > have also seen this issue on a real cluster even after HDFS-10733 is applied. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout
[ https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168543#comment-16168543 ] Erik Krogen commented on HDFS-12323: Thank you [~shv]! Should this also go in branch-3? > NameNode terminates after full GC thinking QJM unresponsive if full GC is > much longer than timeout > -- > > Key: HDFS-12323 > URL: https://issues.apache.org/jira/browse/HDFS-12323 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, qjm >Affects Versions: 2.7.4 >Reporter: Erik Krogen >Assignee: Erik Krogen > Fix For: 2.9.0, 2.8.3, 2.7.5, 3.1.0 > > Attachments: HDFS-12323.000.patch, HDFS-12323.001.patch, > HDFS-12323.002.patch, HDFS-12323.003.patch, HDFS-12323.004.patch > > > HDFS-10733 attempted to fix the issue where the Namenode process would > terminate itself if it had a GC pause which lasted longer than the QJM > timeout, since it would think that the QJM had taken too long to respond. > However, it only bumps up the timeout expiration by one timeout length, so if > the GC pause was e.g. 2x the length of the timeout, a TimeoutException will > be thrown and the NN will still terminate itself. > Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we > have also seen this issue on a real cluster even after HDFS-10733 is applied. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12472: -- Status: Patch Available (was: In Progress) > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HADOOP-12472.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-12472: -- Attachment: HADOOP-12472.patch > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > Attachments: HADOOP-12472.patch > > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDFS-12472 started by Bharat Viswanadham. - > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12323) NameNode terminates after full GC thinking QJM unresponsive if full GC is much longer than timeout
[ https://issues.apache.org/jira/browse/HDFS-12323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-12323: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.1.0 2.7.5 2.8.3 2.9.0 Status: Resolved (was: Patch Available) I just committed this to trunk and branches 2 through 2.7. Thank you [~xkrogen]. > NameNode terminates after full GC thinking QJM unresponsive if full GC is > much longer than timeout > -- > > Key: HDFS-12323 > URL: https://issues.apache.org/jira/browse/HDFS-12323 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode, qjm >Affects Versions: 2.7.4 >Reporter: Erik Krogen >Assignee: Erik Krogen > Fix For: 2.9.0, 2.8.3, 2.7.5, 3.1.0 > > Attachments: HDFS-12323.000.patch, HDFS-12323.001.patch, > HDFS-12323.002.patch, HDFS-12323.003.patch, HDFS-12323.004.patch > > > HDFS-10733 attempted to fix the issue where the Namenode process would > terminate itself if it had a GC pause which lasted longer than the QJM > timeout, since it would think that the QJM had taken too long to respond. > However, it only bumps up the timeout expiration by one timeout length, so if > the GC pause was e.g. 2x the length of the timeout, a TimeoutException will > be thrown and the NN will still terminate itself. > Thanks to [~yangjiandan] for noting this issue as a comment on HDFS-10733; we > have also seen this issue on a real cluster even after HDFS-10733 is applied. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11873) Ozone: Object store handler cannot serve multiple requests from single http client
[ https://issues.apache.org/jira/browse/HDFS-11873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-11873: -- Attachment: HDFS-11873-HDFS-7240.002.patch Attach a patch that fixed the checkstyle issue in the unit tests. > Ozone: Object store handler cannot serve multiple requests from single http > client > -- > > Key: HDFS-11873 > URL: https://issues.apache.org/jira/browse/HDFS-11873 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Weiwei Yang >Assignee: Xiaoyu Yao >Priority: Critical > Labels: ozoneMerge > Attachments: HDFS-11873-HDFS-7240.001.patch, > HDFS-11873-HDFS-7240.002.patch, HDFS-11873-HDFS-7240.testcase.patch > > > This issue was found when I worked on HDFS-11846. Instead of creating a new > http client instance per request, I tried to reuse {{CloseableHttpClient}} in > {{OzoneClient}} class in a {{PoolingHttpClientConnectionManager}}. However, > every second request from the http client hangs, which could not get > dispatched to {{ObjectStoreJerseyContainer}}. There seems to be something > wrong in the netty pipeline, this jira aims to 1) fix the problem in the > server side 2) use the pool for client http clients to reduce the resource > overhead. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12473) Change host JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168521#comment-16168521 ] Ming Ma edited comment on HDFS-12473 at 9/15/17 9:09 PM: - Here is the draft patch. cc [~eddyxu] and [~manojg]. was (Author: mingma): Here is the draft patch. > Change host JSON file format > > > Key: HDFS-12473 > URL: https://issues.apache.org/jira/browse/HDFS-12473 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-12473.patch > > > The existing host JSON file format doesn't have a top-level token. > {noformat} > {"hostName": "host1"} > {"hostName": "host2", "upgradeDomain": "ud0"} > {"hostName": "host3", "adminState": "DECOMMISSIONED"} > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"} > {"hostName": "host5", "port": 8090} > {"hostName": "host6", "adminState": "IN_MAINTENANCE"} > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > {noformat} > Instead, to conform with the JSON standard it should be like > {noformat} > [ > {"hostName": "host1"}, > {"hostName": "host2", "upgradeDomain": "ud0"}, > {"hostName": "host3", "adminState": "DECOMMISSIONED"}, > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"}, > {"hostName": "host5", "port": 8090}, > {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > ] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
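As a quick illustration of the format change proposed in this issue (not part of the patch; the helper name is invented), a few lines of Java can wrap the legacy one-object-per-line entries into the standard-conforming JSON array form:

```java
// Hypothetical migration helper for HDFS-12473: joins legacy per-line JSON
// objects into a single JSON array, matching the proposed file format.
import java.util.List;

public class HostsFileRewriteSketch {
  static String toJsonArray(List<String> legacyLines) {
    StringBuilder sb = new StringBuilder("[\n");
    for (int i = 0; i < legacyLines.size(); i++) {
      sb.append("  ").append(legacyLines.get(i).trim());
      if (i < legacyLines.size() - 1) {
        sb.append(','); // commas between entries, none after the last
      }
      sb.append('\n');
    }
    return sb.append(']').toString();
  }

  public static void main(String[] args) {
    List<String> legacy = List.of(
        "{\"hostName\": \"host1\"}",
        "{\"hostName\": \"host2\", \"upgradeDomain\": \"ud0\"}");
    System.out.println(toJsonArray(legacy));
  }
}
```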
[jira] [Commented] (HDFS-12447) Refactor addErasureCodingPolicy
[ https://issues.apache.org/jira/browse/HDFS-12447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168524#comment-16168524 ] Hadoop QA commented on HDFS-12447: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 6m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 5 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 13s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 84m 55s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}131m 1s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.namenode.TestNamenodeRetryCache | | | hadoop.hdfs.TestAppendDifferentChecksum | | | hadoop.hdfs.TestFileAppend2 | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | | Timed out junit tests | org.apache.hadoop.hdfs.TestWriteReadStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12447 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887329/HDFS-12447.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux bb0aa7ac6f5b 3.13.0-123-generic #172-Ubuntu SMP Mon Jun 26 18:04:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3a8d57a | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 |
[jira] [Updated] (HDFS-12473) Change host JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HDFS-12473: --- Issue Type: Sub-task (was: Bug) Parent: HDFS-7877 > Change host JSON file format > > > Key: HDFS-12473 > URL: https://issues.apache.org/jira/browse/HDFS-12473 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-12473.patch > > > The existing host JSON file format doesn't have a top-level token. > {noformat} > {"hostName": "host1"} > {"hostName": "host2", "upgradeDomain": "ud0"} > {"hostName": "host3", "adminState": "DECOMMISSIONED"} > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"} > {"hostName": "host5", "port": 8090} > {"hostName": "host6", "adminState": "IN_MAINTENANCE"} > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > {noformat} > Instead, to conform with the JSON standard it should be like > {noformat} > [ > {"hostName": "host1"}, > {"hostName": "host2", "upgradeDomain": "ud0"}, > {"hostName": "host3", "adminState": "DECOMMISSIONED"}, > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"}, > {"hostName": "host5", "port": 8090}, > {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > ] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12473) Change host JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HDFS-12473: --- Assignee: Ming Ma Status: Patch Available (was: Open) > Change host JSON file format > > > Key: HDFS-12473 > URL: https://issues.apache.org/jira/browse/HDFS-12473 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ming Ma >Assignee: Ming Ma > Attachments: HDFS-12473.patch > > > The existing host JSON file format doesn't have a top-level token. > {noformat} > {"hostName": "host1"} > {"hostName": "host2", "upgradeDomain": "ud0"} > {"hostName": "host3", "adminState": "DECOMMISSIONED"} > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"} > {"hostName": "host5", "port": 8090} > {"hostName": "host6", "adminState": "IN_MAINTENANCE"} > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > {noformat} > Instead, to conform with the JSON standard it should be like > {noformat} > [ > {"hostName": "host1"}, > {"hostName": "host2", "upgradeDomain": "ud0"}, > {"hostName": "host3", "adminState": "DECOMMISSIONED"}, > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"}, > {"hostName": "host5", "port": 8090}, > {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > ] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12473) Change host JSON file format
[ https://issues.apache.org/jira/browse/HDFS-12473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ming Ma updated HDFS-12473: --- Attachment: HDFS-12473.patch Here is the draft patch. > Change host JSON file format > > > Key: HDFS-12473 > URL: https://issues.apache.org/jira/browse/HDFS-12473 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ming Ma > Attachments: HDFS-12473.patch > > > The existing host JSON file format doesn't have a top-level token. > {noformat} > {"hostName": "host1"} > {"hostName": "host2", "upgradeDomain": "ud0"} > {"hostName": "host3", "adminState": "DECOMMISSIONED"} > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"} > {"hostName": "host5", "port": 8090} > {"hostName": "host6", "adminState": "IN_MAINTENANCE"} > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > {noformat} > Instead, to conform with the JSON standard it should be like > {noformat} > [ > {"hostName": "host1"}, > {"hostName": "host2", "upgradeDomain": "ud0"}, > {"hostName": "host3", "adminState": "DECOMMISSIONED"}, > {"hostName": "host4", "upgradeDomain": "ud2", "adminState": > "DECOMMISSIONED"}, > {"hostName": "host5", "port": 8090}, > {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, > {"hostName": "host7", "adminState": "IN_MAINTENANCE", > "maintenanceExpireTimeInMS": "112233"} > ] > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-12473) Change host JSON file format
Ming Ma created HDFS-12473: -- Summary: Change host JSON file format Key: HDFS-12473 URL: https://issues.apache.org/jira/browse/HDFS-12473 Project: Hadoop HDFS Issue Type: Bug Reporter: Ming Ma The existing host JSON file format doesn't have a top-level token. {noformat} {"hostName": "host1"} {"hostName": "host2", "upgradeDomain": "ud0"} {"hostName": "host3", "adminState": "DECOMMISSIONED"} {"hostName": "host4", "upgradeDomain": "ud2", "adminState": "DECOMMISSIONED"} {"hostName": "host5", "port": 8090} {"hostName": "host6", "adminState": "IN_MAINTENANCE"} {"hostName": "host7", "adminState": "IN_MAINTENANCE", "maintenanceExpireTimeInMS": "112233"} {noformat} Instead, to conform with the JSON standard it should be like {noformat} [ {"hostName": "host1"}, {"hostName": "host2", "upgradeDomain": "ud0"}, {"hostName": "host3", "adminState": "DECOMMISSIONED"}, {"hostName": "host4", "upgradeDomain": "ud2", "adminState": "DECOMMISSIONED"}, {"hostName": "host5", "port": 8090}, {"hostName": "host6", "adminState": "IN_MAINTENANCE"}, {"hostName": "host7", "adminState": "IN_MAINTENANCE", "maintenanceExpireTimeInMS": "112233"} ] {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
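The change proposed above is mechanical: parse each line as its own JSON object, then re-emit the objects as one standard JSON array. A small Python sketch of that conversion (illustrative only; the actual Hadoop host-file readers and writers are Java, and the function name is invented):

```python
import json

def convert_hosts_file(legacy_text):
    # Legacy format: one JSON object per line, which is not valid JSON
    # as a whole document. Parse each non-empty line individually and
    # wrap the results in a single array (the proposed format).
    entries = [json.loads(line)
               for line in legacy_text.splitlines() if line.strip()]
    return json.dumps(entries, indent=2)

legacy = '\n'.join([
    '{"hostName": "host1"}',
    '{"hostName": "host2", "upgradeDomain": "ud0"}',
    '{"hostName": "host6", "adminState": "IN_MAINTENANCE"}',
])
converted = convert_hosts_file(legacy)
# The result now parses with any standards-compliant JSON parser.
print(json.loads(converted)[1]["upgradeDomain"])  # ud0
```

The key difference is that the new format is a single valid JSON document, so standard parsers can load the whole file in one call instead of line-by-line.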
[jira] [Comment Edited] (HDFS-11612) Ozone: Cleanup Checkstyle issues
[ https://issues.apache.org/jira/browse/HDFS-11612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168503#comment-16168503 ] Anu Engineer edited comment on HDFS-11612 at 9/15/17 8:54 PM: -- [~shashikant] Thank you for the contribution. I have committed this to the feature branch. was (Author: anu): [~shashikant] Thank for the contribution. I have committed this to the feature branch. > Ozone: Cleanup Checkstyle issues > > > Key: HDFS-11612 > URL: https://issues.apache.org/jira/browse/HDFS-11612 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Shashikant Banerjee >Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-11612-HDFS-7240.001.patch > > > There is a bunch of check style issues under Ozone tree. We have to clean > them up before we call for a merge of this tree. This jira tracks that work > item. It would be a noisy but mostly content less change. Hence it is easier > to track that in separate patch -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11612) Ozone: Cleanup Checkstyle issues
[ https://issues.apache.org/jira/browse/HDFS-11612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-11612: Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) [~shashikant] Thank you for the contribution. I have committed this to the feature branch. > Ozone: Cleanup Checkstyle issues > > > Key: HDFS-11612 > URL: https://issues.apache.org/jira/browse/HDFS-11612 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Shashikant Banerjee >Priority: Blocker > Labels: ozoneMerge > Fix For: HDFS-7240 > > Attachments: HDFS-11612-HDFS-7240.001.patch > > > There is a bunch of check style issues under Ozone tree. We have to clean > them up before we call for a merge of this tree. This jira tracks that work > item. It would be a noisy but mostly content less change. Hence it is easier > to track that in separate patch -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12420) Disable Namenode format for prod clusters when data already exists
[ https://issues.apache.org/jira/browse/HDFS-12420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-12420: -- Attachment: HDFS-12420.07.patch fixing jenkins issues. > Disable Namenode format for prod clusters when data already exists > -- > > Key: HDFS-12420 > URL: https://issues.apache.org/jira/browse/HDFS-12420 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar > Attachments: HDFS-12420.01.patch, HDFS-12420.02.patch, > HDFS-12420.03.patch, HDFS-12420.04.patch, HDFS-12420.05.patch, > HDFS-12420.06.patch, HDFS-12420.07.patch > > > Disable NameNode format to avoid accidental formatting of Namenode in > production cluster. If someone really wants to delete the complete fsImage, > they can first delete the metadata dir and then run {code} hdfs namenode > -format{code} manually. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
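The guard HDFS-12420 discusses amounts to refusing a format when the metadata directory already holds data, forcing the operator to delete it explicitly first. A hypothetical Python sketch of that check (not the actual NameNode code; the function name and layout are invented for illustration):

```python
import os
import tempfile

def safe_to_format(metadata_dir):
    # Formatting is only allowed when the metadata directory is absent
    # or empty; otherwise the operator must delete it manually before
    # rerunning "hdfs namenode -format".
    if not os.path.isdir(metadata_dir):
        return True
    return len(os.listdir(metadata_dir)) == 0

meta = tempfile.mkdtemp()
print(safe_to_format(meta))  # True: empty directory, format may proceed
open(os.path.join(meta, "fsimage"), "w").close()
print(safe_to_format(meta))  # False: existing data blocks the format
```

The point of the two-step workflow is that destroying an fsImage becomes a deliberate action (delete, then format) rather than a single accidental command on a production cluster.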
[jira] [Commented] (HDFS-12273) Federation UI
[ https://issues.apache.org/jira/browse/HDFS-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168497#comment-16168497 ] Íñigo Goiri commented on HDFS-12273: Thanks [~chris.douglas] for the check. I'm OK with handling security issues related to the Web UI in HDFS-12284 or maybe a follow-up JIRA. [~zhengxg3], [~zhz], does it make sense to do it in HDFS-12284? Regarding the note in the doc, I could add it here or HDFS-12381 as a general comment on security and not only about the Web UI. > Federation UI > - > > Key: HDFS-12273 > URL: https://issues.apache.org/jira/browse/HDFS-12273 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: federationUI-1.png, federationUI-2.png, > federationUI-3.png, HDFS-12273-HDFS-10467-000.patch, > HDFS-12273-HDFS-10467-001.patch, HDFS-12273-HDFS-10467-002.patch, > HDFS-12273-HDFS-10467-003.patch, HDFS-12273-HDFS-10467-004.patch > > > Add the Web UI to the Router to expose the status of the federated cluster. > It includes the federation metrics. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-12381) [Documentation] Adding configuration keys for the Router
[ https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168485#comment-16168485 ] Íñigo Goiri edited comment on HDFS-12381 at 9/15/17 8:41 PM: - #2 and #3 sound good. For #1, you are talking about adding this in the documentation? If so, that's OK with me. In the deployment section or somewhere else? Feel free to ask for any other documentation addition. I've been heads down with this for a while and everything seems obvious to me and I'm assuming a lot of things. The easier it is to understand, the more chances for this to be adopted. was (Author: elgoiri): #2 and #3 sounds good. For #1, you are talking about adding this in the documentation? If so, that's OK with me. In the deployment section or somewhere else? Feel free to ask for any other documentation addition. I've been heads down with this for a while and everything seems obvious to me and I'm assuming a lot of things. The easier it is to understand, the more chances for this to be adopted. > [Documentation] Adding configuration keys for the Router > > > Key: HDFS-12381 > URL: https://issues.apache.org/jira/browse/HDFS-12381 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: HDFS-10467 > > Attachments: HDFS-12381-HDFS-10467.000.patch, > HDFS-12381-HDFS-10467.001.patch > > > Adding configuration options in tabular format. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12450) Fixing TestNamenodeHeartbeat and support non-HA
[ https://issues.apache.org/jira/browse/HDFS-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168487#comment-16168487 ] Chris Douglas commented on HDFS-12450: -- Yup +1 > Fixing TestNamenodeHeartbeat and support non-HA > --- > > Key: HDFS-12450 > URL: https://issues.apache.org/jira/browse/HDFS-12450 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12450-HDFS-10467.000.patch, > HDFS-12450-HDFS-10467.001.patch, HDFS-12450-HDFS-10467.002.patch > > > The way the service RPC address is obtained changed and showed a problem with > {{TestNamenodeHeartbeat}} where the address wasn't properly set for the unit > tests. > In addition, the {{NamenodeHeartbeatService}} did not provide a good > experience for non-HA nameservices. This also covers a better logging for > those. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router
[ https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168485#comment-16168485 ] Íñigo Goiri commented on HDFS-12381: #2 and #3 sounds good. For #1, you are talking about adding this in the documentation? If so, that's OK with me. In the deployment section or somewhere else? Feel free to ask for any other documentation addition. I've been heads down with this for a while and everything seems obvious to me and I'm assuming a lot of things. The easier it is to understand, the more chances for this to be adopted. > [Documentation] Adding configuration keys for the Router > > > Key: HDFS-12381 > URL: https://issues.apache.org/jira/browse/HDFS-12381 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: HDFS-10467 > > Attachments: HDFS-12381-HDFS-10467.000.patch, > HDFS-12381-HDFS-10467.001.patch > > > Adding configuration options in tabular format. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12381) [Documentation] Adding configuration keys for the Router
[ https://issues.apache.org/jira/browse/HDFS-12381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168475#comment-16168475 ] Brahma Reddy Battula commented on HDFS-12381: - [~goirix] thanks for updating the patch. I am thinking of the following as well; sorry to trouble you. 1) Can we add a stop command for the router? Maybe also update the start command like "hdfs --daemon start router"? 2) How about adding the default value for each property in the table (so three columns in total)? And change the descriptions of the remaining boolean properties as I mentioned in an earlier comment? 3) Fix the following two typos as well, like below: bq.Advanced functions like snapshotting, encryption Advanced functions like snapshot, encryption bq.Adminstrators can query information Administrators can query information > [Documentation] Adding configuration keys for the Router > > > Key: HDFS-12381 > URL: https://issues.apache.org/jira/browse/HDFS-12381 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Fix For: HDFS-10467 > > Attachments: HDFS-12381-HDFS-10467.000.patch, > HDFS-12381-HDFS-10467.001.patch > > > Adding configuration options in tabular format. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12450) Fixing TestNamenodeHeartbeat and support non-HA
[ https://issues.apache.org/jira/browse/HDFS-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168458#comment-16168458 ] Íñigo Goiri commented on HDFS-12450: Jenkins and I collided :) Uploaded a patch with the fix. I assume [~brahmareddy]'s +1 stands. [~chris.douglas], good to go? > Fixing TestNamenodeHeartbeat and support non-HA > --- > > Key: HDFS-12450 > URL: https://issues.apache.org/jira/browse/HDFS-12450 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12450-HDFS-10467.000.patch, > HDFS-12450-HDFS-10467.001.patch, HDFS-12450-HDFS-10467.002.patch > > > The way the service RPC address is obtained changed and showed a problem with > {{TestNamenodeHeartbeat}} where the address wasn't properly set for the unit > tests. > In addition, the {{NamenodeHeartbeatService}} did not provide a good > experience for non-HA nameservices. This also covers a better logging for > those. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12450) Fixing TestNamenodeHeartbeat and support non-HA
[ https://issues.apache.org/jira/browse/HDFS-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12450: --- Attachment: HDFS-12450-HDFS-10467.002.patch > Fixing TestNamenodeHeartbeat and support non-HA > --- > > Key: HDFS-12450 > URL: https://issues.apache.org/jira/browse/HDFS-12450 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12450-HDFS-10467.000.patch, > HDFS-12450-HDFS-10467.001.patch, HDFS-12450-HDFS-10467.002.patch > > > The way the service RPC address is obtained changed and showed a problem with > {{TestNamenodeHeartbeat}} where the address wasn't properly set for the unit > tests. > In addition, the {{NamenodeHeartbeatService}} did not provide a good > experience for non-HA nameservices. This also covers a better logging for > those. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12450) Fixing TestNamenodeHeartbeat and support non-HA
[ https://issues.apache.org/jira/browse/HDFS-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168454#comment-16168454 ] Hadoop QA commented on HDFS-12450: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-10467 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 36s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 47s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 34s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} HDFS-10467 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} HDFS-10467 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} javac {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 30s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 80m 48s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}105m 51s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.TestWriteReadStripedFile | | | hadoop.hdfs.TestDFSClientRetries | | | hadoop.hdfs.TestLeaseRecoveryStriped | | | hadoop.hdfs.TestClientProtocolForPipelineRecovery | | | hadoop.hdfs.server.namenode.TestReencryptionWithKMS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:71bbb86 | | JIRA Issue | HDFS-12450 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12887404/HDFS-12450-HDFS-10467.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7afbf3b76091 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10467 / 679e31a | | Default Java | 1.8.0_144 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/21168/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/21168/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/21168/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output |
[jira] [Commented] (HDFS-12450) Fixing TestNamenodeHeartbeat and support non-HA
[ https://issues.apache.org/jira/browse/HDFS-12450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168455#comment-16168455 ] Íñigo Goiri commented on HDFS-12450: The {{keySet()}} fix looks good to me. Tested it in the cluster and works as expected. I think there is an unused import now there but I'll let jenkins come back with it. > Fixing TestNamenodeHeartbeat and support non-HA > --- > > Key: HDFS-12450 > URL: https://issues.apache.org/jira/browse/HDFS-12450 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri > Fix For: HDFS-10467 > > Attachments: HDFS-12450-HDFS-10467.000.patch, > HDFS-12450-HDFS-10467.001.patch > > > The way the service RPC address is obtained changed and showed a problem with > {{TestNamenodeHeartbeat}} where the address wasn't properly set for the unit > tests. > In addition, the {{NamenodeHeartbeatService}} did not provide a good > experience for non-HA nameservices. This also covers a better logging for > those. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12441) Suppress UnresolvedPathException in namenode log
[ https://issues.apache.org/jira/browse/HDFS-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan Roberts updated HDFS-12441: -- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) > Suppress UnresolvedPathException in namenode log > > > Key: HDFS-12441 > URL: https://issues.apache.org/jira/browse/HDFS-12441 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0 > > Attachments: HDFS-12441.patch > > > {{UnresolvedPathException}} is thrown as a normal part of resolving symlinks. This > doesn't need to be logged at all. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12441) Suppress UnresolvedPathException in namenode log
[ https://issues.apache.org/jira/browse/HDFS-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16168433#comment-16168433 ] Nathan Roberts commented on HDFS-12441: --- Cherry-picked to branch-3.0, branch-2, and branch-2.8. > Suppress UnresolvedPathException in namenode log > > > Key: HDFS-12441 > URL: https://issues.apache.org/jira/browse/HDFS-12441 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0 > > Attachments: HDFS-12441.patch > > > {{UnresolvedPathException}} is thrown as a normal part of resolving symlinks. This > doesn't need to be logged at all. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-12472) Add JUNIT timeout to TestBlockStatsMXBean
[ https://issues.apache.org/jira/browse/HDFS-12472?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham reassigned HDFS-12472: - Assignee: Bharat Viswanadham > Add JUNIT timeout to TestBlockStatsMXBean > -- > > Key: HDFS-12472 > URL: https://issues.apache.org/jira/browse/HDFS-12472 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Lei (Eddy) Xu >Assignee: Bharat Viswanadham >Priority: Minor > > Add Junit timeout to {{TestBlockStatsMXBean}} so that it can show up in the > test failure report if timeout occurs. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12441) Suppress UnresolvedPathException in namenode log
[ https://issues.apache.org/jira/browse/HDFS-12441?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nathan Roberts updated HDFS-12441: -- Fix Version/s: 2.8.3 2.9.0 > Suppress UnresolvedPathException in namenode log > > > Key: HDFS-12441 > URL: https://issues.apache.org/jira/browse/HDFS-12441 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Kihwal Lee >Assignee: Kihwal Lee >Priority: Minor > Fix For: 2.9.0, 3.0.0-beta1, 2.8.3, 3.1.0 > > Attachments: HDFS-12441.patch > > > {{UnresolvedPathException}} is thrown as a normal part of resolving symlinks. This > doesn't need to be logged at all. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org