[jira] [Commented] (HDFS-11291) Avoid unnecessary edit log for setStoragePolicy() and setReplication()
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16920118#comment-16920118 ]

Surendra Singh Lilhore commented on HDFS-11291:
---
Thanks [~hemanthboyina] for the patch.
{quote}One more change required is, in both cases, where there is no edit txn added, no need to wait for logSync().{quote}
[~vinayakumarb], logSync() can be avoided by checking the current call's transaction ID, which is stored in {{FSEditLog.myTransactionId}}. If edit logging is skipped, {{myTransactionId}} stays {{Long.MAX_VALUE}}; if the value is {{Long.MAX_VALUE}}, there is no need to call {{logSync()}}.
{quote}Also I have doubt whether its correct to log audit when there is no change done and no edit txn added.{quote}
I feel the audit should still be logged, because from the user's point of view the operation succeeded and the value they set is available in the namespace. [~vinayakumarb], what is your opinion?

> Avoid unnecessary edit log for setStoragePolicy() and setReplication()
> --
>
> Key: HDFS-11291
> URL: https://issues.apache.org/jira/browse/HDFS-11291
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Surendra Singh Lilhore
> Assignee: hemanthboyina
> Priority: Major
> Attachments: HDFS-11291.001.patch, HDFS-11291.002.patch, HDFS-11291.003.patch
>
> We set the storage policy for a file without first checking its current policy, to avoid an extra getStoragePolicy() RPC call. Currently the namenode does not check the current storage policy before setting a new one and adding edit logs. If the old and new storage policies are the same, we can avoid the set operation.

--
This message was sent by Atlassian Jira (v8.3.2#803003)
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
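A minimal sketch of the logSync() skip described in the comment above: a {{Long.MAX_VALUE}} sentinel in a per-thread transaction ID means "no edit transaction was logged by this call", so the sync can be skipped. The class and method names below are illustrative, not the real FSEditLog API (only the {{myTransactionId}} field and the sentinel value come from the comment).

```java
// Hypothetical, simplified model of the proposed logSync() skip.
// In real HDFS, FSEditLog.myTransactionId is a per-thread holder; here we
// model it with a ThreadLocal<Long> and a Long.MAX_VALUE sentinel.
public class EditLogSyncSketch {

    // Sentinel meaning "no edit transaction was logged by this call".
    static final long NO_TXID = Long.MAX_VALUE;

    // Per-call transaction id, as in FSEditLog.myTransactionId.
    private static final ThreadLocal<Long> myTransactionId =
        ThreadLocal.withInitial(() -> NO_TXID);

    private static long nextTxId = 1;

    // Called only when a real change happened and an edit op was written.
    public static void logEdit() {
        myTransactionId.set(nextTxId++);
    }

    // logSync() is needed only if this call actually logged a transaction.
    public static boolean needsSync() {
        return myTransactionId.get() != NO_TXID;
    }

    // Reset between RPC calls (the real code re-initializes per call).
    public static void reset() {
        myTransactionId.set(NO_TXID);
    }

    public static void main(String[] args) {
        reset();
        System.out.println("needsSync before edit: " + needsSync());
        logEdit();
        System.out.println("needsSync after edit: " + needsSync());
    }
}
```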
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918901#comment-16918901 ]

hemanthboyina commented on HDFS-11291:
--
Uploaded the patch, please check [~surendrasingh].
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16918184#comment-16918184 ]

Hadoop QA commented on HDFS-11291:
--
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
|| Prechecks ||
| 0 | reexec | 0m 21s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| trunk Compile Tests ||
| +1 | mvninstall | 18m 28s | trunk passed |
| +1 | compile | 0m 54s | trunk passed |
| +1 | checkstyle | 0m 46s | trunk passed |
| +1 | mvnsite | 1m 4s | trunk passed |
| +1 | shadedclient | 13m 7s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 0s | trunk passed |
| +1 | javadoc | 0m 48s | trunk passed |
|| Patch Compile Tests ||
| +1 | mvninstall | 0m 57s | the patch passed |
| +1 | compile | 0m 51s | the patch passed |
| +1 | javac | 0m 51s | the patch passed |
| +1 | checkstyle | 0m 41s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 349 unchanged - 2 fixed = 349 total (was 351) |
| +1 | mvnsite | 0m 57s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 20s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 4s | the patch passed |
| +1 | javadoc | 0m 47s | the patch passed |
|| Other Tests ||
| -1 | unit | 80m 51s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 137m 16s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| | hadoop.hdfs.server.datanode.TestDataNodeMetrics |
| | hadoop.hdfs.tools.TestDFSZKFailoverController |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.1 Server=19.03.1 Image:yetus/hadoop:bdbca0e53b4 |
| JIRA Issue | HDFS-11291 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12978809/HDFS-11291.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux d2c9c1e2fd97 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6f2226a |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/27705/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/27705/testReport/ |
| Max. process+thread count | 3
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917684#comment-16917684 ]

Hadoop QA commented on HDFS-11291:
--
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 7s | HDFS-11291 does not apply to trunk. Rebase required? Wrong branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11291 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12845814/HDFS-11291.002.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/27700/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917611#comment-16917611 ]

Surendra Singh Lilhore commented on HDFS-11291:
---
Thanks [~hemanthboyina]. Assigned to you.
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917450#comment-16917450 ]

hemanthboyina commented on HDFS-11291:
--
Hi [~surendrasingh], I would like to work on this.
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917447#comment-16917447 ]

Surendra Singh Lilhore commented on HDFS-11291:
---
This issue should be fixed. I saw logs where unnecessary edit log entries are being written even though nothing changed:
{code}
2019-08-27 10:50:07,281 INFO org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream: Fast-forwarding stream 'http://smc-nn02.jq:8480/getJournal?jid=nameservice1&segmentTxId=4358634905&storageInfo=-64%3A1209372736%3A1547031657597%3Acluster7&inProgressOk=true, http://smc-nn03.jq:8480/getJournal?jid=nameservice1&segmentTxId=4358634905&storageInfo=-64%3A1209372736%3A1547031657597%3Acluster7&inProgressOk=true, http://smc-nn01.jq:8480/getJournal?jid=nameservice1&segmentTxId=4358634905&storageInfo=-64%3A1209372736%3A1547031657597%3Acluster7&inProgressOk=true' to transaction ID 4358634905
2019-08-27 10:50:07,281 INFO org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream: Fast-forwarding stream 'http://smc-nn02.jq:8480/getJournal?jid=nameservice1&segmentTxId=4358634905&storageInfo=-64%3A1209372736%3A1547031657597%3Acluster7&inProgressOk=true' to transaction ID 4358634905
2019-08-27 10:50:07,651 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication from 2 to 2 for /user/smctest/.sparkStaging/application_1561429828507_20410/__spark_libs__4604495262435387108.zip
2019-08-27 10:50:07,665 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication from 2 to 2 for /user/smctest/.sparkStaging/application_1561429828507_20410/__spark_conf__.zip
2019-08-27 10:50:07,816 INFO org.apache.hadoop.hdfs.server.namenode.FSDirectory: Increasing replication from 2 to 2 for /user/smctest/.sparkStaging/application_1561429828507_20411/__spark_libs__4607202820795793784.zip
{code}
Here the file replication is already 2, and it is being set to 2 again. This type of log entry is not required.
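The fix the comment above asks for can be sketched as a no-op short-circuit: when the requested replication equals the current one, return success without writing an edit transaction. This is a toy model, not the real FSNamesystem/FSDirAttrOp code; the class name and the edit-log list are illustrative.

```java
// Hypothetical sketch: skip the set, the edit txn, and the
// "Increasing replication from 2 to 2" log line when nothing changes.
import java.util.ArrayList;
import java.util.List;

public class SetReplicationSketch {
    private short replication;
    // Stands in for the namenode edit log; entries appended only on change.
    public final List<String> editLog = new ArrayList<>();

    public SetReplicationSketch(short initial) {
        this.replication = initial;
    }

    // Returns true (success) in both cases, but logs an edit only on change,
    // keeping the operation idempotent from the client's point of view.
    public boolean setReplication(short newRepl) {
        if (newRepl == replication) {
            return true;   // no-op: no edit txn added, nothing to logSync()
        }
        replication = newRepl;
        editLog.add("OP_SET_REPLICATION " + newRepl);
        return true;
    }

    public short getReplication() {
        return replication;
    }

    public static void main(String[] args) {
        SetReplicationSketch file = new SetReplicationSketch((short) 2);
        file.setReplication((short) 2);   // same value: no edit entry
        file.setReplication((short) 3);   // real change: one edit entry
        System.out.println("edits written: " + file.editLog.size());
    }
}
```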
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970263#comment-15970263 ]

Hadoop QA commented on HDFS-11291:
--
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | docker | 0m 5s | Docker failed to build yetus/hadoop:a9ad5d6. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-11291 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12845814/HDFS-11291.002.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/19102/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT http://yetus.apache.org |
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15970260#comment-15970260 ]

Surendra Singh Lilhore commented on HDFS-11291:
---
Thanks [~vinayrpet].
bq. Also I have doubt whether its correct to log audit when there is no change done and no edit txn added.
Hi [~arpitagarwal], can you give your view on this?
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15831543#comment-15831543 ]

Vinayakumar B commented on HDFS-11291:
--
bq. I think this change is not required in unprotectedSetStoragePolicy(). setStoragePolicy() will just return void.
Yes, you are right, it doesn't return anything.

One more change required: in both cases, where there is no edit txn added, there is no need to wait for logSync(). So I feel that if we want to skip logSync() and still return a success response to the client, some refactoring is required in the return types of {{FSDirAttrOp#setStoragePolicy(..)}} and {{FSDirAttrOp#setReplication(..)}} to indicate that no change was made because the value is the same, and to skip logSync() (and the audit as well, if not required).

Also, I have a doubt whether it is correct to log audit when there is no change done and no edit txn added. Maybe [~andrew.wang]/[~arpitagarwal] can give their view on this.
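One way the return-type refactoring described above could look: the helper reports both success and whether an edit transaction was actually added, so the caller can skip logSync() (and, if desired, the audit) on a no-op. Everything here is a hypothetical sketch, not the real FSDirAttrOp signatures; {{SetResult}}, {{setStoragePolicyRpc}}, and the {{logSyncCalls}} counter are invented for illustration.

```java
// Hypothetical sketch of a result type that lets the caller distinguish
// "succeeded with a change" from "succeeded as a no-op".
public class AttrOpResultSketch {

    // Carries both the outcome and whether an edit txn was added.
    public static final class SetResult {
        public final boolean success;
        public final boolean changed;
        SetResult(boolean success, boolean changed) {
            this.success = success;
            this.changed = changed;
        }
    }

    private byte storagePolicyId;
    public int logSyncCalls = 0;   // stands in for real logSync() invocations

    public AttrOpResultSketch(byte initialPolicy) {
        this.storagePolicyId = initialPolicy;
    }

    // Helper in the style of FSDirAttrOp#setStoragePolicy(..): succeeds
    // either way, but reports changed=false when the value is already set.
    public SetResult setStoragePolicy(byte newPolicy) {
        if (newPolicy == storagePolicyId) {
            return new SetResult(true, false);   // same value: no edit txn
        }
        storagePolicyId = newPolicy;
        return new SetResult(true, true);
    }

    // Caller-side pattern: only sync the edit log when a change was made.
    public boolean setStoragePolicyRpc(byte newPolicy) {
        SetResult r = setStoragePolicy(newPolicy);
        if (r.changed) {
            logSyncCalls++;
        }
        return r.success;
    }
}
```

The same shape would apply to setReplication(..), which already returns a boolean and would instead return this richer result.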
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15826013#comment-15826013 ]

Surendra Singh Lilhore commented on HDFS-11291:
---
I think this change is not required in {{unprotectedSetStoragePolicy()}}; {{setStoragePolicy()}} just returns void.
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15825980#comment-15825980 ]

Vinayakumar B commented on HDFS-11291:
--
bq. If we add this check in unprotectedSetReplication() then FSNamesystem.setReplication(..) will return false and I think this is wrong..
If you want to make that operation idempotent when the old and new replication factor are the same, then the same should be done for {{unprotectedSetStoragePolicy()}} when the old and new storage policy are the same.
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15825893#comment-15825893 ]

Surendra Singh Lilhore commented on HDFS-11291:
---
Thanks [~vinayrpet] for the review.
bq. for setReplication() following check can be added directly inside unprotectedSetReplication() and avoid other changes related to this.
If we add this check in unprotectedSetReplication(), then FSNamesystem.setReplication(..) will return false, and I think this is wrong. I feel we should return true even when the old and new replication are the same.
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15825748#comment-15825748 ]

Vinayakumar B commented on HDFS-11291:
--
For setReplication(), the following check can be added directly inside unprotectedSetReplication(), avoiding the other changes related to this:
{code}
if (inode.asFile().getPreferredBlockReplication() == replication) {
  return null;
}
{code}
The whitespace issues can be fixed together as well. +1 once addressed.
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15807066#comment-15807066 ]

Surendra Singh Lilhore commented on HDFS-11291:
---
Thanks [~linyiqun] for the review.
bq. Now I see there are some checkstyle and whitespace warnings generated, would you have a clean up?
The checkstyle warning is about method length, and I think we don't need to fix it. I will remove the whitespace in the next patch. I will wait for others' review.
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15803531#comment-15803531 ]

Yiqun Lin commented on HDFS-11291:
--
Thanks [~surendrasingh] for updating the patch. Now I see there are some checkstyle and whitespace warnings generated; would you clean them up? +1 once those are addressed. Please wait for a binding +1 from others. Thanks.
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15801883#comment-15801883 ]

Hadoop QA commented on HDFS-11291:
--
(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 11s | Docker mode activated. |
| +1 | @author | 0m 1s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
| +1 | mvninstall | 13m 8s | trunk passed |
| +1 | compile | 0m 47s | trunk passed |
| +1 | checkstyle | 0m 31s | trunk passed |
| +1 | mvnsite | 0m 52s | trunk passed |
| +1 | mvneclipse | 0m 13s | trunk passed |
| +1 | findbugs | 1m 44s | trunk passed |
| +1 | javadoc | 0m 39s | trunk passed |
| +1 | mvninstall | 0m 48s | the patch passed |
| +1 | compile | 0m 45s | the patch passed |
| +1 | javac | 0m 45s | the patch passed |
| -0 | checkstyle | 0m 28s | hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 370 unchanged - 1 fixed = 371 total (was 371) |
| +1 | mvnsite | 0m 49s | the patch passed |
| +1 | mvneclipse | 0m 10s | the patch passed |
| -1 | whitespace | 0m 0s | The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply |
| +1 | findbugs | 1m 52s | the patch passed |
| +1 | javadoc | 0m 38s | the patch passed |
| -1 | unit | 82m 6s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 30s | The patch does not generate ASF License warnings. |
| | | 107m 24s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.datanode.TestDataNodeMultipleRegistrations |
| | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| Timed out junit tests | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureToleration |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11291 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12845814/HDFS-11291.002.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux a16a7e9206fc 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a605ff3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18036/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/18036/artifact/patchprocess/whitespace-eol.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18036/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18036/testReport/ |
| modules | C: hadoop-
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800087#comment-15800087 ]

Yiqun Lin commented on HDFS-11291:
----------------------------------

Thanks [~surendrasingh] for the work on this. Yes, this will avoid extra RPC calls. I looked into your patch and it almost looks good to me. Only one place I noticed: in the patch, you use the method {{INodeFile#getFileReplication}} to get the old replication,
{code}
+    if (inode.asFile().getFileReplication() == replication) {
+      return true;
+    }
{code}
However, the original code uses the method {{INodeFile#getPreferredBlockReplication}} (which gets the maximum replication value across the file and its blocks):
{code}
short oldBR = file.getPreferredBlockReplication();
{code}
Should we keep this consistent? In addition, can you run the failed test {{hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer}} locally? It seems related. Everything else looks good to me. Thanks.

> Avoid unnecessary edit log for setStoragePolicy() and setReplication()
> ----------------------------------------------------------------------
>
>                 Key: HDFS-11291
>                 URL: https://issues.apache.org/jira/browse/HDFS-11291
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Surendra Singh Lilhore
>            Assignee: Surendra Singh Lilhore
>         Attachments: HDFS-11291.001.patch
>
>
> We are setting the storage policy for a file without checking the file's current policy, to avoid an extra getStoragePolicy() RPC call. Currently the namenode does not check the current storage policy before setting a new one and adding edit logs. I think if the old and new storage policies are the same we can avoid the set operation.
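The distinction Yiqun Lin raises can be sketched with a small, self-contained example. The classes below are simplified stand-ins, not the real Hadoop {{INodeFile}} API: they only model why a file-level replication check and a preferred (max-of-file-and-blocks) replication check can disagree, and why the early-return guard in the patch should use the same value the rest of the code uses.

```java
// Simplified stand-in (NOT the real org.apache.hadoop.hdfs INodeFile),
// illustrating why the short-circuit check should compare against the
// preferred (max) replication rather than the file-level replication alone.
class FileModel {
    private final short fileReplication;      // replication recorded on the file
    private final short maxBlockReplication;  // highest replication among its blocks

    FileModel(short fileReplication, short maxBlockReplication) {
        this.fileReplication = fileReplication;
        this.maxBlockReplication = maxBlockReplication;
    }

    short getFileReplication() {
        return fileReplication;
    }

    // Mirrors the idea of INodeFile#getPreferredBlockReplication:
    // the maximum of the file's replication and its blocks' replication.
    short getPreferredBlockReplication() {
        return (short) Math.max(fileReplication, maxBlockReplication);
    }
}

public class SetReplicationSketch {
    // Returns true when setReplication() can be short-circuited,
    // i.e. no edit-log transaction would need to be written.
    static boolean isNoOp(FileModel file, short newReplication) {
        return file.getPreferredBlockReplication() == newReplication;
    }

    public static void main(String[] args) {
        // File-level replication is 3, but one block still carries
        // replication 5 (e.g. a decrease still in progress), so the
        // preferred replication is 5, not 3.
        FileModel file = new FileModel((short) 3, (short) 5);
        System.out.println(isNoOp(file, (short) 3)); // false: blocks still at 5
        System.out.println(isNoOp(file, (short) 5)); // true: already the effective value
    }
}
```

A check based on `getFileReplication()` alone would report the second call as a no-op and the first as a change in exactly the opposite cases, which is the inconsistency the review comment points at.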
[ https://issues.apache.org/jira/browse/HDFS-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15799120#comment-15799120 ]

Hadoop QA commented on HDFS-11291:
----------------------------------

| (x) *{color:red}-1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 27s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 277 unchanged - 0 fixed = 280 total (was 277) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 31s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 51s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}129m 29s{color} | {color:black} {color} |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.balancer.TestBalancer |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
| | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure |
| | hadoop.hdfs.TestSafeModeWithStripedFile |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-11291 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12845579/HDFS-11291.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux f93d9067d781 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / a0a2761 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/18019/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/18019/artifact/patchprocess/whitespace-eol.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/18019/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/18019/testReport/ |
| modules | C: hadoop-hdfs-project/had