[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316958&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316958 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 23/Sep/19 20:22
Start Date: 23/Sep/19 20:22
Worklog Time Spent: 10m

Work Description: anuengineer commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-534268476

@dineshchitlangia Thank you for the contribution. I have committed this patch to the trunk.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
---
Worklog Id: (was: 316958)
Time Spent: 2.5h (was: 2h 20m)

> Create RepeatedKeyInfo structure to be saved in deletedTable
> ------------------------------------------------------------
> Key: HDDS-2161
> URL: https://issues.apache.org/jira/browse/HDDS-2161
> Project: Hadoop Distributed Data Store
> Issue Type: Sub-task
> Reporter: Dinesh Chitlangia
> Assignee: Dinesh Chitlangia
> Priority: Major
> Labels: pull-request-available
> Time Spent: 2.5h
> Remaining Estimate: 0h
>
> Currently, OM Metadata deletedTable stores a <keyName, KeyInfo> mapping.
> When a user deletes a key, its KeyInfo is moved to deletedTable.
> If a user creates and deletes a key with the exact same name in quick succession,
> repeatedly, then the old KeyInfo can get overwritten and we may be
> left with dangling blocks.
> To address this, we currently append the delete timestamp to the key name and so preserve
> the multiple delete attempts for the same key name.
> However, for GDPR compliance we need a way to check whether a key has been deleted from
> deletedTable; given the behavior above, we may not get accurate
> information, and it may also confuse users.
>
> This Jira aims to:
> # Create a new structure, RepeatedKeyInfo, which allows us to group multiple
> KeyInfo entries which can be saved to deletedTable corresponding to a keyname.
> # Due to this, before we move a key to deletedTable, we need to check if a key
> with the same name exists. If yes, then fetch the existing instance, add the
> latest key to the list, and store it back to deletedTable; else create a new
> instance and save it to the table.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
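The check-then-append flow the Jira describes can be sketched in plain Java. This is a minimal, hypothetical sketch only: `RepeatedKeyInfo` below is an ordinary class standing in for the protobuf-generated message, a `HashMap` stands in for the RocksDB-backed deletedTable, and the method name `moveToDeletedTable` is illustrative, not Ozone's real API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the HDDS-2161 append-or-create logic; names are stand-ins,
// not the actual Ozone Manager classes.
class DeletedTableSketch {

  // Groups every deleted generation of one key name, so a later delete of
  // the same name never overwrites an earlier one (no dangling blocks).
  static final class RepeatedKeyInfo {
    final List<String> keyInfoList = new ArrayList<>();
  }

  // Before moving a key to deletedTable, check whether a group for this
  // key name already exists; if so append to it, otherwise create one.
  static void moveToDeletedTable(Map<String, RepeatedKeyInfo> deletedTable,
                                 String keyName, String keyInfo) {
    RepeatedKeyInfo group =
        deletedTable.getOrDefault(keyName, new RepeatedKeyInfo());
    group.keyInfoList.add(keyInfo);
    deletedTable.put(keyName, group);
  }

  public static void main(String[] args) {
    Map<String, RepeatedKeyInfo> deletedTable = new HashMap<>();
    // Create and delete the same key name twice in quick succession:
    moveToDeletedTable(deletedTable, "vol/bucket/key1", "gen-1");
    moveToDeletedTable(deletedTable, "vol/bucket/key1", "gen-2");
    // Both generations survive under the single key name.
    System.out.println(deletedTable.get("vol/bucket/key1").keyInfoList);
    // prints [gen-1, gen-2]
  }
}
```

Keying the group by the plain key name (rather than appending a delete timestamp to the name, as before) is what lets a GDPR-style lookup answer "was this key deleted?" with a single point read.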
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316959&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316959 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 23/Sep/19 20:22
Start Date: 23/Sep/19 20:22
Worklog Time Spent: 10m

Work Description: anuengineer commented on pull request #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491

Issue Time Tracking
---
Worklog Id: (was: 316959)
Time Spent: 2h 40m (was: 2.5h)
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316149&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316149 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 21/Sep/19 13:24
Start Date: 21/Sep/19 13:24
Worklog Time Spent: 10m

Work Description: dineshchitlangia commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533797885

failures not related to patch.

Issue Time Tracking
---
Worklog Id: (was: 316149)
Time Spent: 2h 20m (was: 2h 10m)
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316067&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316067 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 21/Sep/19 06:01
Start Date: 21/Sep/19 06:01
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533771220

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 1368 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 15 | Maven dependency ordering for branch |
| -1 | mvninstall | 31 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 24 | hadoop-ozone in trunk failed. |
| -1 | compile | 21 | hadoop-hdds in trunk failed. |
| -1 | compile | 16 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 49 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 836 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 22 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 19 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 933 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 30 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 20 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 18 | Maven dependency ordering for patch |
| -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 29 | hadoop-ozone in the patch failed. |
| -1 | compile | 25 | hadoop-hdds in the patch failed. |
| -1 | compile | 19 | hadoop-ozone in the patch failed. |
| -1 | cc | 25 | hadoop-hdds in the patch failed. |
| -1 | cc | 19 | hadoop-ozone in the patch failed. |
| -1 | javac | 25 | hadoop-hdds in the patch failed. |
| -1 | javac | 19 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 57 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 684 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 21 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 30 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 20 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 28 | hadoop-hdds in the patch failed. |
| -1 | unit | 23 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
| | | 3635 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1491 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux b026b2a094fd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / efed445 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall |
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316060&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316060 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 21/Sep/19 04:56
Start Date: 21/Sep/19 04:56
Worklog Time Spent: 10m

Work Description: dineshchitlangia commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533767921

/retest

Issue Time Tracking
---
Worklog Id: (was: 316060)
Time Spent: 2h (was: 1h 50m)
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316052&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316052 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 21/Sep/19 04:34
Start Date: 21/Sep/19 04:34
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533766852

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 42 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 1 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 15 | Maven dependency ordering for branch |
| -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
| -1 | compile | 22 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 52 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| -1 | shadedclient | 102 | branch has errors when building and testing our client artifacts. |
| -1 | javadoc | 44 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 183 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 25 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 15 | Maven dependency ordering for patch |
| -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
| -1 | compile | 24 | hadoop-ozone in the patch failed. |
| -1 | cc | 24 | hadoop-ozone in the patch failed. |
| -1 | javac | 24 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 51 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| -1 | shadedclient | 30 | patch has errors when building and testing our client artifacts. |
| -1 | javadoc | 46 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 22 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 243 | hadoop-hdds in the patch failed. |
| -1 | unit | 24 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 27 | The patch does not generate ASF License warnings. |
| | | 1732 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdds.scm.block.TestBlockManager |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1491 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux aeefee5b3fd6 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d7d6ec8 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-compile-hadoop-ozone.txt |
| cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-findbugs-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/testReport/ |
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316044&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316044 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 21/Sep/19 03:13
Start Date: 21/Sep/19 03:13
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533762852

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 39 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 15 | Maven dependency ordering for branch |
| -1 | mvninstall | 29 | hadoop-ozone in trunk failed. |
| -1 | compile | 21 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 50 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| -1 | shadedclient | 97 | branch has errors when building and testing our client artifacts. |
| -1 | javadoc | 43 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 161 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 24 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 16 | Maven dependency ordering for patch |
| -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
| -1 | compile | 23 | hadoop-ozone in the patch failed. |
| -1 | cc | 23 | hadoop-ozone in the patch failed. |
| -1 | javac | 23 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 50 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| -1 | shadedclient | 30 | patch has errors when building and testing our client artifacts. |
| -1 | javadoc | 46 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 229 | hadoop-hdds in the patch failed. |
| -1 | unit | 26 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
| | | 1666 | |

| Reason | Tests |
|---:|:--|
| Failed junit tests | hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1491 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 5864805974c4 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d7d6ec8 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/patch-compile-hadoop-ozone.txt |
| cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/patch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/patch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/patch-findbugs-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/patch-unit-hadoop-hdds.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/3/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results |
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316042&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316042 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 21/Sep/19 02:46
Start Date: 21/Sep/19 02:46
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533761420

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 39 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 3 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 25 | Maven dependency ordering for branch |
| -1 | mvninstall | 28 | hadoop-ozone in trunk failed. |
| -1 | compile | 22 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 57 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| -1 | shadedclient | 110 | branch has errors when building and testing our client artifacts. |
| -1 | javadoc | 45 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 157 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 23 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 15 | Maven dependency ordering for patch |
| -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
| -1 | compile | 22 | hadoop-ozone in the patch failed. |
| -1 | cc | 22 | hadoop-ozone in the patch failed. |
| -1 | javac | 22 | hadoop-ozone in the patch failed. |
| -0 | checkstyle | 27 | hadoop-ozone: The patch generated 2 new + 0 unchanged - 0 fixed = 2 total (was 0) |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 1 | The patch has no whitespace issues. |
| -1 | shadedclient | 32 | patch has errors when building and testing our client artifacts. |
| -1 | javadoc | 44 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 23 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| +1 | unit | 233 | hadoop-hdds in the patch passed. |
| -1 | unit | 22 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
| | | 1693 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.1 Server=19.03.1 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1491 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 702b37690e30 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d7d6ec8 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/patch-compile-hadoop-ozone.txt |
| cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/patch-compile-hadoop-ozone.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/diff-checkstyle-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/patch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/patch-findbugs-hadoop-ozone.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/artifact/out/patch-unit-hadoop-ozone.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/2/testReport/ |
| Max. process+thread count | 511 (vs.
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316040&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316040 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 21/Sep/19 02:39
Start Date: 21/Sep/19 02:39
Worklog Time Spent: 10m

Work Description: dineshchitlangia commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533761076

/restest

Issue Time Tracking
---
Worklog Id: (was: 316040)
Time Spent: 1h 10m (was: 1h)
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316041=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316041 ] ASF GitHub Bot logged work on HDDS-2161: Author: ASF GitHub Bot Created on: 21/Sep/19 02:39 Start Date: 21/Sep/19 02:39 Worklog Time Spent: 10m Work Description: dineshchitlangia commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533761103 /retest This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 316041) Time Spent: 1h 20m (was: 1h 10m) > Create RepeatedKeyInfo structure to be saved in deletedTable > > > Key: HDDS-2161 > URL: https://issues.apache.org/jira/browse/HDDS-2161 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Dinesh Chitlangia >Assignee: Dinesh Chitlangia >Priority: Major > Labels: pull-request-available > Time Spent: 1h 20m > Remaining Estimate: 0h > > Currently, OM Metadata deletedTable stores > When a user deletes a Key, is moved to deletedTable. > If a user creates and deletes key with exact same name in quick succession > repeatedly, then old can get overwritten and we may be > left with dangling blocks. > To address this, currently we append delete timestamp to keyname and preserve > the multiple delete attempts for same key name. > However, for GDPR compliance we need a way to check if a key is deleted from > deletedTable and thus given the above explanation, we may not get accurate > information and it must also confuse the users. 
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316039&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316039 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 21/Sep/19 02:39
Start Date: 21/Sep/19 02:39
Worklog Time Spent: 10m

Work Description: dineshchitlangia commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533761076

/restest

Issue Time Tracking: Worklog Id: (was: 316039); Time Spent: 1h (was: 50m)
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=316035&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-316035 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 21/Sep/19 02:15
Start Date: 21/Sep/19 02:15
Worklog Time Spent: 10m

Work Description: dineshchitlangia commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533759741

/retest

Issue Time Tracking: Worklog Id: (was: 316035); Time Spent: 50m (was: 40m)
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=315987&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315987 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 20/Sep/19 22:57
Start Date: 20/Sep/19 22:57
Worklog Time Spent: 10m

Work Description: hadoop-yetus commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533736737

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|:--------|:--------|
| 0 | reexec | 71 | Docker mode activated. |
| | _ Prechecks _ | | |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 2 new or modified test files. |
| | _ trunk Compile Tests _ | | |
| 0 | mvndep | 25 | Maven dependency ordering for branch |
| -1 | mvninstall | 30 | hadoop-ozone in trunk failed. |
| -1 | compile | 21 | hadoop-ozone in trunk failed. |
| -0 | checkstyle | 34 | The patch fails to run checkstyle in hadoop-ozone |
| +1 | mvnsite | 0 | trunk passed |
| -1 | shadedclient | 107 | branch has errors when building and testing our client artifacts. |
| -1 | javadoc | 46 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 156 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 24 | hadoop-ozone in trunk failed. |
| | _ Patch Compile Tests _ | | |
| 0 | mvndep | 15 | Maven dependency ordering for patch |
| -1 | mvninstall | 30 | hadoop-ozone in the patch failed. |
| -1 | compile | 22 | hadoop-ozone in the patch failed. |
| -1 | cc | 22 | hadoop-ozone in the patch failed. |
| -1 | javac | 22 | hadoop-ozone in the patch failed. |
| -0 | checkstyle | 26 | The patch fails to run checkstyle in hadoop-ozone |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| -1 | shadedclient | 32 | patch has errors when building and testing our client artifacts. |
| -1 | javadoc | 46 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 24 | hadoop-ozone in the patch failed. |
| | _ Other Tests _ | | |
| -1 | unit | 253 | hadoop-hdds in the patch failed. |
| -1 | unit | 25 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 28 | The patch does not generate ASF License warnings. |
| | | 1739 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdds.scm.container.placement.algorithms.TestSCMContainerPlacementRackAware |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.2 Server=19.03.2 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1491 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux e35d020848de 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / d7d6ec8 |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/branch-compile-hadoop-ozone.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1491/out/maven-branch-checkstyle-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-compile-hadoop-ozone.txt |
| cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-compile-hadoop-ozone.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out/patch-compile-hadoop-ozone.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1491/1/artifact/out//home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1491/out/maven-patch-checkstyle-hadoop-ozone.txt |
| javadoc |
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=315979&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315979 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 20/Sep/19 22:37
Start Date: 20/Sep/19 22:37
Worklog Time Spent: 10m

Work Description: anuengineer commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533732606

+1, pending Jenkins. I think you don't need the change in KeymanagerImpl. But no harm in doing that change.

Issue Time Tracking: Worklog Id: (was: 315979); Time Spent: 0.5h (was: 20m)
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=315976&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315976 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 20/Sep/19 22:34
Start Date: 20/Sep/19 22:34
Worklog Time Spent: 10m

Work Description: anuengineer commented on issue #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491#issuecomment-533732216

/label ozone

Issue Time Tracking: Worklog Id: (was: 315976); Time Spent: 20m (was: 10m)
[jira] [Work logged] (HDDS-2161) Create RepeatedKeyInfo structure to be saved in deletedTable
[ https://issues.apache.org/jira/browse/HDDS-2161?focusedWorklogId=315956&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-315956 ]

ASF GitHub Bot logged work on HDDS-2161:
Author: ASF GitHub Bot
Created on: 20/Sep/19 22:07
Start Date: 20/Sep/19 22:07
Worklog Time Spent: 10m

Work Description: dineshchitlangia commented on pull request #1491: HDDS-2161. Create RepeatedKeyInfo structure to be saved in deletedTable
URL: https://github.com/apache/hadoop/pull/1491

/label ozone

Issue Time Tracking: Worklog Id: (was: 315956); Remaining Estimate: 0h; Time Spent: 10m