[jira] [Commented] (PHOENIX-5791) Eliminate false invalid row detection due to concurrent updates
[ https://issues.apache.org/jira/browse/PHOENIX-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17069097#comment-17069097 ]

Hadoop QA commented on PHOENIX-5791:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12998042/PHOENIX-5791.4.x-HBase-1.5.002.patch
against 4.x-HBase-1.5 branch at commit d04ba4eb945ba502652888220097ebca44f4a4dd.
ATTACHMENT ID: 12998042

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 25 new or modified tests.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+private long verifyIndexTable(String tableName, String indexName, Connection conn) throws Exception {
+// Now we rebuild the entire index table and expect that it is still good after the full rebuild
+long actualRowCountAfterCompaction = IndexScrutiny.scrutinizeIndex(conn, tableName, indexName);
++ "(k1 INTEGER NOT NULL, k2 INTEGER NOT NULL, a.v1 INTEGER, b.v2 INTEGER, c.v3 INTEGER, d.v4 INTEGER," +
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v1) INCLUDE(v2, v3)");
++ (RAND.nextBoolean() ? null : (RAND.nextInt() % nIndexValues)) + ", "
++ "(k1 INTEGER NOT NULL, k2 INTEGER NOT NULL, a.v1 INTEGER, b.v2 INTEGER, c.v3 INTEGER, d.v4 INTEGER," +
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v1) INCLUDE(v2, v3)");
+"UPSERT INTO " + tableName + " (k1, k2, b.v2, c.v3, d.v4) VALUES ("
++ (RAND.nextBoolean() ? null : RAND.nextInt()) + ", "

{color:green}+1 core tests{color}. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3665//testReport/
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3665//console

This message is automatically generated.

> Eliminate false invalid row detection due to concurrent updates
>
>                 Key: PHOENIX-5791
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-5791
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: Kadir OZDEMIR
>            Assignee: Kadir OZDEMIR
>            Priority: Major
>         Attachments: PHOENIX-5791.4.x-HBase-1.5.001.patch, PHOENIX-5791.4.x-HBase-1.5.002.patch
>          Time Spent: 5h 40m
>  Remaining Estimate: 0h
>
> IndexTool verification generates an expected list of index mutations from the
> data table rows and uses this list to check if index table rows are
> consistent with the data table. To do that, it follows these steps:
> # The data table rows are scanned with a raw scan. This raw scan is
> configured to read all versions of rows.
> # For each scanned row, the scanned cells are grouped into two sets:
> put and delete. The put set is the set of put cells and the delete set
> is the set of delete cells.
> # The put and delete sets for a given row are further grouped based on their
> timestamps into put and delete mutations such that all the cells in a
> mutation have the same timestamp.
> # The put and delete mutations are then sorted within a single list.
> Mutations in this list are sorted in ascending order of their timestamp.
> The above process assumes that for each data table update, the index table
> will be updated with the correct index row key. However, this assumption does
> not hold in the presence of concurrent updates.
> From the consistent indexing design (PHOENIX-5156) perspective, two or more
> pending updates from different batches on the same data row are concurrent if
> and only if for all of these updates the data table row state is read from
> HBase under the row lock and for none of them the row lock has been acquired
> a second time for updating the data table. In other words, all of them are
> in the first update phase concurrently. For concurrent updates, the first two
> update phases are done but the last update phase is skipped. This means the
> data table row will be updated by these updates but the corresponding index
> table rows will be left with the unverified status. Then, the read repair
> process will repair these unverified index rows during scans.
> Since expected index mutations are derived from the data table row after
> these concurrent mutations are applied, the expected list would not match
> the actual list of index mutations.
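For reference, the grouping described in steps 2-4 of the quoted description can be illustrated with a short, self-contained sketch. This is not the Phoenix IndexTool code: the class and member names (ExpectedMutationSketch, ExpectedMutation, groupByTimestamp) are made up for illustration, and the sketch only assumes the stock HBase 1.x client types (Cell, CellUtil, KeyValue) that the 4.x-HBase-1.5 branch builds against. In the real tool the per-row cells would come from a raw scan configured with scan.setRaw(true) and scan.setMaxVersions() (step 1); here a few KeyValue instances stand in for them.

{code:java}
// A minimal sketch (not the actual IndexTool verification code) of grouping the
// raw-scan cells of one data row into per-timestamp put/delete "expected
// mutations" and ordering them by timestamp, per steps 2-4 above.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class ExpectedMutationSketch {

    /** One expected "mutation": all cells of one kind (put or delete) sharing a timestamp. */
    static final class ExpectedMutation {
        final long timestamp;
        final boolean isDelete;
        final List<Cell> cells = new ArrayList<>();

        ExpectedMutation(long timestamp, boolean isDelete) {
            this.timestamp = timestamp;
            this.isDelete = isDelete;
        }
    }

    /** Groups the raw-scan cells of a single data row and returns them ordered by timestamp. */
    static List<ExpectedMutation> groupByTimestamp(List<Cell> rowCells) {
        // Step 2: split the cells into the put set and the delete set.
        // Step 3: within each set, bucket the cells by timestamp.
        Map<Long, ExpectedMutation> puts = new TreeMap<>();
        Map<Long, ExpectedMutation> deletes = new TreeMap<>();
        for (Cell cell : rowCells) {
            Map<Long, ExpectedMutation> target = CellUtil.isDelete(cell) ? deletes : puts;
            target.computeIfAbsent(cell.getTimestamp(),
                    ts -> new ExpectedMutation(ts, CellUtil.isDelete(cell))).cells.add(cell);
        }
        // Step 4: merge puts and deletes into a single list sorted by ascending timestamp.
        List<ExpectedMutation> all = new ArrayList<>();
        all.addAll(puts.values());
        all.addAll(deletes.values());
        all.sort((a, b) -> Long.compare(a.timestamp, b.timestamp));
        return all;
    }

    public static void main(String[] args) {
        byte[] row = Bytes.toBytes("k1");
        byte[] cf = Bytes.toBytes("a");
        List<Cell> rowCells = new ArrayList<>();
        // Two put cells at t=100 (one expected mutation), one put at t=200,
        // and a delete marker at t=300, as a raw scan with all versions might return them.
        rowCells.add(new KeyValue(row, cf, Bytes.toBytes("v1"), 100L, Bytes.toBytes(1)));
        rowCells.add(new KeyValue(row, cf, Bytes.toBytes("v2"), 100L, Bytes.toBytes(2)));
        rowCells.add(new KeyValue(row, cf, Bytes.toBytes("v1"), 200L, Bytes.toBytes(3)));
        rowCells.add(new KeyValue(row, cf, Bytes.toBytes("v1"), 300L, KeyValue.Type.DeleteColumn));
        for (ExpectedMutation m : groupByTimestamp(rowCells)) {
            System.out.println((m.isDelete ? "DELETE" : "PUT") + " @ " + m.timestamp
                    + " with " + m.cells.size() + " cell(s)");
        }
    }
}
{code}

The expected list built this way reflects the data table row after all concurrent batches have been applied, which is exactly why, as the description notes, it cannot be compared one-to-one against index rows that the skipped third update phase left unverified.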
[jira] [Commented] (PHOENIX-5791) Eliminate false invalid row detection due to concurrent updates
[ https://issues.apache.org/jira/browse/PHOENIX-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17067388#comment-17067388 ]

Hadoop QA commented on PHOENIX-5791:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12997778/PHOENIX-5791.4.x-HBase-1.5.001.patch
against 4.x-HBase-1.5 branch at commit 14a17d8f9b23089bea4c2a910430a73c669bc0fb.
ATTACHMENT ID: 12997778

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 25 new or modified tests.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+private long verifyIndexTable(String tableName, String indexName, Connection conn) throws Exception {
+// Now we rebuild the entire index table and expect that it is still good after the full rebuild
+long actualRowCountAfterCompaction = IndexScrutiny.scrutinizeIndex(conn, tableName, indexName);
++ "(k1 INTEGER NOT NULL, k2 INTEGER NOT NULL, a.v1 INTEGER, b.v2 INTEGER, c.v3 INTEGER, d.v4 INTEGER," +
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v1) INCLUDE(v2, v3)");
++ (RAND.nextBoolean() ? null : (RAND.nextInt() % nIndexValues)) + ", "
++ "(k1 INTEGER NOT NULL, k2 INTEGER NOT NULL, a.v1 INTEGER, b.v2 INTEGER, c.v3 INTEGER, d.v4 INTEGER," +
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v1) INCLUDE(v2, v3)");
+"UPSERT INTO " + tableName + " (k1, k2, b.v2, c.v3, d.v4) VALUES ("
++ (RAND.nextBoolean() ? null : RAND.nextInt()) + ", "

{color:green}+1 core tests{color}. The patch passed unit tests in .

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3662//testReport/
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3662//console

This message is automatically generated.
[jira] [Commented] (PHOENIX-5791) Eliminate false invalid row detection due to concurrent updates
[ https://issues.apache.org/jira/browse/PHOENIX-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17067373#comment-17067373 ]

Hadoop QA commented on PHOENIX-5791:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12997778/PHOENIX-5791.4.x-HBase-1.5.001.patch
against 4.x-HBase-1.5 branch at commit 14a17d8f9b23089bea4c2a910430a73c669bc0fb.
ATTACHMENT ID: 12997778

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 25 new or modified tests.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+private long verifyIndexTable(String tableName, String indexName, Connection conn) throws Exception {
+// Now we rebuild the entire index table and expect that it is still good after the full rebuild
+long actualRowCountAfterCompaction = IndexScrutiny.scrutinizeIndex(conn, tableName, indexName);
++ "(k1 INTEGER NOT NULL, k2 INTEGER NOT NULL, a.v1 INTEGER, b.v2 INTEGER, c.v3 INTEGER, d.v4 INTEGER," +
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v1) INCLUDE(v2, v3)");
++ (RAND.nextBoolean() ? null : (RAND.nextInt() % nIndexValues)) + ", "
++ "(k1 INTEGER NOT NULL, k2 INTEGER NOT NULL, a.v1 INTEGER, b.v2 INTEGER, c.v3 INTEGER, d.v4 INTEGER," +
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v1) INCLUDE(v2, v3)");
+"UPSERT INTO " + tableName + " (k1, k2, b.v2, c.v3, d.v4) VALUES ("
++ (RAND.nextBoolean() ? null : RAND.nextInt()) + ", "

{color:red}-1 core tests{color}. The patch failed these unit tests:
./phoenix-core/target/failsafe-reports/TEST-org.apache.phoenix.end2end.ConcurrentMutationsExtendedIT

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3661//testReport/
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3661//console

This message is automatically generated.
[jira] [Commented] (PHOENIX-5791) Eliminate false invalid row detection due to concurrent updates
[ https://issues.apache.org/jira/browse/PHOENIX-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17067267#comment-17067267 ]

Hadoop QA commented on PHOENIX-5791:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12997767/PHOENIX-5791.4.x-HBase-1.5.001.patch
against 4.x-HBase-1.5 branch at commit 14a17d8f9b23089bea4c2a910430a73c669bc0fb.
ATTACHMENT ID: 12997767

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 25 new or modified tests.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.

{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+private long verifyIndexTable(String tableName, String indexName, Connection conn) throws Exception {
+// Now we rebuild the entire index table and expect that it is still good after the full rebuild
+long actualRowCountAfterCompaction = IndexScrutiny.scrutinizeIndex(conn, tableName, indexName);
++ "(k1 INTEGER NOT NULL, k2 INTEGER NOT NULL, a.v1 INTEGER, b.v2 INTEGER, c.v3 INTEGER, d.v4 INTEGER," +
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v1) INCLUDE(v2, v3)");
++ (RAND.nextBoolean() ? null : (RAND.nextInt() % nIndexValues)) + ", "
++ "(k1 INTEGER NOT NULL, k2 INTEGER NOT NULL, a.v1 INTEGER, b.v2 INTEGER, c.v3 INTEGER, d.v4 INTEGER," +
+conn.createStatement().execute("CREATE INDEX " + indexName + " ON " + tableName + "(v1) INCLUDE(v2, v3)");
+"UPSERT INTO " + tableName + " (k1, k2, b.v2, c.v3, d.v4) VALUES ("
++ (RAND.nextBoolean() ? null : RAND.nextInt()) + ", "

{color:red}-1 core tests{color}. The patch failed these unit tests:
org.apache.phoenix.index.VerifySingleIndexRowTest

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3660//testReport/
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3660//console

This message is automatically generated.
[jira] [Commented] (PHOENIX-5791) Eliminate false invalid row detection due to concurrent updates
[ https://issues.apache.org/jira/browse/PHOENIX-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17067248#comment-17067248 ]

Hadoop QA commented on PHOENIX-5791:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12997762/PHOENIX-5791.4.x-HBase-1.5.001.patch
against 4.x-HBase-1.5 branch at commit 61589a903f8c5176ce46e5af0a83729f4f4c90ec.
ATTACHMENT ID: 12997762

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 25 new or modified tests.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:red}-1 release audit{color}. The applied patch generated 1 release audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java
+- org.apache.hadoop.hbase.TableName.valueOf(pIndexTable.getPhysicalName().getBytes());
+diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
++ * IndexMaintainer.getIndexedColumns() returns the data column references for indexed columns. The data columns are
++ * grouped into three classes, pk columns (data table pk columns), the indexed columns (the columns for which
++ * we want to have indexing; they form the prefix for the primary key for the index table (after salt and tenant id))
++ * and covered columns. The purpose of this method is to find out if all the indexed columns are included in the
+ private boolean hasAllIndexedColumns(IndexMaintainer indexMaintainer, MultiMutation multiMutation) {
+-Bytes.compareTo(CellUtil.cloneQualifier(cell), columnReference.getQualifier() ) == 0) {
+- BatchMutateContext context, long now, PhoenixIndexMetaData indexMetaData)

{color:red}-1 core tests{color}. The patch failed these unit tests:
org.apache.phoenix.index.VerifySingleIndexRowTest

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3659//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3659//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3659//console

This message is automatically generated.
[jira] [Commented] (PHOENIX-5791) Eliminate false invalid row detection due to concurrent updates
[ https://issues.apache.org/jira/browse/PHOENIX-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17067247#comment-17067247 ]

Hadoop QA commented on PHOENIX-5791:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12997762/PHOENIX-5791.4.x-HBase-1.5.001.patch
against 4.x-HBase-1.5 branch at commit 61589a903f8c5176ce46e5af0a83729f4f4c90ec.
ATTACHMENT ID: 12997762

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 25 new or modified tests.

{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.

{color:red}-1 release audit{color}. The applied patch generated 1 release audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}. The patch introduces the following lines longer than 100:
+diff --git a/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java b/phoenix-core/src/it/java/org/apache/phoenix/end2end/ConcurrentMutationsExtendedIT.java
+- org.apache.hadoop.hbase.TableName.valueOf(pIndexTable.getPhysicalName().getBytes());
+diff --git a/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java b/phoenix-core/src/main/java/org/apache/phoenix/hbase/index/IndexRegionObserver.java
++ * IndexMaintainer.getIndexedColumns() returns the data column references for indexed columns. The data columns are
++ * grouped into three classes, pk columns (data table pk columns), the indexed columns (the columns for which
++ * we want to have indexing; they form the prefix for the primary key for the index table (after salt and tenant id))
++ * and covered columns. The purpose of this method is to find out if all the indexed columns are included in the
+ private boolean hasAllIndexedColumns(IndexMaintainer indexMaintainer, MultiMutation multiMutation) {
+-Bytes.compareTo(CellUtil.cloneQualifier(cell), columnReference.getQualifier() ) == 0) {
+- BatchMutateContext context, long now, PhoenixIndexMetaData indexMetaData)

{color:red}-1 core tests{color}. The patch failed these unit tests:
org.apache.phoenix.index.VerifySingleIndexRowTest

Test results: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3658//testReport/
Release audit warnings: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3658//artifact/patchprocess/patchReleaseAuditWarnings.txt
Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3658//console

This message is automatically generated.
[jira] [Commented] (PHOENIX-5791) Eliminate false invalid row detection due to concurrent updates
[ https://issues.apache.org/jira/browse/PHOENIX-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17067212#comment-17067212 ]

Hadoop QA commented on PHOENIX-5791:

{color:red}-1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12997761/PHOENIX-5791.4.x-HBase-1.5.001.patch
against 4.x-HBase-1.5 branch at commit 61589a903f8c5176ce46e5af0a83729f4f4c90ec.
ATTACHMENT ID: 12997761

{color:green}+1 @author{color}. The patch does not contain any @author tags.

{color:green}+1 tests included{color}. The patch appears to include 1 new or modified tests.

{color:red}-1 patch{color}. The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-PHOENIX-Build/3656//console

This message is automatically generated.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)