[GitHub] [phoenix] kadirozde commented on a change in pull request #679: PHOENIX-5645 - BaseScannerRegionObserver should prevent compaction from purg…

2020-01-14 Thread GitBox
kadirozde commented on a change in pull request #679: PHOENIX-5645 - BaseScannerRegionObserver should prevent compaction from purg…
URL: https://github.com/apache/phoenix/pull/679#discussion_r366627276
 
 

 ##
 File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/MaxLookbackIT.java
 ##
 @@ -213,62 +241,49 @@ public void testRecentMaxVersionsNotCompactedAway() throws Exception {
         String thirdValue = "ghi";
         try (Connection conn = DriverManager.getConnection(getUrl())) {
             String dataTableName = generateUniqueName();
-            String indexStem = generateUniqueName();
-            createTableAndIndexes(conn, dataTableName, indexStem, versions);
-            long afterInsertSCN = org.apache.phoenix.util.EnvironmentEdgeManager.currentTimeMillis();
+            createTable(dataTableName);
+            //increment by 10 min to make sure we don't "look back" past table creation
+            injectEdge.incValue(WAIT_AFTER_TABLE_CREATION);
+            populateTable(dataTableName);
+            injectEdge.incValue(1); //increment by 1 so we can see our write
+            long afterInsertSCN = EnvironmentEdgeManager.currentTimeMillis();
             //make sure table and index metadata is set up right for versions
             TableName dataTable = TableName.valueOf(dataTableName);
             assertTableHasVersions(conn, dataTable, versions);
-            String fullIndexName = indexStem + "1";
-            TableName indexTable = TableName.valueOf(fullIndexName);
 
 Review comment:
  I suggest adding the indexes back and applying the same operations/checks to the index tables along with their data tables.
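
A rough sketch of this suggestion, for illustration only and not the committed test code: restore the index-creation helper removed in this hunk and mirror each data-table check on the index table. The helper names (createTableAndIndexes, assertTableHasVersions, assertRawRowCount, flush, majorCompact) and the "1" index-name suffix come from the surrounding test; expectedRowCount and the exact pairing of calls are assumptions. The fragment is meant to slot into the test body, so it reuses the test's conn, versions, and injectEdge fields.

    // Sketch only: assumes the pre-change helpers are restored and behave as before.
    String indexStem = generateUniqueName();
    createTableAndIndexes(conn, dataTableName, indexStem, versions);
    TableName dataTable = TableName.valueOf(dataTableName);
    TableName indexTable = TableName.valueOf(indexStem + "1"); // old naming convention for the first index

    // mirror each data-table assertion on the index table
    assertTableHasVersions(conn, dataTable, versions);
    assertTableHasVersions(conn, indexTable, versions);

    // flush and major-compact both tables, then verify that nothing still inside
    // the max lookback window was purged from either one
    int expectedRowCount = 2; // hypothetical; matches the two rows the test inserts
    flush(dataTable);
    flush(indexTable);
    assertRawRowCount(conn, dataTable, expectedRowCount);
    assertRawRowCount(conn, indexTable, expectedRowCount);
    injectEdge.incValue(1); // fresh timestamp for the compaction
    majorCompact(dataTable, EnvironmentEdgeManager.currentTimeMillis());
    majorCompact(indexTable, EnvironmentEdgeManager.currentTimeMillis());
    assertRawRowCount(conn, dataTable, expectedRowCount);
    assertRawRowCount(conn, indexTable, expectedRowCount);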


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [phoenix] kadirozde commented on a change in pull request #679: PHOENIX-5645 - BaseScannerRegionObserver should prevent compaction from purg…

2020-01-14 Thread GitBox
kadirozde commented on a change in pull request #679: PHOENIX-5645 - BaseScannerRegionObserver should prevent compaction from purg…
URL: https://github.com/apache/phoenix/pull/679#discussion_r366622507
 
 

 ##
 File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/MaxLookbackIT.java
 ##
 @@ -213,62 +241,49 @@ public void testRecentMaxVersionsNotCompactedAway() throws Exception {
         String thirdValue = "ghi";
         try (Connection conn = DriverManager.getConnection(getUrl())) {
             String dataTableName = generateUniqueName();
-            String indexStem = generateUniqueName();
-            createTableAndIndexes(conn, dataTableName, indexStem, versions);
-            long afterInsertSCN = org.apache.phoenix.util.EnvironmentEdgeManager.currentTimeMillis();
+            createTable(dataTableName);
+            //increment by 10 min to make sure we don't "look back" past table creation
+            injectEdge.incValue(WAIT_AFTER_TABLE_CREATION);
+            populateTable(dataTableName);
+            injectEdge.incValue(1); //increment by 1 so we can see our write
+            long afterInsertSCN = EnvironmentEdgeManager.currentTimeMillis();
             //make sure table and index metadata is set up right for versions
             TableName dataTable = TableName.valueOf(dataTableName);
             assertTableHasVersions(conn, dataTable, versions);
-            String fullIndexName = indexStem + "1";
-            TableName indexTable = TableName.valueOf(fullIndexName);
 
 Review comment:
  What was the reason for removing the index tables from the tests? Wasn't this JIRA originally about index tables?
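
For context on the injectEdge calls in the hunk above: the rewritten test no longer sleeps, it advances an injected clock. Below is a minimal, standalone sketch of that pattern, assuming HBase's ManualEnvironmentEdge and EnvironmentEdgeManager utilities (Phoenix also ships its own org.apache.phoenix.util.EnvironmentEdgeManager with a matching currentTimeMillis()); the class name InjectedClockSketch is illustrative, not from the PR.

    import org.apache.hadoop.hbase.util.EnvironmentEdgeManager;
    import org.apache.hadoop.hbase.util.ManualEnvironmentEdge;

    public class InjectedClockSketch {
        public static void main(String[] args) {
            // Swap the real clock for one the test can move forward on demand.
            ManualEnvironmentEdge clock = new ManualEnvironmentEdge();
            clock.setValue(System.currentTimeMillis());
            EnvironmentEdgeManager.injectEdge(clock);
            try {
                long before = EnvironmentEdgeManager.currentTime();
                // "Wait" ten minutes without sleeping: anything reading time through
                // EnvironmentEdgeManager now sees the advanced value.
                clock.incValue(10 * 60 * 1000L);
                long after = EnvironmentEdgeManager.currentTime();
                System.out.println("clock advanced by " + (after - before) + " ms");
            } finally {
                // Restore the default system clock so unrelated code is unaffected.
                EnvironmentEdgeManager.reset();
            }
        }
    }

Advancing an injected clock instead of calling Thread.sleep keeps the max-lookback and TTL scenarios fast and deterministic, which is presumably why the test was restructured this way.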




[GitHub] [phoenix] kadirozde commented on a change in pull request #679: PHOENIX-5645 - BaseScannerRegionObserver should prevent compaction from purg…

2020-01-14 Thread GitBox
kadirozde commented on a change in pull request #679: PHOENIX-5645 - BaseScannerRegionObserver should prevent compaction from purg…
URL: https://github.com/apache/phoenix/pull/679#discussion_r366620077
 
 

 ##
 File path: phoenix-core/src/it/java/org/apache/phoenix/end2end/MaxLookbackIT.java
 ##
 @@ -168,36 +184,48 @@ public void testTTLAndMaxLookbackAge() throws Exception {
         conf.setLong(HRegion.MEMSTORE_PERIODIC_FLUSH_INTERVAL, 0L);
         try (Connection conn = DriverManager.getConnection(getUrl())) {
             String dataTableName = generateUniqueName();
-            String indexStem = generateUniqueName();
-            createTableAndIndexes(conn, dataTableName, indexStem);
-            long afterFirstInsertSCN = org.apache.phoenix.util.EnvironmentEdgeManager.currentTimeMillis();
+            createTable(dataTableName);
+            //increment by 10 min to make sure we don't "look back" past table creation
+            injectEdge.incValue(WAIT_AFTER_TABLE_CREATION);
+            populateTable(dataTableName);
+            injectEdge.incValue(1);
+            long afterFirstInsertSCN = EnvironmentEdgeManager.currentTimeMillis();
             TableName dataTable = TableName.valueOf(dataTableName);
             assertTableHasTtl(conn, dataTable, ttl);
-            String fullIndexName = indexStem + "1";
-            TableName indexTable = TableName.valueOf(fullIndexName);
-            assertTableHasTtl(conn, indexTable, ttl);
-
             //first make sure we inserted correctly
-            String sql = String.format("SELECT val2 FROM %s WHERE val1 = 'ab'", dataTableName);
-            assertExplainPlan(conn, sql, dataTableName, fullIndexName);
+            String sql = String.format("SELECT val2 FROM %s WHERE id = 'a'", dataTableName);
+            //  assertExplainPlan(conn, sql, dataTableName, fullIndexName);
             assertRowExistsAtSCN(getUrl(),sql, afterFirstInsertSCN, true);
             int originalRowCount = 2;
-            assertRawRowCount(conn, indexTable, originalRowCount);
+            assertRawRowCount(conn, dataTable, originalRowCount);
             //force a flush
-            flush(indexTable);
+            flush(dataTable);
             //flush shouldn't have changed it
-            assertRawRowCount(conn, indexTable, originalRowCount);
+            assertRawRowCount(conn, dataTable, originalRowCount);
+            // assertExplainPlan(conn, sql, dataTableName, fullIndexName);
+            long timeToSleep = (MAX_LOOKBACK_AGE * 1000) -
+                (EnvironmentEdgeManager.currentTimeMillis() - afterFirstInsertSCN);
+            if (timeToSleep > 0) {
+                injectEdge.incValue(timeToSleep);
+                //Thread.sleep(timeToSleep);
+            }
+            //make sure it's still on disk
+            assertRawRowCount(conn, dataTable, originalRowCount);
+            injectEdge.incValue(1); //get a new timestamp for compaction
+            majorCompact(dataTable, EnvironmentEdgeManager.currentTimeMillis());
+            //nothing should have been purged by this major compaction
+            assertRawRowCount(conn, dataTable, originalRowCount);
             //now wait the TTL
-            Thread.sleep((ttl +1) * 1000);
-            long afterTTLExpiresSCN = org.apache.phoenix.util.EnvironmentEdgeManager.currentTimeMillis();
-            assertExplainPlan(conn, sql, dataTableName, fullIndexName);
-            //make sure we can't see it after expiration from masking
-            assertRowExistsAtSCN(getUrl(), sql, afterTTLExpiresSCN, false);
-            //but it's still on disk
-            assertRawRowCount(conn, indexTable, originalRowCount);
-            long beforeMajorCompactSCN = org.apache.phoenix.util.EnvironmentEdgeManager.currentTimeMillis();
-            majorCompact(indexTable, beforeMajorCompactSCN);
-            assertRawRowCount(conn, indexTable, 0);
+            timeToSleep = (ttl * 1000) -
 
 Review comment:
  nit: `timeToAdvance` sounds better than `timeToSleep` to me.
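
With that rename, the fragment above would read roughly as follows. This is only a sketch of the suggested wording, reusing the test's injectEdge, MAX_LOOKBACK_AGE, and afterFirstInsertSCN; it is not a functional change.

    long timeToAdvance = (MAX_LOOKBACK_AGE * 1000) -
            (EnvironmentEdgeManager.currentTimeMillis() - afterFirstInsertSCN);
    if (timeToAdvance > 0) {
        // advance the injected test clock instead of sleeping
        injectEdge.incValue(timeToAdvance);
    }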

