gjacoby126 commented on code in PR #1517:
URL: https://github.com/apache/phoenix/pull/1517#discussion_r991317265


##########
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java:
##########
@@ -262,7 +262,7 @@ public void preScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<R
             // last possible moment. You need to swap the start/stop and make the
             // start exclusive and the stop inclusive.
             ScanUtil.setupReverseScan(scan);
-            if (scan.getFilter() != null && !(scan.getFilter() instanceof PagedFilter)) {
+            if (!(scan.getFilter() instanceof PagedFilter)) {

Review Comment:
   Is there any problem with a PagedFilter having an internal Filter that is null?



##########
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/BaseScannerRegionObserver.java:
##########
@@ -262,7 +262,7 @@ public void preScannerOpen(org.apache.hadoop.hbase.coprocessor.ObserverContext<R
             // last possible moment. You need to swap the start/stop and make the
             // start exclusive and the stop inclusive.
             ScanUtil.setupReverseScan(scan);
-            if (scan.getFilter() != null && !(scan.getFilter() instanceof PagedFilter)) {
+            if (!(scan.getFilter() instanceof PagedFilter)) {

Review Comment:
   Could you explain more about this change? Was there a previous bug where Scans without a filter weren't being paged correctly?
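   The behavioral difference between the two conditions comes down to Java's `instanceof` semantics for `null`: `null instanceof PagedFilter` is always false, so the old check skipped filter-less scans entirely while the new one sends them down the wrapping branch. A minimal stand-alone sketch (the `Filter`/`PagedFilter` classes here are stand-ins, not the real Phoenix types):

```java
// Demonstrates the null-handling difference between the old and new conditions.
// In Java, `null instanceof X` always evaluates to false, so the new condition
// is true for a scan with no filter, while the old condition was false.
public class InstanceofNullDemo {
    // Stand-ins for the real HBase/Phoenix filter types (illustrative only).
    static class Filter {}
    static class PagedFilter extends Filter {}

    static boolean oldCondition(Filter f) {
        // Old code: a null filter short-circuits to false, so it is never wrapped.
        return f != null && !(f instanceof PagedFilter);
    }

    static boolean newCondition(Filter f) {
        // New code: a null filter is not a PagedFilter, so it takes the wrapping branch.
        return !(f instanceof PagedFilter);
    }

    public static void main(String[] args) {
        System.out.println(oldCondition(null)); // false
        System.out.println(newCondition(null)); // true
    }
}
```

   If that is the intent of the change, it would explain how scans without any filter could previously escape paging.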



##########
phoenix-core/src/it/java/org/apache/phoenix/end2end/PreMatureTimelyAbortScanIt.java:
##########
@@ -0,0 +1,98 @@
+package org.apache.phoenix.end2end;
+
+import org.apache.phoenix.coprocessor.BaseScannerRegionObserver;
+import org.apache.phoenix.exception.SQLExceptionCode;
+import org.apache.phoenix.query.QueryServices;
+import org.apache.phoenix.thirdparty.com.google.common.collect.Maps;
+import org.apache.phoenix.util.ReadOnlyProps;
+import org.junit.BeforeClass;
+import org.junit.Test;
+import org.junit.experimental.categories.Category;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.sql.Connection;
+import java.sql.DriverManager;
+import java.sql.ResultSet;
+import java.sql.SQLException;
+import java.util.Map;
+
+import static org.junit.Assert.*;
+@Category(ParallelStatsDisabledTest.class)
+public class PreMatureTimelyAbortScanIt extends ParallelStatsDisabledIT {
+    static final Logger LOG = LoggerFactory.getLogger(PreMatureTimelyAbortScanIt.class);
+
+    @BeforeClass
+    public static synchronized void doSetup() throws Exception {
+        Map<String, String> props = Maps.newHashMapWithExpectedSize(1);
+        props.put(BaseScannerRegionObserver.PHOENIX_MAX_LOOKBACK_AGE_CONF_KEY, Integer.toString(60*60)); // An hour
+        props.put(QueryServices.USE_STATS_FOR_PARALLELIZATION, Boolean.toString(false));
+        props.put(QueryServices.PHOENIX_SERVER_PAGE_SIZE_MS, Integer.toString(0));
+        setUpTestDriver(new ReadOnlyProps(props.entrySet().iterator()));
+    }
+
+    private String getUniqueUrl() {
+        return url + generateUniqueName();
+    }
+
+    @Test
+    public void testPreMatureScannerAbortForCount() throws Exception {
+
+        try (Connection conn = DriverManager.getConnection(getUniqueUrl())) {
+            conn.createStatement().execute("CREATE TABLE LONG_BUG (ID INTEGER PRIMARY KEY, AMOUNT DECIMAL)");
+        }
+        try (Connection conn = DriverManager.getConnection(getUniqueUrl())) {
+            for (int i = 0; i<500000 ; i++) {
+                int amount = -50000 + i;
+                String s = "UPSERT INTO LONG_BUG (ID, AMOUNT) VALUES( " + i + ", " + amount + ")";
+                conn.createStatement().execute(s);
+                conn.commit();
+            }
+        }
+
+        try {
+            Connection conn = DriverManager.getConnection(getUniqueUrl());
+            Runnable runnable = new Runnable() {
+                @Override
+                public void run() {
+                    try {
+                        ResultSet resultSet = conn.createStatement().executeQuery(

Review Comment:
   Doesn't this thread need to loop, or wait for the other thread to close the connection first? What if thread t gets scheduled to run to completion (and hence fail) before t1 does anything?
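   One way to close that scheduling window is an explicit handshake, so the closing thread cannot run ahead of the query thread. A sketch using `java.util.concurrent.CountDownLatch` (thread names and the logging stand-in are illustrative, not taken from the test):

```java
import java.util.concurrent.CountDownLatch;

// Sketch of removing the race: the query thread signals once it has started,
// and the closer thread waits for that signal before closing the shared
// resource. countDown() happens-before await() returns, so the ordering
// (and visibility of the first append) is guaranteed.
public class LatchHandshakeDemo {
    static String run() throws InterruptedException {
        CountDownLatch queryStarted = new CountDownLatch(1);
        StringBuilder log = new StringBuilder(); // stands in for the shared Connection

        Thread query = new Thread(() -> {
            log.append("query-started;");
            queryStarted.countDown(); // tell the closer the query is running
            // ... long-running scan would happen here ...
        });

        Thread closer = new Thread(() -> {
            try {
                queryStarted.await(); // never close before the query starts
                log.append("closed;");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        closer.start(); // even if scheduled first, it blocks on the latch
        query.start();
        query.join();
        closer.join();
        return log.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // query-started;closed;
    }
}
```

   With the handshake in place, the order of the two events is deterministic regardless of how the scheduler interleaves the threads.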



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
