[ https://issues.apache.org/jira/browse/PHOENIX-3161?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15583219#comment-15583219 ]

James Taylor commented on PHOENIX-3161:
---------------------------------------

I like this approach overall, [~an...@apache.org]. Nice work! Here's some 
specific feedback:
- Does the HBase copy constructor not maintain the rawScan and cacheBlocks values correctly? If not, can you add a comment there and reference the HBase bug for those?
- Don't change the setTimeRange(Scan,long) method, but use the setTimeRange(Scan,long,long) method instead (see the sketch after the diff below). I'm not sure all the callers would expect the min time range to come from the Scan passed in. Will that work for you?
{code}
@@ -142,6 +141,8 @@ public class ScanUtil {
             TreeMap<byte [], NavigableSet<byte []>> existingMap = (TreeMap<byte[], NavigableSet<byte[]>>)scan.getFamilyMap();
             Map<byte [], NavigableSet<byte []>> clonedMap = new TreeMap<byte [], NavigableSet<byte []>>(existingMap);
             newScan.setFamilyMap(clonedMap);
+            newScan.setRaw(scan.isRaw());
+            newScan.setCacheBlocks(scan.getCacheBlocks());
             // Carry over the reversed attribute
             newScan.setReversed(scan.isReversed());
             newScan.setSmall(scan.isSmall());
@@ -269,7 +270,7 @@ public class ScanUtil {
 
     public static void setTimeRange(Scan scan, long ts) {
         try {
-            scan.setTimeRange(MetaDataProtocol.MIN_TABLE_TIMESTAMP, ts);
+            scan.setTimeRange(scan.getTimeRange().getMin(), ts);
         } catch (IOException e) {
             throw new RuntimeException(e);
         }
{code}
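
To make that concrete, here's a rough, untested sketch of the shape I'm suggesting for ScanUtil (treat the three-argument overload as illustrative; it may or may not already exist in your patch):
{code}
    // Existing helper stays as-is: scans from the beginning of time up to ts.
    public static void setTimeRange(Scan scan, long ts) {
        try {
            scan.setTimeRange(MetaDataProtocol.MIN_TABLE_TIMESTAMP, ts);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    // Overload for callers (e.g. the rebuild path) that need an explicit lower bound,
    // so existing callers keep getting MIN_TABLE_TIMESTAMP as the minimum.
    public static void setTimeRange(Scan scan, long minTs, long maxTs) {
        try {
            scan.setTimeRange(minTs, maxTs);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
{code}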
- I don't think this fix is correct (and maybe you already made a related fix that we should look at too, though perhaps I reworked that already?). We want minTimeStamp to be the smallest value of all the kv.getTimestamp() values we see. It's basically "how far back in time do we need to query" (see the sketch after the diff below).
{code}
diff --git a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
index c29d7b4..eb73d6b 100644
--- a/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
+++ b/phoenix-core/src/main/java/org/apache/phoenix/index/PhoenixIndexFailurePolicy.java
@@ -129,7 +129,7 @@ public class PhoenixIndexFailurePolicy extends DelegateIndexFailurePolicy {
                 for (Mutation m : mutations) {
                     for (List<Cell> kvs : m.getFamilyCellMap().values()) {
                         for (Cell kv : kvs) {
-                            if (minTimeStamp == 0 || (kv.getTimestamp() >= 0 && minTimeStamp < kv.getTimestamp())) {
+                            if (minTimeStamp == 0 || (kv.getTimestamp() >= 0 && minTimeStamp > kv.getTimestamp())) {
                                 minTimeStamp = kv.getTimestamp();
                             }
                         }
{code}
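
Put another way, the intent is for minTimeStamp to end up as the oldest cell timestamp across all the mutations. A standalone illustration of that intent (sketch only, using the same names as the surrounding code):
{code}
    // Keep the oldest (smallest) non-negative cell timestamp seen across all mutations;
    // that timestamp is how far back in time the rebuild needs to query.
    long minTimeStamp = 0;
    for (Mutation m : mutations) {
        for (List<Cell> kvs : m.getFamilyCellMap().values()) {
            for (Cell kv : kvs) {
                long ts = kv.getTimestamp();
                if (minTimeStamp == 0 || (ts >= 0 && ts < minTimeStamp)) {
                    minTimeStamp = ts;
                }
            }
        }
    }
{code}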
- I thought that region.batchMutate() won't invoke coprocessors the way htable.batch() does? Do the existing tests pass with your patch? FWIW, we'd like to change the coprocessor code to use htable.batch() instead (see PHOENIX-3271) so that we can execute UPSERT SELECT on the server side instead of bringing all the data back to the client side, even if the source and target tables are different (if auto commit is on). Would love it if you could tackle that one next.
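
For reference, here's roughly the direction I mean for PHOENIX-3271. Just a sketch of a coprocessor-side helper; writeToTargetTable, targetTableName, and the mutations list are placeholders, not actual Phoenix code:
{code}
    // Sketch: send the mutations through an HTableInterface obtained from the
    // coprocessor environment so they take the normal client write path (and thus
    // invoke the target table's coprocessors), rather than region.batchMutate().
    private void writeToTargetTable(RegionCoprocessorEnvironment env, TableName targetTableName,
            List<Mutation> mutations) throws IOException, InterruptedException {
        HTableInterface target = env.getTable(targetTableName);
        try {
            Object[] results = new Object[mutations.size()];
            target.batch(mutations, results);
        } finally {
            target.close();
        }
    }
{code}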

> Check possibility of moving rebuilding code to coprocessor of data table.
> -------------------------------------------------------------------------
>
>                 Key: PHOENIX-3161
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-3161
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: Ankit Singhal
>            Assignee: Ankit Singhal
>             Fix For: 4.9.0
>
>         Attachments: PHOENIX-3161.patch
>
>



