[jira] [Comment Edited] (PHOENIX-6671) Avoid ShortCirtuation Coprocessor Connection with HBase 2.x

2022-03-18 Thread Kadir OZDEMIR (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509130#comment-17509130
 ] 

Kadir OZDEMIR edited comment on PHOENIX-6671 at 3/19/22, 12:42 AM:
---

[~larsh], the patch looks good to me. Would you please generate a PR to 
trigger the pre-checkin tests, just to be sure this does not lead to some IT 
failures? I think submitting a patch does not trigger these tests anymore. 


was (Author: kozdemir):
[~larsh],  the patch looks good to me. Would you please you generate a PR just 
to trigger the pre-checkin tests (just to be sure this does not lead to some IT 
failures)? I think submitting a patch does not trigger these tests any more. 

> Avoid ShortCirtuation Coprocessor Connection with HBase 2.x
> ---
>
> Key: PHOENIX-6671
> URL: https://issues.apache.org/jira/browse/PHOENIX-6671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6671-5.1.txt
>
>
> See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.
> HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
> be able to fix it there, but with all the work the RPC handlers perform now 
> (closing scanners, resolving the current user, etc.), I doubt we'll get that 100% 
> right. HBase 3 has removed this functionality.
> Even with HBase 2, which does not have the async protobuf code, I could 
> hardly see any performance improvement from circumventing the RPC stack in 
> case the target of a Get or Scan is local. Even in the most ideal conditions 
> where everything is local, there was no improvement outside of noise.
> I suggest we do not use ShortCircuited Connections in Phoenix 5+.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6671) Avoid ShortCirtuation Coprocessor Connection with HBase 2.x

2022-03-18 Thread Kadir OZDEMIR (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509130#comment-17509130
 ] 

Kadir OZDEMIR commented on PHOENIX-6671:


[~larsh], the patch looks good to me. Would you please generate a PR to 
trigger the pre-checkin tests, just to be sure this does not lead to some IT 
failures? I think submitting a patch does not trigger these tests anymore. 

> Avoid ShortCirtuation Coprocessor Connection with HBase 2.x
> ---
>
> Key: PHOENIX-6671
> URL: https://issues.apache.org/jira/browse/PHOENIX-6671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6671-5.1.txt
>
>
> See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.
> HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
> be able to fix it there, but with all the work the RPC handlers perform now 
> (closing scanners, resolving the current user, etc.), I doubt we'll get that 100% 
> right. HBase 3 has removed this functionality.
> Even with HBase 2, which does not have the async protobuf code, I could 
> hardly see any performance improvement from circumventing the RPC stack in 
> case the target of a Get or Scan is local. Even in the most ideal conditions 
> where everything is local, there was no improvement outside of noise.
> I suggest we do not use ShortCircuited Connections in Phoenix 5+.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6671) Avoid ShortCirtuation Coprocessor Connection with HBase 2.x

2022-03-18 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509105#comment-17509105
 ] 

Lars Hofhansl commented on PHOENIX-6671:


[~kozdemir], [~apurtell], FYI.

> Avoid ShortCirtuation Coprocessor Connection with HBase 2.x
> ---
>
> Key: PHOENIX-6671
> URL: https://issues.apache.org/jira/browse/PHOENIX-6671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6671-5.1.txt
>
>
> See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.
> HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
> be able to fix it there, but with all the work the RPC handlers perform now 
> (closing scanners, resolving the current user, etc.), I doubt we'll get that 100% 
> right. HBase 3 has removed this functionality.
> Even with HBase, which does not have the async protobuf code, I could hardly 
> see any performance improvement from circumventing the RPC stack in case the 
> target of a Get or Scan is local. Even in the most ideal conditions where 
> everything is local, there was no improvement outside of noise.
> I suggest we do not use ShortCircuited Connections in Phoenix 5+.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6671) Avoid ShortCirtuation Coprocessor Connection with HBase 2.x

2022-03-18 Thread Lars Hofhansl (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509104#comment-17509104
 ] 

Lars Hofhansl commented on PHOENIX-6671:


One-line change. Just get a regular Connection.
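
For illustration only (this is not the attached 6671-5.1.txt patch): a minimal sketch of 
what "just get a regular Connection" can look like in an HBase 2.x coprocessor, assuming 
the RegionCoprocessorEnvironment#createConnection API. The CoprocessorConnectionUtil class 
and getServerConnection method names are made up for this example.

{code:java}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;

// Hypothetical helper, not the actual Phoenix code.
final class CoprocessorConnectionUtil {

    private CoprocessorConnectionUtil() {
    }

    /**
     * Returns a connection that goes through the regular RPC stack instead of a
     * short-circuited coprocessor connection.
     */
    static Connection getServerConnection(RegionCoprocessorEnvironment env) throws IOException {
        // createConnection() builds a normal server-side Connection; nothing here
        // bypasses the RPC handlers for region-local Gets or Scans.
        return env.createConnection(env.getConfiguration());
    }
}
{code}

The trade-off is simply that region-local Gets and Scans go through the normal RPC path, 
which the benchmark described in the issue suggests costs little.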

> Avoid ShortCirtuation Coprocessor Connection with HBase 2.x
> ---
>
> Key: PHOENIX-6671
> URL: https://issues.apache.org/jira/browse/PHOENIX-6671
> Project: Phoenix
>  Issue Type: Bug
>Reporter: Lars Hofhansl
>Priority: Major
> Attachments: 6671-5.1.txt
>
>
> See PHOENIX-6501, PHOENIX-6458, and HBASE-26812.
> HBase's ShortCircuit Connections are fundamentally broken in HBase 2. We might 
> be able to fix it there, but with all the work the RPC handlers perform now 
> (closing scanners, resolving the current user, etc.), I doubt we'll get that 100% 
> right. HBase 3 has removed this functionality.
> Even with HBase, which does not have the async protobuf code, I could hardly 
> see any performance improvement from circumventing the RPC stack in case the 
> target of a Get or Scan is local. Even in the most ideal conditions where 
> everything is local, there was no improvement outside of noise.
> I suggest we do not use ShortCircuited Connections in Phoenix 5+.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6669) RVC returns a wrong result

2022-03-18 Thread Xinyi Yan (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509077#comment-17509077
 ] 

Xinyi Yan commented on PHOENIX-6669:


Right. The above query returns 0 rows, which is correct. 

> RVC returns a wrong result
> --
>
> Key: PHOENIX-6669
> URL: https://issues.apache.org/jira/browse/PHOENIX-6669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Priority: Major
>
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY (
>     PK1 VARCHAR NOT NULL,
>     PK2 BIGINT NOT NULL,
>     PK3 BIGINT NOT NULL,
>     PK4 VARCHAR NOT NULL,
>     COL1 BIGINT,
>     COL2 INTEGER,
>     COL3 VARCHAR,
>     COL4 VARCHAR,    CONSTRAINT PK PRIMARY KEY
>     (
>         PK1,
>         PK2,
>         PK3,
>         PK4
>     )
> );UPSERT INTO DUMMY (PK1, PK4, COL1, PK2, COL2, PK3, COL3, COL4)
>             VALUES ('xx', 'xid1', 0, 7, 7, 7, 'INSERT', null);
>  {code}
> The non-RVC query returns no row, but the RVC query returns a wrong result.
> {code:java}
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where PK1 ='xx'
> . . . . . . . . . . . . .> and (PK1 > 'xx' AND PK1 <= 'xx')
> . . . . . . . . . . . . .> and (PK2 > 5 AND PK2 <=5)
> . . . . . . . . . . . . .> and (PK3 > 2 AND PK3 <=2);
> +------+
> | PK2  |
> +------+
> +------+
>  No rows selected (0.022 seconds)
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where (PK1 = 'xx')
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) > ('xx', 5, 2)
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +------+
> | PK2  |
> +------+
> | 7    |
> +------+
> 1 row selected (0.033 seconds) {code}
> {code:java}
> 0: jdbc:phoenix:localhost> EXPLAIN select PK2 from DUMMY where (PK1 = 'xx') 
> and (PK1, PK2, PK3) > ('xx', 5, 2) and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +--------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                     | EST_BYTES_READ | EST_ROWS_READ |
> +--------------------------------------------------------------------------+----------------+---------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['xx']   | null           | null          |
> |     SERVER FILTER BY FIRST KEY ONLY                                      | null           | null          |
> +--------------------------------------------------------------------------+----------------+---------------+
> 2 rows selected (0.024 seconds) 
> 0: jdbc:phoenix:localhost> explain select PK2 from DUMMY where PK1 ='xx' and 
> (PK1 > 'xx' AND PK1 <= 'xx') and (PK2 > 5 AND PK2 <=5) and (PK3 > 2 AND PK3 
> <=2);
> +--------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                     | EST_BYTES_READ | EST_ROWS_READ |
> +--------------------------------------------------------------------------+----------------+---------------+
> | DEGENERATE SCAN OVER DUMMY                                               | null           | null          |
> +--------------------------------------------------------------------------+----------------+---------------+
> 1 row selected (0.015 seconds){code}
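
As a side note on why zero rows is the expected answer: row value constructors compare 
component by component, lexicographically, so no row can be both strictly greater than 
('xx', 5, 2) and less than or equal to ('xx', 5, 2). A small, self-contained Java 
illustration of that tuple comparison follows; the class and method names are made up 
and nothing here is Phoenix code.

{code:java}
// Illustration only: lexicographic comparison of (PK1, PK2, PK3) against a bound.
final class RvcComparisonIllustration {

    // Compare (a1, a2, a3) with (b1, b2, b3) the way an RVC does: the first
    // component decides, and later components only break ties.
    static int compareTuple(String a1, long a2, long a3, String b1, long b2, long b3) {
        int c = a1.compareTo(b1);
        if (c != 0) {
            return c;
        }
        c = Long.compare(a2, b2);
        if (c != 0) {
            return c;
        }
        return Long.compare(a3, b3);
    }

    public static void main(String[] args) {
        // The upserted row is ('xx', 7, 7); the bound in both predicates is ('xx', 5, 2).
        int cmp = compareTuple("xx", 7L, 7L, "xx", 5L, 2L);
        boolean greaterThanBound = cmp > 0;   // satisfies (PK1, PK2, PK3) >  ('xx', 5, 2)
        boolean atMostBound = cmp <= 0;       // satisfies (PK1, PK2, PK3) <= ('xx', 5, 2)
        // The two predicates are mutually exclusive, so their conjunction is unsatisfiable
        // and the RVC query should return 0 rows, matching the degenerate non-RVC plan.
        System.out.println(greaterThanBound && atMostBound); // prints false
    }
}
{code}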



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6669) RVC returns a wrong result

2022-03-18 Thread Gokcen Iskender (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509018#comment-17509018
 ] 

Gokcen Iskender commented on PHOENIX-6669:
--

select PK2 from DUMMY where (PK1, PK2, PK3) > ('xx', 5, 2) and (PK1, PK2, PK3) 
<= ('xx', 5, 2) According to [~yanxinyi] 

> RVC returns a wrong result
> --
>
> Key: PHOENIX-6669
> URL: https://issues.apache.org/jira/browse/PHOENIX-6669
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.16.1
>Reporter: Xinyi Yan
>Priority: Major
>
> {code:java}
> CREATE TABLE IF NOT EXISTS DUMMY (
>     PK1 VARCHAR NOT NULL,
>     PK2 BIGINT NOT NULL,
>     PK3 BIGINT NOT NULL,
>     PK4 VARCHAR NOT NULL,
>     COL1 BIGINT,
>     COL2 INTEGER,
>     COL3 VARCHAR,
>     COL4 VARCHAR,    CONSTRAINT PK PRIMARY KEY
>     (
>         PK1,
>         PK2,
>         PK3,
>         PK4
>     )
> );UPSERT INTO DUMMY (PK1, PK4, COL1, PK2, COL2, PK3, COL3, COL4)
>             VALUES ('xx', 'xid1', 0, 7, 7, 7, 'INSERT', null);
>  {code}
> The non-RVC query returns no row, but the RVC query returns a wrong result.
> {code:java}
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where PK1 ='xx'
> . . . . . . . . . . . . .> and (PK1 > 'xx' AND PK1 <= 'xx')
> . . . . . . . . . . . . .> and (PK2 > 5 AND PK2 <=5)
> . . . . . . . . . . . . .> and (PK3 > 2 AND PK3 <=2);
> +------+
> | PK2  |
> +------+
> +------+
>  No rows selected (0.022 seconds)
> 0: jdbc:phoenix:localhost> select PK2
> . . . . . . . . . . . . .> from DUMMY
> . . . . . . . . . . . . .> where (PK1 = 'xx')
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) > ('xx', 5, 2)
> . . . . . . . . . . . . .> and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +------+
> | PK2  |
> +------+
> | 7    |
> +------+
> 1 row selected (0.033 seconds) {code}
> {code:java}
> 0: jdbc:phoenix:localhost> EXPLAIN select PK2 from DUMMY where (PK1 = 'xx') 
> and (PK1, PK2, PK3) > ('xx', 5, 2) and (PK1, PK2, PK3) <= ('xx', 5, 2);
> +--------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                     | EST_BYTES_READ | EST_ROWS_READ |
> +--------------------------------------------------------------------------+----------------+---------------+
> | CLIENT 1-CHUNK PARALLEL 1-WAY ROUND ROBIN RANGE SCAN OVER DUMMY ['xx']   | null           | null          |
> |     SERVER FILTER BY FIRST KEY ONLY                                      | null           | null          |
> +--------------------------------------------------------------------------+----------------+---------------+
> 2 rows selected (0.024 seconds) 
> 0: jdbc:phoenix:localhost> explain select PK2 from DUMMY where PK1 ='xx' and 
> (PK1 > 'xx' AND PK1 <= 'xx') and (PK2 > 5 AND PK2 <=5) and (PK3 > 2 AND PK3 
> <=2);
> +--------------------------------------------------------------------------+----------------+---------------+
> | PLAN                                                                     | EST_BYTES_READ | EST_ROWS_READ |
> +--------------------------------------------------------------------------+----------------+---------------+
> | DEGENERATE SCAN OVER DUMMY                                               | null           | null          |
> +--------------------------------------------------------------------------+----------------+---------------+
> 1 row selected (0.015 seconds){code}



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6670) Optimize PhoenixKeyValueUtil#maybeCopyCell

2022-03-18 Thread Istvan Toth (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17509001#comment-17509001
 ] 

Istvan Toth commented on PHOENIX-6670:
--

From what I learned while poring over the HBase codebase, the only reason to 
copy a cell to a KV is to decouple the cell from the underlying ByteBuffer's 
lifecycle.

Otherwise, AFAIK, Cells and KVs are interchangeable (save for some perf 
optimizations that are used for off-heap cells).

I may be missing something, and there may be other reasons for doing that, but 
I can't think of any right now.

Maybe in early versions of the off-heap work all Cells were off-heap and the 
code was just never updated?

I haven't actually tested this, and we may also need to update some other code 
in Phoenix to allow Cells instead of requiring KVs explicitly, but I wouldn't 
expect that to cause problems.

> Optimize PhoenixKeyValueUtil#maybeCopyCell
> --
>
> Key: PHOENIX-6670
> URL: https://issues.apache.org/jira/browse/PHOENIX-6670
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Priority: Major
>
> PhoenixKeyValueUtil#maybeCopyCell copies every cell that is not a KeyValue to 
> a KeyValue.
> Its point is to copy off-heap cells to the heap, so that the values are kept 
> after the backing ByteBuffer is freed, and we avoid use-after-free errors.
> However, checking if a Cell is a KeyValue instance is a poor indication for 
> that, as there are a lot of Cell types that are not KeyValues, but are stored 
> on the heap, and do not need to be copied.
> Copying only ByteBufferExtendedCell instances instead would potentially be a 
> significant performance gain.
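
A minimal sketch of the check proposed above, assuming HBase's ByteBufferExtendedCell 
type and KeyValueUtil#copyToNewKeyValue; the CellCopySketch class name is hypothetical 
and this is not the actual PhoenixKeyValueUtil code.

{code:java}
import org.apache.hadoop.hbase.ByteBufferExtendedCell;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.KeyValueUtil;

// Hypothetical sketch, not the actual PhoenixKeyValueUtil implementation.
final class CellCopySketch {

    private CellCopySketch() {
    }

    /**
     * Copies a cell onto the heap only when it may be backed by an off-heap
     * ByteBuffer; on-heap non-KeyValue cells are returned unchanged.
     */
    static Cell maybeCopyCell(Cell cell) {
        if (cell == null) {
            return null;
        }
        if (cell instanceof ByteBufferExtendedCell) {
            // Deep copy so the value survives after the backing ByteBuffer is freed.
            return KeyValueUtil.copyToNewKeyValue(cell);
        }
        // Already on-heap: no defensive copy needed.
        return cell;
    }
}
{code}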



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6632) Migrate connectors to Spark-3

2022-03-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508853#comment-17508853
 ] 

ASF GitHub Bot commented on PHOENIX-6632:
-

ashwinb1998 commented on pull request #69:
URL: 
https://github.com/apache/phoenix-connectors/pull/69#issuecomment-1072085140


   Sure @stoty, will do


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Migrate connectors to Spark-3
> -
>
> Key: PHOENIX-6632
> URL: https://issues.apache.org/jira/browse/PHOENIX-6632
> Project: Phoenix
>  Issue Type: Improvement
>  Components: spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Ashwin Balasubramani
>Assignee: Istvan Toth
>Priority: Major
>
> With Spark 3, the DataSourceV2 API has undergone major changes, introducing a 
> new TableProvider interface. These changes give the data source developer more 
> control and better integration with the Spark optimizer.
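
For readers unfamiliar with the Spark 3 API mentioned above, here is a bare-bones sketch 
of what a TableProvider implementation looks like. The class name and schema are made up, 
and a real connector would also implement SupportsRead/SupportsWrite on the returned Table; 
this is only an illustration of the interface, not the Phoenix connector code.

{code:java}
import java.util.Collections;
import java.util.Map;
import java.util.Set;

import org.apache.spark.sql.connector.catalog.Table;
import org.apache.spark.sql.connector.catalog.TableCapability;
import org.apache.spark.sql.connector.catalog.TableProvider;
import org.apache.spark.sql.connector.expressions.Transform;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;
import org.apache.spark.sql.util.CaseInsensitiveStringMap;

// Hypothetical skeleton, not the actual Phoenix Spark 3 connector.
public class SketchTableProvider implements TableProvider {

    @Override
    public StructType inferSchema(CaseInsensitiveStringMap options) {
        // A real connector would derive this from the Phoenix table metadata.
        return new StructType()
                .add("ID", DataTypes.LongType)
                .add("NAME", DataTypes.StringType);
    }

    @Override
    public Table getTable(StructType schema, Transform[] partitioning, Map<String, String> properties) {
        return new Table() {
            @Override
            public String name() {
                return "sketch_table";
            }

            @Override
            public StructType schema() {
                return schema;
            }

            @Override
            public Set<TableCapability> capabilities() {
                // Empty here; a real connector would advertise BATCH_READ / BATCH_WRITE
                // and implement SupportsRead / SupportsWrite accordingly.
                return Collections.emptySet();
            }
        };
    }
}
{code}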



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [phoenix-connectors] ashwinb1998 commented on pull request #69: PHOENIX-6632 Migrate/Update connectors to spark-3

2022-03-18 Thread GitBox


ashwinb1998 commented on pull request #69:
URL: 
https://github.com/apache/phoenix-connectors/pull/69#issuecomment-1072085140


   Sure @stoty, will do


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (PHOENIX-3383) Comparison between descending row keys used in RVC is reverse

2022-03-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-3383?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508842#comment-17508842
 ] 

ASF GitHub Bot commented on PHOENIX-3383:
-

gokceni opened a new pull request #1404:
URL: https://github.com/apache/phoenix/pull/1404


   …VC is reverse"
   
   This reverts commit aee568beb02cdf983bb10889902c338ea016e6c9.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Comparison between descending row keys used in RVC is reverse
> -
>
> Key: PHOENIX-3383
> URL: https://issues.apache.org/jira/browse/PHOENIX-3383
> Project: Phoenix
>  Issue Type: Bug
>Reporter: James R. Taylor
>Assignee: James R. Taylor
>Priority: Major
>  Labels: DESC
> Fix For: 4.15.0, 5.1.0
>
> Attachments: PHOENIX-3383-wip1.patch, PHOENIX-3383-wip5.patch, 
> PHOENIX-3383-wip6.patch, PHOENIX-3383-wip7.patch, PHOENIX-3383_v1.patch, 
> PHOENIX-3383_v10.patch, PHOENIX-3383_v11.patch, PHOENIX-3383_v12.patch, 
> PHOENIX-3383_v13.patch, PHOENIX-3383_v2.patch, PHOENIX-3383_v3.patch, 
> PHOENIX-3383_v4.patch, PHOENIX-3383_v5.patch, PHOENIX-3383_v6.patch, 
> PHOENIX-3383_v7.patch, PHOENIX-3383_v8.patch, PHOENIX-3383_v9.patch, 
> PHOENIX-3383_wip.patch, PHOENIX-3383_wip2.patch, PHOENIX-3383_wip3.patch
>
>
> See PHOENIX-3382, but the comparison for RVC with descending row key columns 
> is the reverse of what it should be.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [phoenix] kadirozde merged pull request #1403: PHOENIX-6663 Use batching when joining data table rows with uncovered…

2022-03-18 Thread GitBox


kadirozde merged pull request #1403:
URL: https://github.com/apache/phoenix/pull/1403


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [phoenix] lhofhansl commented on a change in pull request #1403: PHOENIX-6663 Use batching when joining data table rows with uncovered…

2022-03-18 Thread GitBox


lhofhansl commented on a change in pull request #1403:
URL: https://github.com/apache/phoenix/pull/1403#discussion_r829436375



##
File path: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UncoveredIndexRegionScanner.java
##
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.ScanRanges;
+import org.apache.phoenix.execute.TupleProjector;
+import org.apache.phoenix.filter.SkipScanFilter;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.index.IndexMaintainer;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.schema.tuple.ResultTuple;
+import org.apache.phoenix.schema.types.PVarbinary;
+import org.apache.phoenix.thirdparty.com.google.common.collect.Maps;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.ScanUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.phoenix.query.QueryServices.INDEX_PAGE_SIZE_IN_ROWS;
+import static org.apache.phoenix.util.ScanUtil.getDummyResult;
+import static org.apache.phoenix.util.ScanUtil.isDummy;
+
+public abstract class UncoveredIndexRegionScanner extends BaseRegionScanner {
+private static final Logger LOGGER =
+LoggerFactory.getLogger(UncoveredIndexRegionScanner.class);
+/**
+ * The states of the processing a page of index rows
+ */
+protected enum State {
+INITIAL, SCANNING_INDEX, SCANNING_DATA, SCANNING_DATA_INTERRUPTED, 
READY
+}
+protected State state = State.INITIAL;
+protected final byte[][] viewConstants;
+protected final RegionCoprocessorEnvironment env;
+protected byte[][] regionEndKeys;
+protected final int pageSizeInRows;
+protected final Scan scan;
+protected final Scan dataTableScan;
+protected final RegionScanner innerScanner;
+protected final Region region;
+protected final IndexMaintainer indexMaintainer;
+protected final TupleProjector tupleProjector;
+protected final ImmutableBytesWritable ptr;
+protected String exceptionMessage;
+protected List> indexRows = null;
+protected Map dataRows = null;
+protected Iterator> indexRowIterator = null;
+protected Map indexToDataRowKeyMap = null;
+protected int indexRowCount = 0;
+protected final long pageSizeMs;
+protected byte[] lastIndexRowKey = null;
+
+public UncoveredIndexRegionScanner(final RegionScanner innerScanner,
+ final Region region,
+ final Scan scan,
+ final RegionCoprocessorEnvironment env,
+ final Scan dataTableScan,
+ final TupleProjector tupleProjector,
+ final IndexMaintainer indexMaintainer,
+ final byte[][] viewConstants,
+ final ImmutableBytesWritable ptr,
+ final long pageSizeMs) {
+super(innerScanner);
+final Configuration config = env.getConfiguration();
+
+byte[] pageSizeFromScan =
+

[jira] [Commented] (PHOENIX-6663) Use batching when joining data table rows with uncovered local index rows

2022-03-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508840#comment-17508840
 ] 

ASF GitHub Bot commented on PHOENIX-6663:
-

kadirozde merged pull request #1403:
URL: https://github.com/apache/phoenix/pull/1403


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Use batching when joining data table rows with uncovered local index rows
> -
>
> Key: PHOENIX-6663
> URL: https://issues.apache.org/jira/browse/PHOENIX-6663
> Project: Phoenix
>  Issue Type: Improvement
>Affects Versions: 4.16.1, 5.1.2
>Reporter: Kadir OZDEMIR
>Assignee: Kadir OZDEMIR
>Priority: Major
> Fix For: 4.17.0, 5.2.0, 5.1.3
>
>
> The current solution uses HBase get operations to join data table rows with 
> uncovered local index rows on the server side. Issuing a separate get 
> operation for every data table row can be expensive. Instead, we can buffer 
> lots of data row keys in memory and use a scan with a skip scan filter. This 
> will reduce the cost of the join and also improve performance.
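
A rough sketch of the batching idea described above, for illustration only: it buffers 
data row keys and issues one Scan restricted to those keys. HBase's MultiRowRangeFilter 
stands in here for Phoenix's SkipScanFilter, the class and method names are invented, 
and the actual change lives in UncoveredIndexRegionScanner (PR #1403).

{code:java}
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter;
import org.apache.hadoop.hbase.filter.MultiRowRangeFilter.RowRange;

// Hypothetical sketch, not the PHOENIX-6663 patch itself.
final class BatchedDataRowJoinSketch {

    private BatchedDataRowJoinSketch() {
    }

    /**
     * Fetches the data table rows for a whole batch of buffered row keys with a
     * single scan, instead of issuing one Get per data row.
     */
    static List<Result> fetchDataRows(Table dataTable, List<byte[]> bufferedRowKeys) throws IOException {
        List<RowRange> ranges = new ArrayList<>(bufferedRowKeys.size());
        for (byte[] rowKey : bufferedRowKeys) {
            // One point range [rowKey, rowKey] per buffered data row key.
            ranges.add(new RowRange(rowKey, true, rowKey, true));
        }
        Scan scan = new Scan().setFilter(new MultiRowRangeFilter(ranges));
        List<Result> results = new ArrayList<>();
        try (ResultScanner scanner = dataTable.getScanner(scan)) {
            for (Result result : scanner) {
                results.add(result);
            }
        }
        return results;
    }
}
{code}

In practice the batch size would be capped (for example by the index page size) so that 
the buffered row keys keep memory bounded.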



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6663) Use batching when joining data table rows with uncovered local index rows

2022-03-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508830#comment-17508830
 ] 

ASF GitHub Bot commented on PHOENIX-6663:
-

kadirozde commented on a change in pull request #1403:
URL: https://github.com/apache/phoenix/pull/1403#discussion_r829577123



##
File path: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UncoveredIndexRegionScanner.java
##
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.ScanRanges;
+import org.apache.phoenix.execute.TupleProjector;
+import org.apache.phoenix.filter.SkipScanFilter;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.index.IndexMaintainer;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.schema.tuple.ResultTuple;
+import org.apache.phoenix.schema.types.PVarbinary;
+import org.apache.phoenix.thirdparty.com.google.common.collect.Maps;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.ScanUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.phoenix.query.QueryServices.INDEX_PAGE_SIZE_IN_ROWS;
+import static org.apache.phoenix.util.ScanUtil.getDummyResult;
+import static org.apache.phoenix.util.ScanUtil.isDummy;
+
+public abstract class UncoveredIndexRegionScanner extends BaseRegionScanner {
+private static final Logger LOGGER =
+LoggerFactory.getLogger(UncoveredIndexRegionScanner.class);
+/**
+ * The states of the processing a page of index rows
+ */
+protected enum State {
+INITIAL, SCANNING_INDEX, SCANNING_DATA, SCANNING_DATA_INTERRUPTED, 
READY
+}
+protected State state = State.INITIAL;
+protected final byte[][] viewConstants;
+protected final RegionCoprocessorEnvironment env;
+protected byte[][] regionEndKeys;
+protected final int pageSizeInRows;
+protected final Scan scan;
+protected final Scan dataTableScan;
+protected final RegionScanner innerScanner;
+protected final Region region;
+protected final IndexMaintainer indexMaintainer;
+protected final TupleProjector tupleProjector;
+protected final ImmutableBytesWritable ptr;
+protected String exceptionMessage;
+protected List> indexRows = null;
+protected Map dataRows = null;
+protected Iterator> indexRowIterator = null;
+protected Map indexToDataRowKeyMap = null;
+protected int indexRowCount = 0;
+protected final long pageSizeMs;
+protected byte[] lastIndexRowKey = null;
+
+public UncoveredIndexRegionScanner(final RegionScanner innerScanner,
+ final Region region,
+ final Scan scan,
+ final RegionCoprocessorEnvironment env,
+ final Scan dataTableScan,
+ final TupleProjector tupleProjector,
+ final IndexMaintainer indexMaintainer,
+ final byte[][] viewConstants,
+ final ImmutableBytesWritable ptr,
+ 

[jira] [Commented] (PHOENIX-6663) Use batching when joining data table rows with uncovered local index rows

2022-03-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6663?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508834#comment-17508834
 ] 

ASF GitHub Bot commented on PHOENIX-6663:
-

lhofhansl commented on a change in pull request #1403:
URL: https://github.com/apache/phoenix/pull/1403#discussion_r829436375



##
File path: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UncoveredIndexRegionScanner.java
##
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.ScanRanges;
+import org.apache.phoenix.execute.TupleProjector;
+import org.apache.phoenix.filter.SkipScanFilter;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.index.IndexMaintainer;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.schema.tuple.ResultTuple;
+import org.apache.phoenix.schema.types.PVarbinary;
+import org.apache.phoenix.thirdparty.com.google.common.collect.Maps;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.ScanUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.phoenix.query.QueryServices.INDEX_PAGE_SIZE_IN_ROWS;
+import static org.apache.phoenix.util.ScanUtil.getDummyResult;
+import static org.apache.phoenix.util.ScanUtil.isDummy;
+
+public abstract class UncoveredIndexRegionScanner extends BaseRegionScanner {
+private static final Logger LOGGER =
+LoggerFactory.getLogger(UncoveredIndexRegionScanner.class);
+/**
+ * The states of the processing a page of index rows
+ */
+protected enum State {
+INITIAL, SCANNING_INDEX, SCANNING_DATA, SCANNING_DATA_INTERRUPTED, 
READY
+}
+protected State state = State.INITIAL;
+protected final byte[][] viewConstants;
+protected final RegionCoprocessorEnvironment env;
+protected byte[][] regionEndKeys;
+protected final int pageSizeInRows;
+protected final Scan scan;
+protected final Scan dataTableScan;
+protected final RegionScanner innerScanner;
+protected final Region region;
+protected final IndexMaintainer indexMaintainer;
+protected final TupleProjector tupleProjector;
+protected final ImmutableBytesWritable ptr;
+protected String exceptionMessage;
+protected List> indexRows = null;
+protected Map dataRows = null;
+protected Iterator> indexRowIterator = null;
+protected Map indexToDataRowKeyMap = null;
+protected int indexRowCount = 0;
+protected final long pageSizeMs;
+protected byte[] lastIndexRowKey = null;
+
+public UncoveredIndexRegionScanner(final RegionScanner innerScanner,
+ final Region region,
+ final Scan scan,
+ final RegionCoprocessorEnvironment env,
+ final Scan dataTableScan,
+ final TupleProjector tupleProjector,
+ final IndexMaintainer indexMaintainer,
+ final byte[][] viewConstants,
+ final ImmutableBytesWritable ptr,
+ 

[jira] [Commented] (PHOENIX-6632) Migrate connectors to Spark-3

2022-03-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508831#comment-17508831
 ] 

ASF GitHub Bot commented on PHOENIX-6632:
-

ashwinb1998 closed pull request #69:
URL: https://github.com/apache/phoenix-connectors/pull/69


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Migrate connectors to Spark-3
> -
>
> Key: PHOENIX-6632
> URL: https://issues.apache.org/jira/browse/PHOENIX-6632
> Project: Phoenix
>  Issue Type: Improvement
>  Components: spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Ashwin Balasubramani
>Assignee: Istvan Toth
>Priority: Major
>
> With Spark 3, the DataSourceV2 API has undergone major changes, introducing a 
> new TableProvider interface. These changes give the data source developer more 
> control and better integration with the Spark optimizer.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [phoenix-connectors] ashwinb1998 closed pull request #69: PHOENIX-6632 Migrate/Update connectors to spark-3

2022-03-18 Thread GitBox


ashwinb1998 closed pull request #69:
URL: https://github.com/apache/phoenix-connectors/pull/69


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [phoenix] kadirozde commented on a change in pull request #1403: PHOENIX-6663 Use batching when joining data table rows with uncovered…

2022-03-18 Thread GitBox


kadirozde commented on a change in pull request #1403:
URL: https://github.com/apache/phoenix/pull/1403#discussion_r829577123



##
File path: 
phoenix-core/src/main/java/org/apache/phoenix/coprocessor/UncoveredIndexRegionScanner.java
##
@@ -0,0 +1,325 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.phoenix.coprocessor;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.hbase.Cell;
+import org.apache.hadoop.hbase.CellUtil;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.client.Result;
+import org.apache.hadoop.hbase.client.Scan;
+import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
+import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
+import org.apache.hadoop.hbase.regionserver.Region;
+import org.apache.hadoop.hbase.regionserver.RegionScanner;
+import org.apache.hadoop.hbase.util.Bytes;
+import org.apache.phoenix.compile.ScanRanges;
+import org.apache.phoenix.execute.TupleProjector;
+import org.apache.phoenix.filter.SkipScanFilter;
+import org.apache.phoenix.hbase.index.util.ImmutableBytesPtr;
+import org.apache.phoenix.index.IndexMaintainer;
+import org.apache.phoenix.query.KeyRange;
+import org.apache.phoenix.query.QueryServicesOptions;
+import org.apache.phoenix.schema.tuple.ResultTuple;
+import org.apache.phoenix.schema.types.PVarbinary;
+import org.apache.phoenix.thirdparty.com.google.common.collect.Maps;
+import org.apache.phoenix.util.EnvironmentEdgeManager;
+import org.apache.phoenix.util.IndexUtil;
+import org.apache.phoenix.util.ScanUtil;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Iterator;
+import java.util.List;
+import java.util.Map;
+
+import static org.apache.phoenix.query.QueryServices.INDEX_PAGE_SIZE_IN_ROWS;
+import static org.apache.phoenix.util.ScanUtil.getDummyResult;
+import static org.apache.phoenix.util.ScanUtil.isDummy;
+
+public abstract class UncoveredIndexRegionScanner extends BaseRegionScanner {
+private static final Logger LOGGER =
+LoggerFactory.getLogger(UncoveredIndexRegionScanner.class);
+/**
+ * The states of the processing a page of index rows
+ */
+protected enum State {
+INITIAL, SCANNING_INDEX, SCANNING_DATA, SCANNING_DATA_INTERRUPTED, 
READY
+}
+protected State state = State.INITIAL;
+protected final byte[][] viewConstants;
+protected final RegionCoprocessorEnvironment env;
+protected byte[][] regionEndKeys;
+protected final int pageSizeInRows;
+protected final Scan scan;
+protected final Scan dataTableScan;
+protected final RegionScanner innerScanner;
+protected final Region region;
+protected final IndexMaintainer indexMaintainer;
+protected final TupleProjector tupleProjector;
+protected final ImmutableBytesWritable ptr;
+protected String exceptionMessage;
+protected List> indexRows = null;
+protected Map dataRows = null;
+protected Iterator> indexRowIterator = null;
+protected Map indexToDataRowKeyMap = null;
+protected int indexRowCount = 0;
+protected final long pageSizeMs;
+protected byte[] lastIndexRowKey = null;
+
+public UncoveredIndexRegionScanner(final RegionScanner innerScanner,
+ final Region region,
+ final Scan scan,
+ final RegionCoprocessorEnvironment env,
+ final Scan dataTableScan,
+ final TupleProjector tupleProjector,
+ final IndexMaintainer indexMaintainer,
+ final byte[][] viewConstants,
+ final ImmutableBytesWritable ptr,
+ final long pageSizeMs) {
+super(innerScanner);
+final Configuration config = env.getConfiguration();
+
+byte[] pageSizeFromScan =
+

[jira] [Commented] (PHOENIX-6670) Optimize PhoenixKeyValueUtil#maybeCopyCell

2022-03-18 Thread Anoop Sam John (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508794#comment-17508794
 ] 

Anoop Sam John commented on PHOENIX-6670:
-

PhoenixKeyValueUtil's intent was to make sure all Cell instances are KeyValue 
objects? It was this way initially. (At least Phoenix was using HBase's KVUtil 
in the past, and its intent was to use the KV type.)

> Optimize PhoenixKeyValueUtil#maybeCopyCell
> --
>
> Key: PHOENIX-6670
> URL: https://issues.apache.org/jira/browse/PHOENIX-6670
> Project: Phoenix
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 5.2.0
>Reporter: Istvan Toth
>Priority: Major
>
> PhoenixKeyValueUtil#maybeCopyCell copies every cell that is not a KeyValue to 
> a KeyValue.
> Its point is to copy off-heap cells to the heap, so that the values are kept 
> after the backing ByteBuffer is freed, and we avoid use-after-free errors.
> However, checking if a Cell is a KeyValue instance is a poor indication for 
> that, as there are a lot of Cell types that are not KeyValues, but are stored 
> on the heap, and do not need to be copied.
> Copying only ByteBufferExtendedCell instances instead would potentially be a 
> significant performance gain.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6632) Migrate connectors to Spark-3

2022-03-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508605#comment-17508605
 ] 

ASF GitHub Bot commented on PHOENIX-6632:
-

ashwinb1998 closed pull request #69:
URL: https://github.com/apache/phoenix-connectors/pull/69


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Migrate connectors to Spark-3
> -
>
> Key: PHOENIX-6632
> URL: https://issues.apache.org/jira/browse/PHOENIX-6632
> Project: Phoenix
>  Issue Type: Improvement
>  Components: spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Ashwin Balasubramani
>Assignee: Istvan Toth
>Priority: Major
>
> With Spark 3, the DataSourceV2 API has undergone major changes, introducing a 
> new TableProvider interface. These changes give the data source developer more 
> control and better integration with the Spark optimizer.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Commented] (PHOENIX-6632) Migrate connectors to Spark-3

2022-03-18 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/PHOENIX-6632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17508603#comment-17508603
 ] 

ASF GitHub Bot commented on PHOENIX-6632:
-

ashwinb1998 commented on pull request #69:
URL: 
https://github.com/apache/phoenix-connectors/pull/69#issuecomment-1072085140


   Sure @stoty, will do


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscr...@phoenix.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> Migrate connectors to Spark-3
> -
>
> Key: PHOENIX-6632
> URL: https://issues.apache.org/jira/browse/PHOENIX-6632
> Project: Phoenix
>  Issue Type: Improvement
>  Components: spark-connector
>Affects Versions: connectors-6.0.0
>Reporter: Ashwin Balasubramani
>Assignee: Istvan Toth
>Priority: Major
>
> With Spark 3, the DataSourceV2 API has undergone major changes, introducing a 
> new TableProvider interface. These changes give the data source developer more 
> control and better integration with the Spark optimizer.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

