Hi Fabio,
I have the very same requirement in my environment. My way is to collect
all the data you need to delete, save it into one HBase table, and then
issue the delete statement. A sample of the approach follows; hope this
can help.
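A minimal sketch of that pattern (not the original attachment): it assumes
the pyspark shell with the phoenix-spark connector on the classpath, the
zkUrl namenode:2181 that appears elsewhere in this thread, and placeholder
table/column names. The staging table must already exist in Phoenix.

# 1. Read the source table and keep only the primary keys of the rows
#    to delete (the filter below is a placeholder condition).
keys = sqlContext.read \
    .format("org.apache.phoenix.spark") \
    .option("table", "TABLE1") \
    .option("zkUrl", "namenode:2181") \
    .load() \
    .filter("COL1 = 'expired'") \
    .select("ID")

# 2. Save the keys into the staging table; phoenix-spark treats
#    mode("overwrite") as UPSERT, so nothing is truncated.
keys.write \
    .format("org.apache.phoenix.spark") \
    .mode("overwrite") \
    .option("table", "TABLE1_DELETE_KEYS") \
    .option("zkUrl", "namenode:2181") \
    .save()

# 3. Issue one DELETE through the Phoenix JDBC driver, reached here via
#    Spark's JVM gateway (the Phoenix client jar must be on the driver
#    classpath for this to resolve).
jvm = sc._gateway.jvm
conn = jvm.java.sql.DriverManager.getConnection(
    "jdbc:phoenix:namenode:2181:/hbase-unsecure")
stmt = conn.createStatement()
stmt.executeUpdate(
    "DELETE FROM TABLE1 WHERE ID IN (SELECT ID FROM TABLE1_DELETE_KEYS)")
conn.commit()
stmt.close()
conn.close()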
Hi, I'm trying some tests with Spark 2.0 together with Phoenix 4.8. My
environment is HDP 2.5; I installed Phoenix 4.8 myself.
I got everything working perfectly under Spark 1.6.2:
>>> df = sqlContext.read \
... .format("org.apache.phoenix.spark") \
... .option("table", "TABLE1") \
...
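For reference, the completed read looks something like this (a sketch; the
zkUrl value is inferred from the JDBC URL that appears later in this thread):

df = sqlContext.read \
    .format("org.apache.phoenix.spark") \
    .option("table", "TABLE1") \
    .option("zkUrl", "namenode:2181") \
    .load()
df.show()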
[root@namenode phoenix]# findjar . org.apache.phoenix.jdbc.PhoenixDriver
Starting search for JAR files from directory .
Looking for the class org.apache.phoenix.jdbc.PhoenixDriver
This might take a while...
./phoenix-4.8.0-HBase-1.1-client.jar
./phoenix-4.8.0-HBase-1.1-server.jar
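Since the driver class lives in the client jar, one common way to expose it
to Spark is through spark-defaults.conf; the paths below are assumptions,
not from the original setup:

spark.driver.extraClassPath    /path/to/phoenix-4.8.0-HBase-1.1-client.jar
spark.executor.extraClassPath  /path/to/phoenix-4.8.0-HBase-1.1-client.jar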
> On Tue, Sep 13, 2016 at 3:07 PM, dalin.qin <dalin...@gmail.com> wrote:
>
>> Hi Cheyenne ,
>>
>> That's a very interesting question. If secondary indexes are created well
>> on the Phoenix table, HBase will use coprocessors to keep them in sync
>> with the data table.
> On Tue, Sep 13, 2016 at 8:41 AM, Josh Mahonin <jmaho...@gmail.com> wrote:
>
>> Hi Dalin,
>>
>> Thanks for the information, I'm glad to hear that the spark integration
>> is working well for your use case.
>>
>> Josh
> ... if you have any experiences or insight there from operating on a
> large dataset.
>
> Thanks!
>
> Josh
>
> On Mon, Sep 12, 2016 at 10:29 AM, dalin.qin <dalin...@gmail.com> wrote:
Hi,
I've used a Phoenix table to store billions of rows; rows are incrementally
inserted into Phoenix by Spark every day, and the table serves instant
queries from a web page by primary key. So far so good.
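A minimal sketch of that daily load, assuming the pyspark shell and the
phoenix-spark connector; the table name, zkUrl, and source path are
placeholders, not the actual job:

# Daily incremental load (sketch). phoenix-spark maps mode("overwrite")
# to UPSERT, so each run adds or updates rows without truncating the
# table. EVENTS, the zkUrl, and the parquet path are all assumptions.
daily = sqlContext.read.parquet("/data/daily/2016-09-12")
daily.write \
    .format("org.apache.phoenix.spark") \
    .mode("overwrite") \
    .option("table", "EVENTS") \
    .option("zkUrl", "namenode:2181") \
    .save()

The instant web-page lookups themselves would go through the Phoenix JDBC
driver, where a WHERE clause on the full primary key becomes a point lookup
rather than a scan.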
Thanks
Dalin
> On Sep 8, 2016, at 5:48 PM, dalin.qin <dalin...@gmail.com> wrote:
try this:
0: jdbc:phoenix:namenode:2181:/hbase-unsecure> CREATE TABLE TABLE1 (ID
BIGINT NOT NULL PRIMARY KEY, COL1 VARCHAR);
No rows affected (1.287 seconds)
0: jdbc:phoenix:namenode:2181:/hbase-unsecure> UPSERT INTO TABLE1 (ID,
COL1) VALUES (1, 'test_row_1');
1 row affected (0.105 seconds)
0: