Hi Mich,
I would recommend using the Phoenix APIs/tools to write data to a Phoenix
table, so that secondary indexes are maintained seamlessly. Your approach of
**rebuilding** the index after every bulk load will run into scalability
problems as your primary table keeps growing.
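To illustrate the write path Anil is recommending: a sketch, assuming the table and column names from the DDL later in this thread and the ZooKeeper quorum (rhes564:2181) from Mich's JDBC prompt. An UPSERT executed through the Phoenix client updates the data table and its secondary indexes together, so no rebuild is needed:

```shell
# Write the UPSERT as a SQL file (row values here are made-up sample data).
cat > upsert_prices.sql <<'EOF'
UPSERT INTO "marketDataHbase"
  (PK, "price_info"."ticker", "price_info"."timecreated", "price_info"."price")
VALUES ('IBM-20161022090000', 'IBM', '2016-10-22 09:00:00', '154.45');
EOF

# Then run it through the Phoenix client (psql.py ships in Phoenix's bin/
# directory), e.g.:
#   $PHOENIX_HOME/bin/psql.py rhes564:2181 upsert_prices.sql
```

Because the statement goes through the Phoenix client rather than straight into HBase, Phoenix writes the corresponding index-table rows in the same call.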
~Anil
On Sat, Oct 22,
Hi Ankit,
I created a dummy table in HBase as below:
create 'dummy', 'price_info'
Then in Phoenix I created a table on top of the HBase table:
create table "dummy" (PK VARCHAR PRIMARY KEY, "price_info"."ticker"
VARCHAR,"price_info"."timecreated" VARCHAR, "price_info"."price" VARCHAR);
And then used the
bq. Will bulk load from Phoenix update the underlying HBase table?
Yes. Instead of using ImportTsv, use the Phoenix CSV bulk load tool.
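For reference, the Phoenix CSV bulk load Ankit mentions is usually invoked along these lines. This is a sketch: the client jar name and the input path are placeholders, while the table name and ZooKeeper quorum are taken from this thread. Unlike ImportTsv, this tool generates HFiles for the index table as well as the data table, so the index stays consistent with the bulk-loaded data:

```shell
hadoop jar $PHOENIX_HOME/phoenix-client.jar \
  org.apache.phoenix.mapreduce.CsvBulkLoadTool \
  --table "marketDataHbase" \
  --input /data/prices.csv \
  --zookeeper rhes564:2181
```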
bq. Do I need to replace the Phoenix view on HBase with CREATE TABLE?
You can still keep the VIEW.
Regards,
Ankit Singhal
On Sun, Oct 23, 2016 at 6:37 PM, Mich Talebzadeh
A workaround I deployed was to rebuild the index immediately after each bulk
load of data into the HBase table:
ALTER INDEX MARKETDATAHBASE_IDX1 ON "marketDataHbase" REBUILD;
Dr Mich Talebzadeh
LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Sorry Ted,
This is the syntax for view
create view "marketDataHbase" (PK VARCHAR PRIMARY KEY,
"price_info"."ticker" VARCHAR, "price_info"."timecreated" VARCHAR,
"price_info"."price" VARCHAR);
Thanks James for the clarification.
My understanding is that when one creates an index on a Phoenix view on
Hi Mich,
Phoenix indexes are only updated if you use Phoenix APIs to input the data.
Thanks,
James
On Saturday, October 22, 2016, Ted Yu wrote:
> The first statement creates index, not view.
>
> Can you check ?
>
> Cheers
>
> > On Oct 22, 2016, at 1:51 AM, Mich Talebzadeh
The first statement creates index, not view.
Can you check ?
Cheers
> On Oct 22, 2016, at 1:51 AM, Mich Talebzadeh
> wrote:
>
> Hi,
>
> I have a Hbase table that is populated via
> org.apache.hadoop.hbase.mapreduce.ImportTsv
> through bulk load every 15 minutes.
Hi,
I have a Hbase table that is populated via
org.apache.hadoop.hbase.mapreduce.ImportTsv
through bulk load every 15 minutes. This works fine.
In Phoenix I created a view on this table
jdbc:phoenix:rhes564:2181> create index marketDataHbase_idx on
"marketDataHbase" ("price_info"."ticker",