No, you’ll need to create a Phoenix table and use Phoenix APIs to write
your data.
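(For anyone following along, a rough sketch of that approach; the table and
column names below are made up:)

  CREATE TABLE sensor_data (
      id VARCHAR NOT NULL,
      created DATE NOT NULL,
      val DOUBLE,
      CONSTRAINT pk PRIMARY KEY (id, created));

  UPSERT INTO sensor_data (id, created, val)
      VALUES ('BM50558', TO_DATE('2018-02-01', 'yyyy-MM-dd'), 1.5);

Writes go through the Phoenix JDBC driver (or the Phoenix bulk-load tools)
so that Phoenix controls how values are encoded into the row key.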
On Tue, Feb 13, 2018 at 9:52 PM Vaghawan Ojha wrote:
Thank you James, my keys are something like this:
2018-02-01-BM50558-1517454912.0-5-1517548497.261604. The first few chars
are the date, and these dates are stored in a separate column, BDATE, as
well. Do you think I could implement ROW_TIMESTAMP on the BDATE column?
Thanks
Vaghawan
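(For context, a sketch of what that could look like; as far as I
understand it, ROW_TIMESTAMP can only be declared when the table is
created, not added to an existing table, and the non-BDATE names here are
invented:)

  CREATE TABLE events (
      bdate DATE NOT NULL,
      id VARCHAR NOT NULL,
      payload VARCHAR,
      CONSTRAINT pk PRIMARY KEY (bdate ROW_TIMESTAMP, id));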
Yes, the datetime column is part of my primary key, but the primary key
also consists of other strings.
Thanks
Vaghawan
On Tue, Feb 13, 2018 at 11:05 PM, James Taylor wrote:
The standard way of doing this is to add a TTL for your table [1]. You can
do this through the ALTER TABLE call [2]. Is the date/time column part of
your primary key? If so, you can improve performance by declaring this
column as a ROW_TIMESTAMP [3].
A view is not going to help you - it's not
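(A sketch of the TTL suggestion; the table name is illustrative, and the
TTL is in seconds, so 5184000 is roughly 60 days:)

  -- Expire cells older than ~60 days at compaction time.
  ALTER TABLE my_table SET TTL = 5184000;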
Hi Jacobo,
Please file a JIRA for asynchronous drop column functionality. There are a
few ways that could be implemented. We could execute the call that issues
the delete markers on the server side in a separate thread (similar to what
we do with UPDATE STATISTICS), or we could support a map-reduce
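(For comparison, Phoenix already has an asynchronous, map-reduce-backed
pattern for index builds, which a map-reduce drop column could presumably
mirror. The second statement below is hypothetical and does not exist
today:)

  -- Existing async pattern: the index is built by the IndexTool MR job.
  CREATE INDEX my_idx ON my_table (my_col) ASYNC;

  -- Hypothetical equivalent for dropping a column asynchronously.
  ALTER TABLE my_table DROP COLUMN my_col ASYNC;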
Hi Flavio,
I was trying to find a different solution here. This doesn't seem like a
long-term solution, as I expect the table to keep growing, and these new
timeouts may not be enough in the future. Also, I don't feel comfortable
increasing the timeouts that much.
- Is there any way of removing a
I also had similar troubles, and I fixed them by changing the following
params (on both the server and client side, and restarting HBase):
hbase.rpc.timeout (to 60)
phoenix.query.timeoutMs (to 60)
hbase.client.scanner.timeout.period (from 1 m to 10 m)
hbase.regionserver.lease.period (from 1 m to 10 m)
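(These are set in hbase-site.xml on both the client and the servers; the
values below are illustrative 10-minute settings in milliseconds, not
necessarily the exact ones used above:)

  hbase.rpc.timeout = 600000
  phoenix.query.timeoutMs = 600000
  hbase.client.scanner.timeout.period = 600000
  hbase.regionserver.lease.period = 600000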
Hi all,
I have a table in Phoenix with 100M rows and ~3000 columns. I am trying to
remove some columns, but after a few seconds the statement fails with a
timeout exception:
0: jdbc:phoenix:> ALTER TABLE "ns"."table" DROP COLUMN IF EXISTS "myColumn";
Error: org.apache.phoenix.exception.PhoenixIOException:
Hi,
I'm using Phoenix 4.12 with HBase 1.2.0. I have a table with a few million
rows, but I don't need much of the old data; let's say the frequent data I
need is from 2 months back.
The query becomes slow when I read the table filtering by timestamp, so
the query would be like: where date > some date and
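(Presumably something like the following, with invented table and column
names; if the date is not the leading part of the primary key or a
ROW_TIMESTAMP column, a filter like this tends to turn into a full table
scan:)

  SELECT * FROM my_table
  WHERE bdate > TO_DATE('2017-12-13', 'yyyy-MM-dd');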