Hi Nicolas,
I transformed hu_date (a timestamp column) into bigint using the to_number
function in Phoenix.
I created a new external table in hive as follows:
hive> CREATE EXTERNAL TABLE phx_usage_day(
  hu_date bigint,
  hu_ho_id int,
  hu_stream_id int,
  hu_usage double)
Hello,
When we fire a delete, how do we set the timestamp for the mutation?
Hi James - shall I still open a JIRA for that?
Thanks!
Marek
2015-04-06 22:48 GMT+02:00 Marek Wiewiorka marek.wiewio...@gmail.com:
psql from a csv file:
./psql.py dwh:2181:/hbase-unsecure -t SE_DWH.HOMES_USAGE_HOUR
/mnt/spark/export/usage_convert.txt/usage_merged.csv
Here is a sample:
Hi All - I keep on getting operation timeout errors while running queries:
Error: Operation timed out (state=TIM01,code=6000)
java.sql.SQLTimeoutException: Operation timed out
at
org.apache.phoenix.exception.SQLExceptionCode$14.newException(SQLExceptionCode.java:313)
at
Ok - I think I found it:
phoenix.query.timeoutMs
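For anyone hitting the same timeout, a sketch of setting that property in the client-side hbase-site.xml. The 600000 value is only an example, not a recommended setting.

```
<!-- client-side hbase-site.xml: raise the Phoenix query timeout -->
<property>
  <name>phoenix.query.timeoutMs</name>
  <!-- example: 10 minutes; pick a value that suits your workload -->
  <value>600000</value>
</property>
```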
M.
2015-04-07 11:46 GMT+02:00 Marek Wiewiorka marek.wiewio...@gmail.com:
Hi All - I keep on getting operation timeout errors while running queries:
Error: Operation timed out (state=TIM01,code=6000)
java.sql.SQLTimeoutException: Operation
Hi Ralph, were you using the Phoenix bundled with HDP-2.2, or was that a
separate installation? Could you please copy/paste some log lines from around
the time of a regionserver's crash (look for exceptions etc. around that time
in the regionserver logs).
Thanks
Devaraj
On Apr 6, 2015, at 3:00 PM,
Also, beside each region server log file (.log) there's also the output
file (.out). Check the output files as well, as some serious crash
scenarios bypass the logs and go directly to the out files.
-n
On Tuesday, April 7, 2015, Devaraj Das d...@hortonworks.com wrote:
Hi Ralph, were you
No, that's not possible. Phoenix needs to know the type information
and that's what the table/view definition is telling it.
Thanks,
James
On Tue, Apr 7, 2015 at 4:00 AM, Bradman, Dale
dale.brad...@capgemini.com wrote:
Hello,
Is it possible to issue a SELECT statement on a pre-existing HBase
No, you need to at least define a view for that table. Phoenix wouldn't know
how to read the columns/rowkeys of your table.
On Tue, Apr 7, 2015 at 4:00 AM, Bradman, Dale dale.brad...@capgemini.com
wrote:
Hello,
Is it possible to issue a SELECT statement on a pre-existing HBase table
without
+1 to Thomas' idea. Please file a new JIRA - perhaps a subtask of
PHOENIX-400 for your idea.
Thanks,
James
On Tue, Apr 7, 2015 at 11:28 AM, Thomas D'Silva tdsi...@salesforce.com wrote:
Ashish,
If you want to step through server side code you can enable remote
debugging in hbase-env.sh. I
Ashish,
If you want to step through server side code you can enable remote
debugging in hbase-env.sh. I have used this with standalone mode.
# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS=$HBASE_MASTER_OPTS -Xdebug
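For reference, a fully spelled-out version of that kind of hbase-env.sh entry might look like the sketch below. The port number and suspend setting here are assumptions, not Phoenix/HBase requirements; adjust them for your setup.

```
# hbase-env.sh: enable remote JDWP debugging for the RegionServer process.
# Port 8071 is an example; suspend=n so startup does not block waiting for a debugger.
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug \
  -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
```

With this in place you can attach a remote debugger from your IDE to the chosen port.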
Thank you for the response
I am using Phoenix 4.3 as a separate installation.
Unfortunately I have no way to copy the actual log files so I will need to
transcribe as much as I can.
There are a lot of things going on – I’ll try to provide the highlights
Right now:
Using ambari – everything on
Thanks a lot Thomas, will try it out.
Just saw this line in PhoenixConnection class
this.scn = JDBCUtil.getCurrentSCN(url, this.info);
What does scn stand for?
Regards,
Abhilash L L
Capillary Technologies
M:919886208262
abhil...@capillarytech.com | www.capillarytech.com
On Tue, Apr 7, 2015
I think it stands for System Change Number
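For context, the value is read from the CurrentSCN JDBC connection property, which pins a Phoenix connection to a point in time (an HBase timestamp). A minimal sketch of how it is set follows; the ScnExample class, the helper method, and the timestamp value are made up for illustration, and the commented-out JDBC URL is a placeholder.

```java
import java.util.Properties;

// Illustrative helper (not part of Phoenix): builds connection properties
// that pin a Phoenix connection to a point in time via CurrentSCN.
public class ScnExample {
    static Properties buildProps(long pointInTimeMillis) {
        Properties props = new Properties();
        // CurrentSCN is the Phoenix JDBC property behind getCurrentSCN(url, info)
        props.setProperty("CurrentSCN", Long.toString(pointInTimeMillis));
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildProps(1428400000000L); // example epoch millis
        // DriverManager.getConnection("jdbc:phoenix:host:2181:/hbase", props)
        // would then see table state as of that timestamp.
        System.out.println("CurrentSCN=" + props.getProperty("CurrentSCN"));
    }
}
```

Queries on such a connection read data as of the given timestamp rather than the latest state.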
On Tue, Apr 7, 2015 at 11:06 AM, Abhilash L L
abhil...@capillarytech.com wrote:
Thanks a lot Thomas, will try it out.
Just saw this line in PhoenixConnection class
this.scn = JDBCUtil.getCurrentSCN(url, this.info);
What does scn stand for?
I ran “hbase hbck” and learned all the regions are inconsistent and have holes
to repair. I attempted to run “hbase hbck -repairHoles” and got stuck in a
loop with a message that a region is still in transition.
Is there a way to fix this?
Would it be more appropriate for me to move this
What is the major driver to not use the HDP bundled Phoenix?
It seems to me that the Phoenix version you have is not compatible with the
underlying HBase version, leading to all these issues. In particular, the
method getCatalogTracker in HDP-2.2 works only with 1 argument, but in Phoenix
Based on the Phoenix compatibility chart on the download page, I did not expect
there to be issues with Phoenix 4.3 and HBase 0.98.4.
http://phoenix.apache.org/download.html
From: Devaraj Das [mailto:d...@hortonworks.com]
Sent: Tuesday, April 07, 2015 12:58 PM
To: user@phoenix.apache.org
Hi,
Do the Phoenix command-line utilities and the Bulk Loader program check for
version compatibility between Phoenix and HBase?
Thanks
Naga