James,
Do you see any issues with using the delete statement below as a workaround
for dropping views until the JIRAs are fixed and released?
delete from SYSTEM.CATALOG where table_name = 'MY_VIEW'
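A slightly more targeted variant is sketched below; the TENANT_ID and
TABLE_SCHEM predicates are my addition, assuming the standard
SYSTEM.CATALOG key columns and a view that is not tenant-specific:
delete from SYSTEM.CATALOG
where tenant_id is null          -- assumes no multi-tenancy in play
and table_schem = 'MY_SCHEMA'    -- assumed schema; use IS NULL if none
and table_name = 'MY_VIEW'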
Thanks,
Arun
Hi,
I am running into the below error when I execute a query which has a join,
group by and order by.
But when I run the same query with the hint /*+ USE_SORT_MERGE_JOIN */, it
runs fine.
Can anybody please shed some light on this?
Error: Encountered exception in sub plan [0] execution. (state=,code=0)
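For reference, the hint goes right after the SELECT keyword. A minimal
sketch of a query of this shape, with made-up table and column names
(not my actual query):
select /*+ USE_SORT_MERGE_JOIN */ o.item_id, sum(o.amount)
from orders o
join items i on (o.item_id = i.item_id)  -- join
group by o.item_id                       -- group by
order by sum(o.amount);                  -- order by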
Thanks for the help Gabriel, I really appreciate it. That did the trick!
Regards
Mike
On Jun 22, 2015, at 10:38 AM, Gabriel Reid
gabriel.r...@gmail.com wrote:
Hi Michael,
Thanks for the very detailed explanation of your scenario.
I believe the issue is in your
Arun,
Manually running DDL against the SYSTEM.CATALOG table can be
problematic for a few reasons:
- if a write failure occurs in the middle of running that statement,
your SYSTEM.CATALOG table can be left in an inconsistent state. We
prevent this internally by using a mutateRowsWithLocks call
You can upsert rows by specifying the column name and data type along
with the table name in the UPSERT statement. For the example in
http://phoenix.apache.org/dynamic_columns.html
UPSERT INTO TABLE (eventId, eventTime, lastGCTime INTEGER) VALUES(1,
CURRENT_TIME(), 1234);
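To read the dynamic column back, it has to be declared again in the
query. A sketch, with EVENT_LOG standing in for the placeholder table
name used above:
SELECT eventId, eventTime, lastGCTime
FROM EVENT_LOG (lastGCTime INTEGER)  -- dynamic column re-declared here
WHERE eventId = 1;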
On Sun, Jun 21, 2015 at 6:51 PM,
That's interesting; I had a similar idea.
But first we would like to model the array with two tables, which sounds
simpler. I'll contact you if I have time to work on the issue.
On Mon, Jun 22, 2015 at 12:10 AM James Taylor jamestay...@apache.org
wrote:
Hey Leon,
I filed PHOENIX-1544 a
I had a very large Hive table that I needed in HBase.
After asking around, I came to the conclusion that my best bet was to:
1 - export the hive table to a CSV 'file'/folder on the HDFS
2 - Use the org.apache.phoenix.mapreduce.CsvBulkLoadTool to import the data.
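A sketch of step 1 in HiveQL (the directory path and table name are
placeholders, and it assumes Hive 0.11+ for the ROW FORMAT clause):
INSERT OVERWRITE DIRECTORY '/tmp/my_table_csv'
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
SELECT * FROM my_hive_table;
Step 2 is then roughly: hadoop jar phoenix-<version>-client.jar
org.apache.phoenix.mapreduce.CsvBulkLoadTool --table MY_TABLE --input
/tmp/my_table_csv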
I found that if I tried to pass
For #2: hbase org.apache.hadoop.hbase.mapreduce.RowCounter TABLE_NAME
On Mon, Jun 22, 2015 at 11:34 AM, Riesland, Zack zack.riesl...@sensus.com
wrote:
I had a very large Hive table that I needed in HBase.
After asking around, I came to the conclusion that my best bet was to:
1 –
If you're interested in learning more about Phoenix, tune in this
Thursday @ 10am where I'll be talking about Phoenix in a free Webcast
hosted by O'Reilly: http://www.oreilly.com/pub/e/3443
Thanks,
James
For #2: You can use the RowCounter MapReduce job of HBase to count the rows
of a large table. You don't need to write any code.
Here is the sample command to invoke:
hbase org.apache.hadoop.hbase.mapreduce.RowCounter TABLE_NAME
~Anil
On Mon, Jun 22, 2015 at 12:08 PM, Ciureanu Constantin
Hi Bahubali,
Could you please share your query?
Thanks,
Maryann
On Mon, Jun 22, 2015 at 12:51 PM, Bahubali Jain bahub...@gmail.com wrote:
Hi,
I am running into the below error when I execute a query which has a
join, group by and order by.
But when I run the same query with the hint /*+ USE_SORT_MERGE_JOIN */, it
runs fine.
Hi there,
I was just looking for some tips on implementing a UDF to be used in a
GROUP BY statement.
For instance, let's say I have the table:
((A, B), C, D), with (A, B) being the composite key.
My UDF targets the field C and I want to optimize the query:
SELECT A,MYFUNCTION(C),SUM(D) FROM
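For context, the full shape I have in mind is roughly as below; the table
name T, the argument and return types, the class name, and the jar path
are all placeholders, and the GROUP BY expression is my assumption of how
the UDF would be registered and used:
CREATE FUNCTION MYFUNCTION(INTEGER) RETURNS INTEGER
AS 'com.example.MyFunction' USING JAR 'hdfs:///udf/myudf.jar';
SELECT A, MYFUNCTION(C), SUM(D)
FROM T
GROUP BY A, MYFUNCTION(C);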