This would be really useful. The use case I have that is similar is mapping 
Phoenix data to Hive (specifically the subset of Hive that Impala 
understands). I imagine it could work by reading the SYSTEM.CATALOG table, 
or the connection metadata, and generating Hive CREATE TABLE statements. 
There would also need to be UDFs to split apart row keys and transform the 
data, e.g. flipping the first byte of numeric types. You could use the same 
logic in the UDFs to read the data from a standalone HBase client; a rough 
sketch of that decoding is below.
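
For what it's worth, here is a rough, untested sketch of what that decoding 
could look like from a standalone HBase client, for a hypothetical table 
declared with PRIMARY KEY (host VARCHAR, port INTEGER). It assumes no salt 
bucket, no tenant column, and ASC sort order; salting, multi-tenancy, and 
DESC columns all change the byte layout.

    import org.apache.hadoop.hbase.util.Bytes;

    public class RowKeyDecoder {
        // Hypothetical schema:
        //   CREATE TABLE metrics (host VARCHAR NOT NULL, port INTEGER NOT NULL
        //       CONSTRAINT pk PRIMARY KEY (host, port))
        public static Object[] decode(byte[] rowKey) {
            // A variable-length VARCHAR that is not the last PK column is
            // terminated by a 0x00 separator byte.
            int sep = 0;
            while (sep < rowKey.length && rowKey[sep] != 0) {
                sep++;
            }
            String host = Bytes.toString(rowKey, 0, sep);

            // Fixed-width numeric types are stored big-endian with the sign
            // bit of the first byte flipped so that byte order matches
            // numeric order. XOR with Integer.MIN_VALUE undoes the flip.
            int port = Bytes.toInt(rowKey, sep + 1) ^ Integer.MIN_VALUE;

            return new Object[] { host, port };
        }
    }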

> On Sep 16, 2016, at 11:15 AM, Krishna <research...@gmail.com> wrote:
> 
> Hi,
> 
> Does Phoenix have an API for splitting a rowkey (made up of multiple 
> columns, in ImmutableBytesRow format) back into its primary key column 
> values? I am scanning directly from HBase and would like to convert the 
> rowkey into column values. We used the standard Phoenix JDBC API when 
> writing to the table. 
> 
> Thanks
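
On the question above about a Phoenix API for this: I don't know of a single 
public call, but Phoenix's own metadata classes can do the splitting. The 
sketch below is from memory of the 4.x internals (PhoenixRuntime.getTable, 
PTable.getRowKeySchema, RowKeySchema.iterator/next, PDataType.toObject), so 
treat the exact names and signatures as assumptions and check them against 
your version:

    import java.sql.Connection;
    import java.util.List;
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
    import org.apache.phoenix.schema.PColumn;
    import org.apache.phoenix.schema.PTable;
    import org.apache.phoenix.schema.RowKeySchema;
    import org.apache.phoenix.util.PhoenixRuntime;

    public class PkDecoder {
        // Decode a raw rowkey back into PK column values using the table's
        // own metadata. Salted and multi-tenant tables carry extra leading
        // key parts, and DESC columns need their sort order applied.
        public static Object[] decodePk(Connection conn, String tableName,
                byte[] rowKey) throws Exception {
            PTable table = PhoenixRuntime.getTable(conn, tableName);
            RowKeySchema schema = table.getRowKeySchema();
            List<PColumn> pkColumns = table.getPKColumns();

            Object[] values = new Object[pkColumns.size()];
            ImmutableBytesWritable ptr = new ImmutableBytesWritable();
            int maxOffset = schema.iterator(rowKey, ptr);
            for (int i = 0; i < pkColumns.size(); i++) {
                if (Boolean.TRUE.equals(schema.next(ptr, i, maxOffset))) {
                    values[i] = pkColumns.get(i).getDataType()
                            .toObject(ptr.get(), ptr.getOffset(), ptr.getLength());
                }
            }
            return values;
        }
    }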
