Re: ERROR 201 (22000) illegal data error, expected length at least 4 but had ...

2016-08-12 Thread Dong-iL, Kim
Oh. Thanks a lot. Do you have a snippet for generating a composite key? I’m sorry for my laziness.

> On Aug 12, 2016, at 3:24 PM, vikashtalanki wrote:
>
> Hi Dong,
>
> If you still want to insert through hbase, you can use the below snippets
> for encoding values as

Re: monitoring status of CREATE INDEX operation

2016-08-12 Thread Nathan Davis
Thanks for the detailed info. I took the advice of using the ASYNC method. The CREATE statement executes fine and I end up with an index table showing in state BUILDING. When I kick off the MR job with `hbase org.apache.phoenix.mapreduce.index.IndexTool --schema trans --data-table event
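
For reference, one way to watch an ASYNC index move from BUILDING to ACTIVE is to poll SYSTEM.CATALOG over JDBC. A minimal sketch, assuming a local ZooKeeper quorum in the URL and the usual single-character state codes ('b' = BUILDING, 'a' = ACTIVE); verify both against your Phoenix version:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IndexStateCheck {
    public static void main(String[] args) throws Exception {
        // Adjust the JDBC URL for your ZooKeeper quorum.
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181");
             Statement stmt = conn.createStatement();
             // SYSTEM.CATALOG stores a single-character INDEX_STATE on index rows;
             // the exact codes may vary by version.
             ResultSet rs = stmt.executeQuery(
                 "SELECT TABLE_SCHEM, TABLE_NAME, INDEX_STATE " +
                 "FROM SYSTEM.CATALOG WHERE INDEX_STATE IS NOT NULL")) {
            while (rs.next()) {
                System.out.printf("%s.%s -> %s%n",
                    rs.getString(1), rs.getString(2), rs.getString(3));
            }
        }
    }
}
```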

Re: Phoenix Ifnull

2016-08-12 Thread Michael McAllister
Seeing as we’re talking COALESCE and NULLs, depending on the version Ankit is running, this could also be the issue in PHOENIX-2994: https://issues.apache.org/jira/browse/PHOENIX-2994

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems

Re: Tables can have schema name but indexes cannot

2016-08-12 Thread James Taylor
Hi Michael,

SQL dictates that an index must be in the same schema as the table it's indexing.

Thanks,
James

On Fri, Aug 12, 2016 at 8:50 AM, Michael McAllister <mmcallis...@homeaway.com> wrote:
> Hi
>
> Is there any reason we can specify the schema name for a table, but not an
> index? I

Re: monitoring status of CREATE INDEX operation

2016-08-12 Thread James Taylor
In your IndexTool invocation, try using all caps for your table and index name. Phoenix normalizes names by upper-casing them (unless they're in double quotes). One other, unrelated question: did you declare your event table with IMMUTABLE_ROWS=true (assuming it's a write-once table)? If not, you
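
This normalization can be reproduced directly with Phoenix's SchemaUtil, so the IndexTool arguments from the earlier message would become `--schema TRANS --data-table EVENT`. A minimal sketch, assuming the 4.x behavior of normalizeIdentifier (upper-case unless double-quoted):

```java
import org.apache.phoenix.util.SchemaUtil;

public class NormalizeDemo {
    public static void main(String[] args) {
        // Unquoted identifiers are upper-cased by Phoenix.
        System.out.println(SchemaUtil.normalizeIdentifier("event"));      // EVENT
        // Double-quoted identifiers keep their case (quotes are stripped).
        System.out.println(SchemaUtil.normalizeIdentifier("\"event\""));  // event
    }
}
```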

Re: monitoring status of CREATE INDEX operation

2016-08-12 Thread Nathan Davis
Thanks James, all CAPS did the trick! Yes, the event table is already IMMUTABLE_ROWS=true.

Thanks again,
Nathan

On Fri, Aug 12, 2016 at 10:59 AM, James Taylor wrote:
> In your IndexTool invocation, try using all caps for your table and index
> name. Phoenix normalizes

Re: Tables can have schema name but indexes cannot

2016-08-12 Thread Michael McAllister
James, thanks. Looks like I was misled by DBVisualizer. The underlying HBase index tables automatically have the parent table’s schema name prepended, which is perfect. For some reason, in the DBVisualizer object browser the indexes don’t show up in the correct schema; they’re showing up in a

[ANNOUNCE] Apache Phoenix 4.8.0 released

2016-08-12 Thread Ankit Singhal
Apache Phoenix enables OLTP and operational analytics for Hadoop through SQL support and integration with other projects in the ecosystem such as Spark, HBase, Pig, Flume, MapReduce and Hive. We're pleased to announce our 4.8.0 release, which includes:

- Local Index improvements[1]
- Integration

Re: Phoenix-queryserver-client jar is too fat in 4.8.0

2016-08-12 Thread Josh Elser
Hi Youngwoo,

The inclusion of hadoop-common is probably the source of most of the bloat. We really only needed the UserGroupInformation code, but Hadoop doesn't provide a proper artifact with just that dependency for us to use downstream. What dependency issues are you running into? There

Re: Tables can have schema name but indexes cannot

2016-08-12 Thread John Leach
Michael,

The object browser in DBVisualizer is driven by the JDBC driver. If you get any weird interaction, it usually means the JDBC implementation has an issue. We had issues at Splice Machine with our Foreign Keys returning incorrectly and then realized any deviation from the spec causes
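
One way to see exactly what the driver reports (and so what DBVisualizer's object browser is working from) is to call the JDBC metadata API directly. A minimal sketch, using the TRANS/EVENT names from the indexing thread; the connection URL is an assumption:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;

public class IndexMetadataCheck {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost:2181")) {
            DatabaseMetaData md = conn.getMetaData();
            // Ask the driver which schema it reports for each index on
            // TRANS.EVENT; this is the same call an object browser makes.
            try (ResultSet rs = md.getIndexInfo(null, "TRANS", "EVENT", false, false)) {
                while (rs.next()) {
                    System.out.printf("schema=%s index=%s%n",
                        rs.getString("TABLE_SCHEM"), rs.getString("INDEX_NAME"));
                }
            }
        }
    }
}
```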

Re: ERROR 201 (22000) illegal data error, expected length at least 4 but had ...

2016-08-12 Thread vikashtalanki
I don't have a code snippet for a composite key, but you can encode each field in the composite key and then do an array concatenation. http://stackoverflow.com/questions/80476/how-can-i-concatenate-two-arrays-in-java
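
A possible shape for that encode-and-concatenate approach, using Phoenix's own type codecs and HBase's Bytes helper. This is a sketch, not Phoenix's exact row-key writer: the column types are examples, and the zero-byte separator after variable-length fields reflects how 4.x lays out row keys, so verify against your schema:

```java
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.phoenix.query.QueryConstants;
import org.apache.phoenix.schema.types.PInteger;
import org.apache.phoenix.schema.types.PVarchar;

public class CompositeKeyDemo {
    public static void main(String[] args) {
        // Encode each PK column with its Phoenix type, then concatenate.
        byte[] col1 = PVarchar.INSTANCE.toBytes("ABC");   // variable-length
        byte[] col2 = PInteger.INSTANCE.toBytes(42);      // fixed-length (4 bytes)

        // Variable-length types that are not the last PK column need the
        // zero-byte separator Phoenix places between row key fields.
        byte[] rowKey = Bytes.add(col1,
                new byte[] { QueryConstants.SEPARATOR_BYTE }, col2);
        System.out.println(Bytes.toStringBinary(rowKey));
    }
}
```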

Re: Problems with Phoenix bulk loader when using row_timestamp feature

2016-08-12 Thread Ryan Templeton
FYI… The sample data that I loaded into the table was based on the current timestamp, with each additional row increasing that value by 1 minute, so values ranged from the current time up to 999,999 minutes into the future. It turns out this was a bug that prevents the scanner from reading timestamp values greater than