Re: ERROR 201 (22000) illegal data error, expected length at least 4 but had ...

2016-08-12 Thread Dong-iL, Kim
Oh. Thanks a lot.

Do you have a snippet for generating a composite key?

I’m sorry for my laziness. 


> On Aug 12, 2016, at 3:24 PM, vikashtalanki  wrote:
> 
> Hi Dong,
> 
> If you still want to insert through HBase, you can use the snippets below
> to encode values the way Phoenix expects:
> 
> import java.math.BigDecimal;
> import java.sql.Timestamp;
> import org.apache.phoenix.schema.types.*;
> 
> public static byte[] encodeDecimal(String value)
> {
>     BigDecimal bigDecValue = new BigDecimal(value);
>     return PDecimal.INSTANCE.toBytes(bigDecValue);
> }
> 
> public static byte[] encodeDate(java.util.Date utilDate)
> {
>     java.sql.Date sqlDate = new java.sql.Date(utilDate.getTime());
>     return PDate.INSTANCE.toBytes(sqlDate);
> }
> 
> public static byte[] encodeTimestamp(String utilTS)
> {
>     Timestamp sqlTS = Timestamp.valueOf(utilTS);
>     return PTimestamp.INSTANCE.toBytes(sqlTS);
> }
> 
> public static byte[] encodeLong(String value)
> {
>     // PLong expects a numeric object, not a String, so parse first.
>     long longValue = Long.parseLong(value);
>     return PLong.INSTANCE.toBytes(longValue);
> }
> 
> public static byte[] encodeChar(String value)
> {
>     return PChar.INSTANCE.toBytes(value);
> }
> 
> public static byte[] encodeVarchar(String value)
> {
>     return PVarchar.INSTANCE.toBytes(value);
> }
> 
> public static byte[] encodeShort(String value)
> {
>     short shortValue = Short.parseShort(value);
>     return PSmallint.INSTANCE.toBytes(shortValue);
> }
> 
> public static byte[] encodeInteger(String value)
> {
>     int intValue = Integer.parseInt(value);
>     return PInteger.INSTANCE.toBytes(intValue);
> }
> 
> 
> 



Re: monitoring status of CREATE INDEX operation

2016-08-12 Thread Nathan Davis
Thanks for the detailed info. I took the advice of using the ASYNC method.
The CREATE statement executes fine and I end up with an index table showing
in state BUILDING. When I kick off the MR job with `hbase
org.apache.phoenix.mapreduce.index.IndexTool --schema trans --data-table
event --index-table event_object_id_idx_b --output-path
EVENT_OBJECT_ID_IDX_B_HFILES` I get this odd error:

2016-08-12 14:29:40,073 ERROR [main] index.IndexTool:  An exception occured
> while performing the indexing job : java.lang.IllegalArgumentException:
>  trans.event_object_id_idx_b is not an index table for trans.event
> at org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:187)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:378)




My CREATE INDEX was as follows:

create index if not exists event_object_id_idx_b on trans.event (
> object_id
> ) ASYNC UPDATE_CACHE_FREQUENCY=6;



On Thu, Aug 11, 2016 at 9:40 PM, James Taylor 
wrote:

> Hi Nathan,
> If your table is large, I'd recommend creating your index asynchronously.
> To do that, you'd add the ASYNC keyword to the end of your CREATE INDEX
> call. In this case, the index will be built through Map Reduce in a more
> resilient manner (i.e. the client going up or down won't impact it and you
> have the regular retries of a MR job). On AWS, you'll need to manually
> start the MR job, but at SFDC we have a CRON job that'll start it for you
> automatically (this is open source too, so it'd be good to get that up and
> running in AWS as well). See https://phoenix.apache.org/secondary_indexing.html#Index_Population for details.
>
> If you don't run it asynchronously, then you'll need to increase the query
> timeout (i.e. phoenix.query.timeoutMs config property) to be larger than
> the time it'll take to build the index. If the client goes down before the
> CREATE INDEX call finished (or the query times out), then the index build
> will stop (and unfortunately will need to be run again).
>
> To monitor the index build, there are a few ways - if running through MR,
> then you can monitor the MR job in the standard way. If running
> synchronously, you can monitor the index table - you'll see new regions
> created as splits occur, or you could query the SYSTEM.STATS table (which
> gets populated as splits and compactions happen), or you could run a
> count(*) query directly against the index table (though that'll put more
> load on your system because it'll require a full table scan).
>
> HTH. Thanks,
> James
>
> On Thu, Aug 11, 2016 at 5:51 PM, Nathan Davis wrote:
>
>> Hi All,
>> I executed a CREATE INDEX against a fairly large table. And I received a
>> timeout error after a minute or two, which I understand is expected.
>> `!tables` in sqlline shows the index is still in BUILDING state after 2
>> hours, which may be accurate since it is a pretty large table and my
>> cluster is just a smallish EMR throwaway.
>>
>> My question is: Is there some way to verify that the index is in fact
>> still being built? Perhaps some HBase logs or the UI or some hbase shell
>> command? Unfortunately I am just as new to HBase as I am to Phoenix itself.
>>
>> Thanks,
>>  -nathan
>>
>
>


Re: Phoenix Ifnull

2016-08-12 Thread Michael McAllister
Seeing as we’re talking COALESCE and NULLs, depending on the version Ankit is 
running, this could also be the issue in PHOENIX-2994:-

https://issues.apache.org/jira/browse/PHOENIX-2994

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com | C: 512.423.7447 | 
skype: michael.mcallister.ha | webex: 
https://h.a/mikewebex

From: Lukáš Lalinský 
Reply-To: "user@phoenix.apache.org" 
Date: Friday, August 12, 2016 at 4:57 AM
To: "user@phoenix.apache.org" 
Subject: Re: Phoenix Ifnull

I think this is a problem with the WHERE clause. NULL values are neither equal 
nor not-equal to any other values. You might need to add "OR API_KEY IS NULL" 
to the WHERE clause.
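
For example, combining that with the COALESCE form from the earlier messages
quoted below (a sketch only; the table and column names follow this thread and
may differ from your actual schema):

select coalesce(API_KEY, 'nullvalue') from "CheckDump"
where API_KEY = 'abc' or API_KEY is null;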

Lukas


On Fri, Aug 12, 2016 at 9:51 AM, ankit beohar wrote:
Hi Lukáš/James,

I have a table in which only one rowkey is available, and for my null-check 
case I am running the queries below; the null check is not working:-

[Inline image 1: screenshot of the queries, not preserved in the archive]

Please take a look and let me know if I am doing the right thing or missed something.

Best Regards,
ANKIT BEOHAR


On Thu, Aug 11, 2016 at 9:18 PM, James Taylor wrote:
Lukáš is correct, but if the CASE WHEN variant of the same didn't work, please 
file a JIRA (ideally with a unit test that repros the problem).

Thanks,
James

On Thu, Aug 11, 2016 at 12:20 AM, Lukáš Lalinský wrote:
On Thu, Aug 11, 2016 at 9:00 AM, ankit beohar wrote:
I want to implement IFNULL in Apache Phoenix, like MySQL's.

select ifnull(rowkey,myvalue),ifnull(col1,mycolvalue) from table where 
rowkey='abc';

I tried the query below but it did not work:-

select case when APP_KEY is null then 'nullvalue' else APP_KEY end from 
"CheckDump";

The standard SQL function for this is coalesce(col, 'nullvalue').

https://phoenix.apache.org/language/functions.html#coalesce

Lukas





Re: Tables can have schema name but indexes cannot

2016-08-12 Thread James Taylor
Hi Michael,
SQL dictates that an index must be in the same schema as the table it's
indexing.
Thanks,
James

On Fri, Aug 12, 2016 at 8:50 AM, Michael McAllister <
mmcallis...@homeaway.com> wrote:

> Hi
>
>
>
> Is there any reason we can specify the schema name for a table, but not an
> index? I note that the grammar online makes it clear this isn’t part of the
> syntax, but it would be nice if we could do it. To illustrate what I’d
> like:-
>
>
>
> -- Create the table
>
> CREATE TABLE IF NOT EXISTS MMCALLISTER.TEST
> ( c1 INTEGER NOT NULL
> , c2 VARCHAR NULL
> , c3 VARCHAR NULL
> , CONSTRAINT TEST_PK PRIMARY KEY (c1)
> );
>
> -- This does not work
>
> CREATE INDEX IF NOT EXISTS MMCALLISTER.TEST_IX01
> ON MMCALLISTER.TEST
> ( c2 )
> INCLUDE ( c3 );
>
> -- This works
>
> CREATE INDEX IF NOT EXISTS TEST_IX01
> ON MMCALLISTER.TEST
> ( c2 )
> INCLUDE ( c3 );
>
>
>
> Michael McAllister
>
> Staff Data Warehouse Engineer | Decision Systems
>
> mmcallis...@homeaway.com | C: 512.423.7447 | skype: michael.mcallister.ha
>  | webex: https://h.a/mikewebex
>
>


Re: monitoring status of CREATE INDEX operation

2016-08-12 Thread James Taylor
In your IndexTool invocation, try using all caps for your table and index
name. Phoenix normalizes names by upper casing them (unless they're in
double quotes).
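
For example, assuming the table and index were created with unquoted (and
therefore upper-cased) names, the invocation quoted below would become
something like this (only the casing changes):

hbase org.apache.phoenix.mapreduce.index.IndexTool --schema TRANS \
  --data-table EVENT --index-table EVENT_OBJECT_ID_IDX_B \
  --output-path EVENT_OBJECT_ID_IDX_B_HFILES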

One other unrelated question: did you declare your event table with
IMMUTABLE_ROWS=true (assuming it's a write-once table)? If not, you can use
the ALTER TABLE trans.event SET IMMUTABLE_ROWS=true command to change it.
This will give you some performance benefit.

Thanks,
James

On Fri, Aug 12, 2016 at 7:39 AM, Nathan Davis 
wrote:

> Thanks for the detailed info. I took the advice of using the ASYNC method.
> The CREATE statement executes fine and I end up with an index table showing
> in state BUILDING. When I kick off the MR job with `hbase
> org.apache.phoenix.mapreduce.index.IndexTool --schema trans --data-table
> event --index-table event_object_id_idx_b --output-path
> EVENT_OBJECT_ID_IDX_B_HFILES` I get this odd error:
>
> 2016-08-12 14:29:40,073 ERROR [main] index.IndexTool:  An exception
>> occured while performing the indexing job : 
>> java.lang.IllegalArgumentException:
>>  trans.event_object_id_idx_b is not an index table for trans.event
>> at org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:187)
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>> at org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:378)
>
>
>
>
> My CREATE INDEX was as follows:
>
> create index if not exists event_object_id_idx_b on trans.event (
>> object_id
>> ) ASYNC UPDATE_CACHE_FREQUENCY=6;
>
>
>
> On Thu, Aug 11, 2016 at 9:40 PM, James Taylor 
> wrote:
>
>> Hi Nathan,
>> If your table is large, I'd recommend creating your index asynchronously.
>> To do that, you'd add the ASYNC keyword to the end of your CREATE INDEX
>> call. In this case, the index will be built through Map Reduce in a more
>> resilient manner (i.e. the client going up or down won't impact it and you
>> have the regular retries of a MR job). On AWS, you'll need to manually
>> start the MR job, but at SFDC we have a CRON job that'll start it for you
>> automatically (this is open source too, so it'd be good to get that up and
>> running in AWS as well). See https://phoenix.apache.org/secondary_indexing.html#Index_Population for details.
>>
>> If you don't run it asynchronously, then you'll need to increase the
>> query timeout (i.e. phoenix.query.timeoutMs config property) to be larger
>> than the time it'll take to build the index. If the client goes down before
>> the CREATE INDEX call finished (or the query times out), then the index
>> build will stop (and unfortunately will need to be run again).
>>
>> To monitor the index build, there are a few ways - if running through MR,
>> then you can monitor the MR job in the standard way. If running
>> synchronously, you can monitor the index table - you'll see new regions
>> created as splits occur, or you could query the SYSTEM.STATS table (which
>> gets populated as splits and compactions happen), or you could run a
>> count(*) query directly against the index table (though that'll put more
>> load on your system because it'll require a full table scan).
>>
>> HTH. Thanks,
>> James
>>
>> On Thu, Aug 11, 2016 at 5:51 PM, Nathan Davis <
>> nathan.da...@salesforce.com> wrote:
>>
>>> Hi All,
>>> I executed a CREATE INDEX against a fairly large table. And I received a
>>> timeout error after a minute or two, which I understand is expected.
>>> `!tables` in sqlline shows the index is still in BUILDING state after 2
>>> hours, which may be accurate since it is a pretty large table and my
>>> cluster is just a smallish EMR throwaway.
>>>
>>> My question is: Is there some way to verify that the index is in fact
>>> still being built? Perhaps some HBase logs or the UI or some hbase shell
>>> command? Unfortunately I am just as new to HBase as I am to Phoenix itself.
>>>
>>> Thanks,
>>>  -nathan
>>>
>>
>>
>


Re: monitoring status of CREATE INDEX operation

2016-08-12 Thread Nathan Davis
Thanks James, all CAPS did the trick!

Yes, the event table is already IMMUTABLE_ROWS=true.

Thanks again,
Nathan


On Fri, Aug 12, 2016 at 10:59 AM, James Taylor 
wrote:

> In your IndexTool invocation, try using all caps for your table and index
> name. Phoenix normalizes names by upper casing them (unless they're in
> double quotes).
>
> One other unrelated question: did you declare your event table with
> IMMUTABLE_ROWS=true (assuming it's a write-once table)? If not, you can use
> the ALTER TABLE trans.event SET IMMUTABLE_ROWS=true command to change it.
> This will give you some performance benefit.
>
> Thanks,
> James
>
> On Fri, Aug 12, 2016 at 7:39 AM, Nathan Davis wrote:
>
>> Thanks for the detailed info. I took the advice of using the ASYNC
>> method. The CREATE statement executes fine and I end up with an index table
>> showing in state BUILDING. When I kick off the MR job with `hbase
>> org.apache.phoenix.mapreduce.index.IndexTool --schema trans --data-table
>> event --index-table event_object_id_idx_b --output-path
>> EVENT_OBJECT_ID_IDX_B_HFILES` I get this odd error:
>>
>> 2016-08-12 14:29:40,073 ERROR [main] index.IndexTool:  An exception
>>> occured while performing the indexing job : 
>>> java.lang.IllegalArgumentException:
>>>  trans.event_object_id_idx_b is not an index table for trans.event
>>> at org.apache.phoenix.mapreduce.index.IndexTool.run(IndexTool.java:187)
>>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>>> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
>>> at org.apache.phoenix.mapreduce.index.IndexTool.main(IndexTool.java:378)
>>>
>>
>>
>>
>>
>> My CREATE INDEX was as follows:
>>
>> create index if not exists event_object_id_idx_b on trans.event (
>>> object_id
>>> ) ASYNC UPDATE_CACHE_FREQUENCY=6;
>>
>>
>>
>> On Thu, Aug 11, 2016 at 9:40 PM, James Taylor 
>> wrote:
>>
>>> Hi Nathan,
>>> If your table is large, I'd recommend creating your index
>>> asynchronously. To do that, you'd add the ASYNC keyword to the end of your
>>> CREATE INDEX call. In this case, the index will be built through Map Reduce
>>> in a more resilient manner (i.e. the client going up or down won't impact
>>> it and you have the regular retries of a MR job). On AWS, you'll need to
>>> manually start the MR job, but at SFDC we have a CRON job that'll start it
>>> for you automatically (this is open source too, so it'd be good to get that
>>> up and running in AWS as well). See https://phoenix.apache.org/secondary_indexing.html#Index_Population for details.
>>>
>>> If you don't run it asynchronously, then you'll need to increase the
>>> query timeout (i.e. phoenix.query.timeoutMs config property) to be larger
>>> than the time it'll take to build the index. If the client goes down before
>>> the CREATE INDEX call finished (or the query times out), then the index
>>> build will stop (and unfortunately will need to be run again).
>>>
>>> To monitor the index build, there are a few ways - if running through
>>> MR, then you can monitor the MR job in the standard way. If running
>>> synchronously, you can monitor the index table - you'll see new regions
>>> created as splits occur, or you could query the SYSTEM.STATS table (which
>>> gets populated as splits and compactions happen), or you could run a
>>> count(*) query directly against the index table (though that'll put more
>>> load on your system because it'll require a full table scan).
>>>
>>> HTH. Thanks,
>>> James
>>>
>>> On Thu, Aug 11, 2016 at 5:51 PM, Nathan Davis <
>>> nathan.da...@salesforce.com> wrote:
>>>
 Hi All,
 I executed a CREATE INDEX against a fairly large table. And I received
 a timeout error after a minute or two, which I understand is expected.
 `!tables` in sqlline shows the index is still in BUILDING state after 2
 hours, which may be accurate since it is a pretty large table and my
 cluster is just a smallish EMR throwaway.

 My question is: Is there some way to verify that the index is in fact
 still being built? Perhaps some HBase logs or the UI or some hbase shell
 command? Unfortunately I am just as new to HBase as I am to Phoenix itself.

 Thanks,
  -nathan

>>>
>>>
>>
>


Re: Tables can have schema name but indexes cannot

2016-08-12 Thread Michael McAllister
James

Thanks – looks like I was misled by DBVisualizer. The underlying HBase index 
tables automatically have the parent table’s schema name prepended, which is 
perfect. For some reason, in the DBVisualizer object browser the indexes don’t 
show up in the correct schema; they’re showing up in a schema named (null).

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com | C: 512.423.7447 | 
skype: michael.mcallister.ha | webex: 
https://h.a/mikewebex

From: James Taylor 
Reply-To: "user@phoenix.apache.org" 
Date: Friday, August 12, 2016 at 10:56 AM
To: user 
Subject: Re: Tables can have schema name but indexes cannot

Hi Michael,
SQL dictates that an index must be in the same schema as the table it's 
indexing.
Thanks,
James

On Fri, Aug 12, 2016 at 8:50 AM, Michael McAllister wrote:
Hi

Is there any reason we can specify the schema name for a table, but not an 
index? I note that the grammar online makes it clear this isn’t part of the 
syntax, but it would be nice if we could do it. To illustrate what I’d like:-

-- Create the table

CREATE TABLE IF NOT EXISTS MMCALLISTER.TEST
( c1 INTEGER NOT NULL
, c2 VARCHAR NULL
, c3 VARCHAR NULL
, CONSTRAINT TEST_PK PRIMARY KEY (c1)
);

-- This does not work

CREATE INDEX IF NOT EXISTS MMCALLISTER.TEST_IX01
ON MMCALLISTER.TEST
( c2 )
INCLUDE ( c3 );

-- This works

CREATE INDEX IF NOT EXISTS TEST_IX01
ON MMCALLISTER.TEST
( c2 )
INCLUDE ( c3 );

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com | C: 
512.423.7447 | skype: 
michael.mcallister.ha | webex: 
https://h.a/mikewebex



[ANNOUNCE] Apache Phoenix 4.8.0 released

2016-08-12 Thread Ankit Singhal
Apache Phoenix enables OLTP and operational analytics for Hadoop through
SQL support and integration with other projects in the ecosystem such as
Spark, HBase, Pig, Flume, MapReduce and Hive.

We're pleased to announce our 4.8.0 release which includes:
- Local Index improvements[1]
- Integration with Hive[2]
- Namespace mapping support[3]
- VIEW enhancements[4]
- Offset support for paged queries[5] (a quick sketch follows this list)
- 130+ Bugs resolved[6]
- HBase v1.2 is also supported (with continued support for v1.1, v1.0 &
v0.98)
- Many performance enhancements (related to StatsCache, DISTINCT, serial
queries with stats, etc.)[6]
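
As a quick illustration of the new offset support (a sketch only; the table
and column names are hypothetical):

  -- rows 21 through 30 of a paged query
  SELECT id, val FROM my_table ORDER BY id LIMIT 10 OFFSET 20;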

The release is available in source or binary form here [7].

Release artifacts are signed with the following key:
https://people.apache.org/keys/committer/ankit.asc

Thanks,
The Apache Phoenix Team

[1] https://issues.apache.org/jira/browse/PHOENIX-1734
[2] https://issues.apache.org/jira/browse/PHOENIX-2743
[3] https://issues.apache.org/jira/browse/PHOENIX-1311
[4] https://issues.apache.org/jira/browse/PHOENIX-1508
[5] https://issues.apache.org/jira/browse/PHOENIX-2722
[6] https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12334393=12315120
[7] https://phoenix.apache.org/download.html


Re: Phoenix-queryserver-client jar is too fat in 4.8.0

2016-08-12 Thread Josh Elser

Hi Youngwoo,

The inclusion of hadoop-common is probably the source of most of the 
bloat. We really only needed the UserGroupInformation code, but Hadoop 
doesn't provide a proper artifact with just that dependency for us to 
use downstream.


What dependency issues are you running into? There was an effort in 
4.8.0 to relocate dependencies with the shade-plugin. If you're seeing 
conflicts, we would be very interested in fixing these.


Over time, you are definitely right that we can try to prune down 
unnecessary dependencies further.


- Josh

김영우 (YoungWoo Kim) wrote:

Hi,

It looks like the thin client jar in 4.8.0 (rc2) is fat. I'm running into
problems making my applications work with the 4.8.0 thin driver. There
are many conflicts with my existing dependencies. 28 MB for a thin client
jar seems strange to me.

Am I missing something about the new thin client?

Youngwoo



Re: Tables can have schema name but indexes cannot

2016-08-12 Thread John Leach
Michael,

The object browser in DBVisualizer is driven by the JDBC driver.  If you get 
any weird interaction, it usually means the JDBC implementation has an issue.  
We had issues at Splice Machine with our Foreign Keys returning incorrectly and 
then realized any deviation from the spec causes significant issues on the 
application side.

Regards,
John

> On Aug 12, 2016, at 11:43 AM, Michael McAllister wrote:
> 
> James
>  
> Thanks – looks like I was misled by DBVisualizer. The underlying HBase index 
> tables automatically have the parent table’s schema name prepended, which is 
> perfect. For some reason, in the DBVisualizer object browser the indexes don’t 
> show up in the correct schema; they’re showing up in a schema named (null).
>  
> Michael McAllister
> Staff Data Warehouse Engineer | Decision Systems
> mmcallis...@homeaway.com  | C: 512.423.7447 
> | skype: michael.mcallister.ha  | webex: 
> https://h.a/mikewebex 
> 
>  
> From: James Taylor
> Reply-To: "user@phoenix.apache.org"
> Date: Friday, August 12, 2016 at 10:56 AM
> To: user
> Subject: Re: Tables can have schema name but indexes cannot
>  
> Hi Michael, 
> SQL dictates that an index must be in the same schema as the table it's 
> indexing.
> Thanks,
> James
>  
> On Fri, Aug 12, 2016 at 8:50 AM, Michael McAllister wrote:
> Hi
>  
> Is there any reason we can specify the schema name for a table, but not an 
> index? I note that the grammar online makes it clear this isn’t part of the 
> syntax, but it would be nice if we could do it. To illustrate what I’d like:-
>  
> -- Create the table
>  
> CREATE TABLE IF NOT EXISTS MMCALLISTER.TEST
> ( c1 INTEGER NOT NULL
> , c2 VARCHAR NULL
> , c3 VARCHAR NULL
> , CONSTRAINT TEST_PK PRIMARY KEY (c1)
> );
>  
> -- This does not work
>  
> CREATE INDEX IF NOT EXISTS MMCALLISTER.TEST_IX01
> ON MMCALLISTER.TEST
> ( c2 )
> INCLUDE ( c3 );
>  
> -- This works
>  
> CREATE INDEX IF NOT EXISTS TEST_IX01
> ON MMCALLISTER.TEST
> ( c2 )
> INCLUDE ( c3 );
>  
> Michael McAllister
> Staff Data Warehouse Engineer | Decision Systems
> mmcallis...@homeaway.com | C: 512.423.7447 | skype: michael.mcallister.ha | webex: https://h.a/mikewebex
> 
> 
>  



Re: ERROR 201 (22000) illegal data error, expected length at least 4 but had ...

2016-08-12 Thread vikashtalanki
I don't have a code snippet for a composite key, but you can encode each field
of the composite key and then concatenate the arrays (a sketch follows the link below).
http://stackoverflow.com/questions/80476/how-can-i-concatenate-two-arrays-in-java
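
For illustration, a minimal sketch of that approach (assuming the encode
helpers from earlier in this thread are in scope; the column family "0" is
Phoenix's default, and the qualifier name is made up). One caveat: Phoenix
inserts a zero-byte separator after variable-length PK columns (VARCHAR,
DECIMAL) that are not the last column, so plain concatenation as below is only
correct for fixed-width leading columns with at most one trailing
variable-length column:

import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Concatenate the already-encoded PK fields into a single row key.
public static byte[] encodeCompositeKey(byte[]... encodedFields) {
    int length = 0;
    for (byte[] field : encodedFields) {
        length += field.length;
    }
    byte[] rowKey = new byte[length];
    int offset = 0;
    for (byte[] field : encodedFields) {
        System.arraycopy(field, 0, rowKey, offset, field.length);
        offset += field.length;
    }
    return rowKey;
}

// Hypothetical usage for a composite PK of (INTEGER, VARCHAR):
byte[] rowKey = encodeCompositeKey(encodeInteger("42"), encodeVarchar("sensor-1"));
Put put = new Put(rowKey);
put.addColumn(Bytes.toBytes("0"), Bytes.toBytes("VAL"), encodeDecimal("3.14"));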





Re: Problems with Phoenix bulk loader when using row_timestamp feature

2016-08-12 Thread Ryan Templeton
FYI…

The sample data that I loaded into the table was based on the current timestamp, 
with each additional row increasing that value by one minute, so the values ranged 
from the current time up to 999,999 minutes into the future. It turns out this hit 
a bug that prevents the scanner from reading timestamp values greater than the 
current time. More details here: https://issues.apache.org/jira/browse/PHOENIX-3176



From: default
Reply-To: "user@phoenix.apache.org"
Date: Wednesday, August 10, 2016 at 1:12 PM
To: "user@phoenix.apache.org"
Subject: Re: Problems with Phoenix bulk loader when using row_timestamp feature


0: jdbc:phoenix:localhost:2181> explain select count(*) from historian.data;

+--------------------------------------------------------------+
|                             PLAN                             |
+--------------------------------------------------------------+
| CLIENT 1-CHUNK PARALLEL 1-WAY FULL SCAN OVER HISTORIAN.DATA  |
|     ROW TIMESTAMP FILTER [0, 1470852712807)                  |
|     SERVER FILTER BY FIRST KEY ONLY                          |
|     SERVER AGGREGATE INTO SINGLE ROW                         |
+--------------------------------------------------------------+

4 rows selected (0.071 seconds)

From: Samarth Jain
Reply-To: "user@phoenix.apache.org"
Date: Wednesday, August 10, 2016 at 12:05 AM
To: "user@phoenix.apache.org"
Subject: Re: Problems with Phoenix bulk loader when using row_timestamp feature

Ryan,

Can you tell us what the explain plan says for the select count(*) query?

- Samarth


On Tue, Aug 9, 2016 at 12:58 PM, Ryan Templeton wrote:
I am working on a project that will be consuming sensor data. The “fact” table 
is defined as:

CREATE TABLE historian.data (
assetid unsigned_int not null,
metricid unsigned_int not null,
ts timestamp not null,
val double,
CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts))
IMMUTABLE_ROWS=true;

I generated a one-million-row CSV sample dataset and used the Phoenix bulk loader 
to load this data. The tool reports that all 1,000,000 rows were loaded 
successfully, which I can confirm via sqlline.

I then dropped and recreated the table to take advantage of the row_timestamp 
feature:

drop table historian.data;
CREATE TABLE historian.data (
assetid unsigned_int not null,
metricid unsigned_int not null,
ts timestamp not null,
val double,
CONSTRAINT pk PRIMARY KEY (assetid, metricid, ts row_timestamp))
IMMUTABLE_ROWS=true;

I reran the bulk loader utility, which says it completed successfully:

[rtempleton@M1 phoenix-client]$ bin/psql.py localhost:2181 -t HISTORIAN.DATA 
/tmp/data.csv

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.3.0-180/phoenix/phoenix-4.4.0.2.4.3.0-180-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in 
[jar:file:/usr/hdp/2.4.3.0-180/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

16/08/08 20:34:43 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable

16/08/08 20:34:44 WARN shortcircuit.DomainSocketFactory: The short-circuit 
local reads feature cannot be used because libhadoop cannot be loaded.

csv columns from database.

CSV Upsert complete. 100 rows upserted

Time: 65.985 sec(s)

But when I run “select count(*) from historian.data” I see that only the first 
572 rows appear in the table. These rows correlate to the first 572 rows of 
the input file.

0: jdbc:phoenix:localhost:2181> select count(*) from historian.data;
+-----------+
| COUNT(1)  |
+-----------+
| 572       |
+-----------+
1 row selected (4.541 seconds)

0: jdbc:phoenix:localhost:2181> select min(ts), max(ts) from historian.data;
+--------------------------+--------------------------+
|         MIN(TS)          |         MAX(TS)          |
+--------------------------+--------------------------+
| 2016-08-08 11:05:15.000  | 2016-08-08 20:36:15.000  |
+--------------------------+--------------------------+



Any ideas?


Thanks,
Ryan