Re: Decoding Phoenix DOUBLE columns

2016-10-17 Thread Michael McAllister
Correction - the function is meant to DECODE the double columns, not encode 
them.

Sent from my iPhone




Decoding Phoenix DOUBLE columns

2016-10-17 Thread Michael McAllister
Hi

I’m trying to access a Phoenix table from Datameer. Datameer can read HBase, 
but doesn’t support the Phoenix JDBC driver, as that driver implements only 
part of the JDBC standard.

To get around this I’m using the Datameer SDK to write a function that will 
decode columns of type DOUBLE. Unfortunately the decoding isn’t going well. Let 
me give you a little test case. All of the following is done with Phoenix 4.4 
on HDP 2.3.4.

Here’s the table DDL:-

create table MCALLISTERM.TEST
( FIRSTNAME VARCHAR NOT NULL
, LASTNAME VARCHAR NOT NULL
, CITY VARCHAR NULL
, RATE DOUBLE NULL
, CONSTRAINT TEST_PK PRIMARY KEY (FIRSTNAME, LASTNAME)
);

Here’s a row of test data:-

UPSERT INTO MCALLISTERM.TEST( FIRSTNAME, LASTNAME, CITY, RATE )
VALUES ('Richard', 'Jones', 'San Diego', 137.47 );

When you view this table in Datameer via the HBase driver, it shows up as 
follows:-

[inline screenshot: the test table as rendered by Datameer]

The rowkey is fairly easily separated by tokenizing on the null byte 
character. The rate column shows up as it does because of the encoding Phoenix 
has done.

Here is the guts of the conversion function I’m writing …

package com.datameer.sdk;

import datameer.dap.sdk.function.BaseSimpleFunctionType;
import datameer.dap.sdk.function.FieldType;
import datameer.dap.sdk.function.FunctionGroup;
import datameer.dap.sdk.function.argument.ArgumentType;
import datameer.dap.sdk.schema.ValueTypeId;
import org.apache.phoenix.schema.SortOrder;
import org.apache.phoenix.schema.types.PDouble;

public class ApachePhoenixDoubleDecode extends BaseSimpleFunctionType {

    public ApachePhoenixDoubleDecode() {
        super(FunctionGroup.ENCODING
                , "ApachePhoenixDoubleDecode"
                , "Converts Apache Phoenix double encoded bytestream into float"
                , new ArgumentType[]{ new ArgumentType("Input", ValueTypeId.STRING) }
        );
    }

    @Override
    protected Object compute(Object... arg0) {
        // The encoded cell value arrives as a String; recover its bytes.
        byte[] bytes = ((String) arg0[0]).getBytes();
        int offset = 0; // guessed - see below

        // SortOrder.ASC is also a guess - see below.
        return PDouble.INSTANCE.getCodec().decodeDouble(bytes, offset, SortOrder.ASC);
    }

    @Override
    public FieldType computeReturnType(FieldType... arg0) {
        return FieldType.FLOAT;
    }
}

As you probably suspect, I’m having to guess at the offset and the sortorder.
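
For reference, a minimal round-trip sketch of the codec in isolation (hedged: it 
assumes Phoenix 4.4's PDouble API and the default ASC sort order; the class name 
is illustrative, not part of the plugin):

import org.apache.phoenix.schema.SortOrder;
import org.apache.phoenix.schema.types.PDouble;

public class DoubleCodecCheck {
    public static void main(String[] args) {
        // Encode 137.47 the way Phoenix stores a DOUBLE cell: 8 bytes,
        // with bits flipped so values sort correctly as unsigned bytes.
        byte[] encoded = PDouble.INSTANCE.toBytes(137.47d);

        // Decoding from the original bytes at offset 0 with the default
        // (ASC) sort order recovers the value.
        double decoded = PDouble.INSTANCE.getCodec()
                .decodeDouble(encoded, 0, SortOrder.getDefault());
        System.out.println(decoded); // 137.47

        // A String round trip does not generally preserve these bytes:
        // under a UTF-8 default charset, bytes above 0x7F are replaced,
        // so decodeDouble then sees different input.
        byte[] mangled = new String(encoded).getBytes();
        System.out.println(java.util.Arrays.equals(encoded, mangled)); // typically false
    }
}

One plausible culprit, then, is that the value reaching compute() as a String no 
longer contains the original eight encoded bytes.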

Here’s what shows up when I use this function in Datameer on the single row in 
the table, which should be returning 137.47:-

[inline screenshot: the function’s output in Datameer]

Any pointers on where I’ve gone wrong, and how to correct my plugin?

Regards

Mike



RE: can I prevent rounding of a/b when a and b are integers

2016-09-22 Thread Michael McAllister
Actually, in Oracle, at least the last time I was working on it, if you divided 
an int column by an int column, you got an int result which, if I remember 
correctly, was truncated.

For people who write SQL this is a fairly well-known issue, and one we confront 
and solve as proposed below: either by multiplying one of the operands by 1.0, 
or by casting one of them to a decimal type.
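
A hedged JDBC sketch of both workarounds (table T and columns A and B are 
hypothetical; Phoenix is queried through plain JDBC):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IntegerDivisionExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Both operands INTEGER: the quotient is truncated (1/3 -> 0).
            stmt.executeQuery("SELECT A / B FROM T");
            // Widen one operand by multiplying by 1.0 ...
            stmt.executeQuery("SELECT 1.0 * A / B FROM T");
            // ... or with an explicit cast.
            stmt.executeQuery("SELECT CAST(A AS DECIMAL) / B FROM T");
        }
    }
}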

Additionally, don’t lose sight of the warning James Heather raised in one of 
the earlier responses. Make sure you restrict what your users do. Having a 
table dropped could end up being very expensive.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Bulvik, Noam [mailto:noam.bul...@teoco.com]
Sent: Thursday, September 22, 2016 5:25 AM
To: user@phoenix.apache.org
Subject: RE: can I prevent rounding of a/b when a and b are integers

I checked both Oracle and Impala, and in both 1/3 is 0.333 and not 0.
I think the SQL writer is not a programmer and does not care about data types; 
he just wants to get the correct results. BTW – it is not only constants: even 
an expression like sum(case when <condition> then 1 else 0 end) / count(*) 
will give wrong results unless you start implementing workarounds

From: John Hancock [mailto:jhancock1...@gmail.com]
Sent: Thursday, September 22, 2016 12:56 PM
To: user@phoenix.apache.org<mailto:user@phoenix.apache.org>
Subject: Re: can I prevent rounding of a/b when a and b are integers

I think it is fairly standard in programming languages for / to mean integer 
division when the operands used are integers.  Therefore 1/3=0 is not a 
surprising result to me; other programming languages I have worked in give 
1/3=0.  However if one of the operands to / is a decimal, the result is also 
decimal, so 1.0/3 = 0.333... I don't think the behavior of / is incorrect as is.

On Thu, Sep 22, 2016 at 4:45 AM, Heather, James (ELS-LON) 
<james.heat...@elsevier.com<mailto:james.heat...@elsevier.com>> wrote:


On Thu, 2016-09-22 at 05:39 +0000, Bulvik, Noam wrote:

We have an app that lets users write their own SQL

Um, do they write DROP TABLE statements in there?





Re: Using COUNT() with columns that don't use COUNT() when the table is join fails

2016-09-19 Thread Michael McAllister
This is really an ANSI SQL question. If you use an aggregate function, then you 
need to specify which columns to group by. Any column not referenced in an 
aggregate function needs to be in the GROUP BY clause, as sketched below.
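
As an illustration, a hedged JDBC sketch (the PERSON table and its columns are 
hypothetical):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class GroupByExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // Rejected: NAME is neither aggregated nor listed in GROUP BY.
            //   SELECT CITY, NAME, COUNT(*) FROM PERSON GROUP BY CITY
            // Accepted: every non-aggregated column appears in GROUP BY.
            ResultSet rs = stmt.executeQuery(
                    "SELECT CITY, NAME, COUNT(*) FROM PERSON GROUP BY CITY, NAME");
            while (rs.next()) {
                System.out.println(rs.getString(1) + " " + rs.getString(2)
                        + " " + rs.getLong(3));
            }
        }
    }
}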

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Cheyenne Forbes <cheyenne.osanu.for...@gmail.com>
Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Date: Monday, September 19, 2016 at 10:50 AM
To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Subject: Re: Using COUNT() with columns that don't use COUNT() when the table 
is join fails

I was wondering because it seems extra wordy


Re: Enabling region replication on Phoenix metadata tables

2016-09-08 Thread Michael McAllister
James

I’m not talking about replication between different clusters. Instead I’m 
talking about region replication within the same cluster for High Availability 
purposes. An overview is here:-

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_hadoop-ha/content/ha-hbase-intro.html
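
For concreteness, a hedged sketch of what enabling this looks like through the 
HBase 1.x Java admin API (the table name is an example, and note James Taylor's 
caveats about SYSTEM.CATALOG elsewhere in this thread; the HBase shell 
equivalent is an alter setting REGION_REPLICATION):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;

public class EnableRegionReplicas {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName tn = TableName.valueOf("SYSTEM.CATALOG"); // example table
            HTableDescriptor htd = admin.getTableDescriptor(tn);
            htd.setRegionReplication(2); // one extra read replica per region
            admin.disableTable(tn);      // conservative: alter while offline
            admin.modifyTable(tn, htd);
            admin.enableTable(tn);
        }
    }
}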

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: James Taylor <jamestay...@apache.org>
Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Date: Thursday, September 8, 2016 at 3:04 PM
To: user <user@phoenix.apache.org>
Subject: Re: Enabling region replication on Phoenix metadata tables

Take a look at this[1] thread for a discussion on replication of system tables. 
You can replicate the SYSTEM.CATALOG table, but you have to be very careful. 
Make sure to disable and discard replicated data for SYSTEM.CATALOG while any 
Phoenix upgrade is in progress (i.e. first connection after upgrading to a new 
minor release) or any physical tables are being created through DDL statements 
(which presumably would be run on both primary and secondary cluster). 
Otherwise, you'll run into race conditions between the upgrade code and 
replication occurring.


[1] http://search-hadoop.com/m/9UY0h2JDL91232UBQ1=Replication+

On Thu, Sep 8, 2016 at 12:03 PM, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
You did alter specific tables to enable region replication on them, or create 
new tables with region replication, correct?

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 
512.423.7447 | skype: 
michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: "Vommi, Nithin" <nithin.vo...@intel.com<mailto:nithin.vo...@intel.com>>
Reply-To: "user@phoenix.apache.org<mailto:user@phoenix.apache.org>" 
<user@phoenix.apache.org<mailto:user@phoenix.apache.org>>
Date: Thursday, September 8, 2016 at 1:47 PM

To: "user@phoenix.apache.org<mailto:user@phoenix.apache.org>" 
<user@phoenix.apache.org<mailto:user@phoenix.apache.org>>
Subject: RE: Enabling region replication on Phoenix metadata tables

I meant, the region servers were never active and did not show up in the UI.

From: Michael McAllister 
[mailto:mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>]
Sent: Thursday, September 8, 2016 11:46 AM
To: user@phoenix.apache.org<mailto:user@phoenix.apache.org>
Subject: Re: Enabling region replication on Phoenix metadata tables

Nithin

>
when I tried enabling region replication, I was not able to bring the HBase 
cluster.
>

I’m not sure what you mean here. Specifically referring to “bring the HBase 
cluster”.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 
512.423.7447 | skype: 
michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: "Vommi, Nithin" <nithin.vo...@intel.com<mailto:nithin.vo...@intel.com>>
Reply-To: "us

Re: Enabling region replication on Phoenix metadata tables

2016-09-08 Thread Michael McAllister
Nithin

>
when I tried enabling region replication, I was not able to bring the HBase 
cluster.
>

I’m not sure what you mean here. Specifically referring to “bring the HBase 
cluster”.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: "Vommi, Nithin" <nithin.vo...@intel.com>
Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Date: Thursday, September 8, 2016 at 1:42 PM
To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Subject: RE: Enabling region replication on Phoenix metadata tables

I agree with Michael on enabling region replication for the SYSTEM.CATALOG 
table. But when I tried enabling region replication, I was not able to bring 
the HBase cluster.  I am using HBase 1.2.1 and Phoenix 4.7 (on Amazon EMR 
Platform)

Regards
Nithin

From: Michael McAllister [mailto:mmcallis...@homeaway.com]
Sent: Thursday, September 8, 2016 11:39 AM
To: user@phoenix.apache.org
Subject: Enabling region replication on Phoenix metadata tables

Is there any reason why it would be a bad idea to enable region replication on 
the Phoenix metadata tables? Specifically, SYSTEM.CATALOG et al.

From everything I’m reading it seems like it would be a good idea. Those tables 
are a single point of failure for Phoenix. If they aren’t up then no Phoenix 
connection can be made.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex


Enabling region replication on Phoenix metadata tables

2016-09-08 Thread Michael McAllister
Is there any reason why it would be a bad idea to enable region replication on 
the Phoenix metadata tables? Specifically, SYSTEM.CATALOG et al.

From everything I’m reading it seems like it would be a good idea. Those tables 
are a single point of failure for Phoenix. If they aren’t up then no Phoenix 
connection can be made.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex


Re: Extracting column values from Phoenix composite primary key

2016-08-30 Thread Michael McAllister
Not yet, sorry.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Anil <anilk...@gmail.com>
Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Date: Monday, August 29, 2016 at 9:48 PM
To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Subject: Re: Extracting column values from Phoenix composite primary key

HI Michael and All ,

Did you get a chance to look into this? Thanks.


Thanks.


On 26 August 2016 at 07:38, Anil 
<anilk...@gmail.com<mailto:anilk...@gmail.com>> wrote:
HI Michael,

Following are the table create and upsert queries -

CREATE TABLE SAMPLE(TYPE VARCHAR NOT NULL, SOURCE VARCHAR NOT NULL, LABEL 
VARCHAR NOT NULL, DIRECTION VARCHAR(10) NOT NULL, REVERSETIME UNSIGNED_LONG NOT 
NULL, TARGET VARCHAR,"cf".ID VARCHAR, CONSTRAINT pk PRIMARY KEY(TYPE,SOURCE, 
LABEL, DIRECTION, REVERSETIME, TARGET)) COMPRESSION = 'SNAPPY';

upsert into SAMPLE(TYPE, SOURCE, LABEL, DIRECTION, REVERSETIME, TARGET, ID) 
values('test', 'src', 'label', 'direction', 134424245, 'target', 'id');
Thanks


On 25 August 2016 at 20:50, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
Can you please provide the sample rowkey? It is blank in your email below. 
Alternatively, an UPSERT VALUES statement I can use to insert a row that I can 
work with myself.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Anil <anilk...@gmail.com<mailto:anilk...@gmail.com>>
Reply-To: "user@phoenix.apache.org<mailto:user@phoenix.apache.org>" 
<user@phoenix.apache.org<mailto:user@phoenix.apache.org>>
Date: Thursday, August 25, 2016 at 10:08 AM
To: "user@phoenix.apache.org<mailto:user@phoenix.apache.org>" 
<user@phoenix.apache.org<mailto:user@phoenix.apache.org>>
Subject: Re: Extracting column values from Phoenix composite primary key

HI Michael,

Table create statement :

CREATE TABLE SAMPLE(TYPE VARCHAR NOT NULL, SOURCE VARCHAR NOT NULL, LABEL 
VARCHAR NOT NULL, DIRECTION VARCHAR(10) NOT NULL, REVERSETIME UNSIGNED_LONG NOT 
NULL, TARGET VARCHAR,"CF".ID VARCHAR, CONSTRAINT PK PRIMARY KEY(TYPE, SOURCE, 
LABEL, DIRECTION, REVERSETIME, TARGET)) COMPRESSION = 'SNAPPY'

No salt buckets defined.

Sample table row key -

byte[] startRow = ByteUtil.concat(PVarchar.INSTANCE.toBytes("test"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("src"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("label"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("direction"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PUnsignedLong.INSTANCE.toBytes(1235464603853L),
        PVarchar.INSTANCE.toBytes("target"));

I am trying to extract the TARGET column. Thanks.

Regards,
Anil

On 25 August 2016 at 19:29, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
Can you provide an example of one of the rowkeys, the values you are expecting 
out of it, and the full table definition? Of importance in the table definition 
will be whether you have salt buckets defined.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

Re: Extracting column values from Phoenix composite primary key

2016-08-25 Thread Michael McAllister
Can you please provide the sample rowkey? It is blank in your email below. 
Alternatively, an UPSERT VALUES statement I can use to insert a row that I can 
work with myself.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Anil <anilk...@gmail.com>
Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Date: Thursday, August 25, 2016 at 10:08 AM
To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Subject: Re: Extracting column values from Phoenix composite primary key

HI Michael,

Table create statement :

CREATE TABLE SAMPLE(TYPE VARCHAR NOT NULL, SOURCE VARCHAR NOT NULL, LABEL 
VARCHAR NOT NULL, DIRECTION VARCHAR(10) NOT NULL, REVERSETIME UNSIGNED_LONG NOT 
NULL, TARGET VARCHAR,"CF".ID VARCHAR, CONSTRAINT PK PRIMARY KEY(TYPE, SOURCE, 
LABEL, DIRECTION, REVERSETIME, TARGET)) COMPRESSION = 'SNAPPY'

No salt buckets defined.

Sample table row key -

byte[] startRow = ByteUtil.concat(PVarchar.INSTANCE.toBytes("test"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("src"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("label"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("direction"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PUnsignedLong.INSTANCE.toBytes(1235464603853L),
        PVarchar.INSTANCE.toBytes("target"));

I am trying to extract the TARGET column. Thanks.

Regards,
Anil

On 25 August 2016 at 19:29, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
Can you provide an example of one of the rowkeys, the values you are expecting 
out of it, and the full table definition? Of importance in the table definition 
will be whether you have salt buckets defined.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Anil <anilk...@gmail.com<mailto:anilk...@gmail.com>>
Reply-To: "user@phoenix.apache.org<mailto:user@phoenix.apache.org>" 
<user@phoenix.apache.org<mailto:user@phoenix.apache.org>>
Date: Thursday, August 25, 2016 at 1:09 AM
To: "user@phoenix.apache.org<mailto:user@phoenix.apache.org>" 
<user@phoenix.apache.org<mailto:user@phoenix.apache.org>>
Subject: Re: Extracting column values from Phoenix composite primary key

HI Michael,

Thank you for the response.

Unfortunately, that is not working.

Following is the snippet. One of the columns is an unsigned long, and I am 
trying to extract the last column (with the value 'target' in the code below). 
Please correct me if I am doing something wrong.

byte SEPARATOR_BYTE = 0;
byte[] SEPARATOR_BYTE_ARRAY = { 0 };
byte[] startRow = ByteUtil.concat(
        PVarchar.INSTANCE.toBytes("test"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("src"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("label"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("direction"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PUnsignedLong.INSTANCE.toBytes(1235464603853L),
        PVarchar.INSTANCE.toBytes("target"));

List<String> cols = new ArrayList<String>();
int j = 0;
for (int i = 0; i < startRow.length; i++) {
    if (startRow[i] == SEPARATOR_BYTE) {
        cols.add(Bytes.toString(startRow, j, i - j));
        j = i + 1;
    }
}

cols.add(Bytes.toString(startRow, j, startRow.length - j));

System.out.println(cols.size());

String rowKey = Bytes.toString(startRow);
String[] columns = rowKey.split("\u0000");

Re: Extracting column values from Phoenix composite primary key

2016-08-25 Thread Michael McAllister
Can you provide an example of one of the rowkeys, the values you are expecting 
out of it, and the full table definition? Of importance in the table definition 
will be whether you have salt buckets defined.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Anil <anilk...@gmail.com>
Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Date: Thursday, August 25, 2016 at 1:09 AM
To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Subject: Re: Extracting column values from Phoenix composite primary key

HI Michael,

Thank you for the response.

Unfortunately, that is not working.

Following is the snippet. One of the columns is an unsigned long, and I am 
trying to extract the last column (with the value 'target' in the code below). 
Please correct me if I am doing something wrong.

byte SEPARATOR_BYTE = 0;
byte[] SEPARATOR_BYTE_ARRAY = { 0 };
byte[] startRow = ByteUtil.concat(
        PVarchar.INSTANCE.toBytes("test"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("src"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("label"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PVarchar.INSTANCE.toBytes("direction"),
        QueryConstants.SEPARATOR_BYTE_ARRAY,
        PUnsignedLong.INSTANCE.toBytes(1235464603853L),
        PVarchar.INSTANCE.toBytes("target"));

List<String> cols = new ArrayList<String>();
int j = 0;
for (int i = 0; i < startRow.length; i++) {
    if (startRow[i] == SEPARATOR_BYTE) {
        cols.add(Bytes.toString(startRow, j, i - j));
        j = i + 1;
    }
}

cols.add(Bytes.toString(startRow, j, startRow.length - j));

System.out.println(cols.size());

String rowKey = Bytes.toString(startRow);
String[] columns = rowKey.split("\u0000");

System.out.println(columns.length);

The last entry in the array has a special character along with the actual value.

Can you point out the bug in the above code, if any? Thanks.

Regards,
Anil


On 25 August 2016 at 02:32, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
Anil

If you split the rowkey on the zero byte character, you should end up with the 
individual columns that made up your primary key. Details are here:-

https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Anil <anilk...@gmail.com<mailto:anilk...@gmail.com>>
Reply-To: "user@phoenix.apache.org<mailto:user@phoenix.apache.org>" 
<user@phoenix.apache.org<mailto:user@phoenix.apache.org>>
Date: Wednesday, August 24, 2016 at 4:10 AM
To: "user@phoenix.apache.org<mailto:user@phoenix.apache.org>" 
<user@phoenix.apache.org<mailto:user@phoenix.apache.org>>
Subject: Re: Extracting column values from Phoenix composite primary key

Any inputs ? Thanks.

On 24 August 2016 at 11:13, Anil 
<anilk...@gmail.com<mailto:anilk...@gmail.com>> wrote:
Hi,

I have created a primary key with multiple columns in Phoenix.
Is there any way to extract the column values from the HBase row key? Please help.

Thanks,
Anil




Re: Extracting column values from Phoenix composite primary key

2016-08-24 Thread Michael McAllister
Anil

If you split the rowkey on the zero byte character, you should end up with the 
individual columns that made up your primary key. Details are here:-

https://phoenix.apache.org/faq.html#How_I_map_Phoenix_table_to_an_existing_HBase_table
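
A hedged sketch of walking that layout by hand, assuming the DDL shown earlier 
in this archive (four null-separated VARCHARs, then a fixed-width 8-byte 
UNSIGNED_LONG with no separator, then a trailing VARCHAR). The fixed-width field 
is why a plain split on zero bytes can misfire: its eight bytes may themselves 
contain 0x00.

import java.nio.charset.StandardCharsets;
import org.apache.phoenix.query.QueryConstants;
import org.apache.phoenix.schema.SortOrder;
import org.apache.phoenix.schema.types.PUnsignedLong;
import org.apache.phoenix.schema.types.PVarchar;
import org.apache.phoenix.util.ByteUtil;

public class CompositeKeyWalk {
    public static void main(String[] args) {
        // Build a key the same way the thread does.
        byte[] key = ByteUtil.concat(
                PVarchar.INSTANCE.toBytes("test"), QueryConstants.SEPARATOR_BYTE_ARRAY,
                PVarchar.INSTANCE.toBytes("src"), QueryConstants.SEPARATOR_BYTE_ARRAY,
                PVarchar.INSTANCE.toBytes("label"), QueryConstants.SEPARATOR_BYTE_ARRAY,
                PVarchar.INSTANCE.toBytes("direction"), QueryConstants.SEPARATOR_BYTE_ARRAY,
                PUnsignedLong.INSTANCE.toBytes(1235464603853L),
                PVarchar.INSTANCE.toBytes("target"));

        // The four leading VARCHARs each end at a 0x00 separator.
        int pos = 0;
        String[] prefix = new String[4];
        for (int f = 0; f < 4; f++) {
            int start = pos;
            while (key[pos] != 0) pos++;
            prefix[f] = new String(key, start, pos - start, StandardCharsets.UTF_8);
            pos++; // step over the separator
        }

        // UNSIGNED_LONG is fixed width: exactly 8 bytes, no separator.
        long reverseTime = PUnsignedLong.INSTANCE.getCodec()
                .decodeLong(key, pos, SortOrder.getDefault());
        pos += 8;

        // The trailing VARCHAR runs to the end of the key.
        String target = new String(key, pos, key.length - pos, StandardCharsets.UTF_8);

        System.out.println(String.join(" ", prefix) + " " + reverseTime + " " + target);
    }
}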

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Anil <anilk...@gmail.com>
Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Date: Wednesday, August 24, 2016 at 4:10 AM
To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Subject: Re: Extracting column values from Phoenix composite primary key

Any inputs ? Thanks.

On 24 August 2016 at 11:13, Anil 
<anilk...@gmail.com<mailto:anilk...@gmail.com>> wrote:
Hi,

I have created a primary key with multiple columns in Phoenix.
Is there any way to extract the column values from the HBase row key? Please help.

Thanks,
Anil



Re: Tables can have schema name but indexes cannot

2016-08-12 Thread Michael McAllister
James

Thanks – looks like I was misled by DBVisualizer. The underlying HBase index 
tables automatically have the parent table’s schema name prepended, which is 
perfect. For some reason the indexes don’t show up in the correct schema in the 
DBVisualizer object browser; they show up in a schema named (null).

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: James Taylor <jamestay...@apache.org>
Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Date: Friday, August 12, 2016 at 10:56 AM
To: user <user@phoenix.apache.org>
Subject: Re: Tables can have schema name but indexes cannot

Hi Michael,
SQL dictates that an index must be in the same schema as the table it's 
indexing.
Thanks,
James

On Fri, Aug 12, 2016 at 8:50 AM, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
Hi

Is there any reason we can specify the schema name for a table, but not an 
index? I note that the grammar online makes it clear this isn’t part of the 
syntax, but it would be nice if we could do it. To illustrate what I’d like:-

-- Create the table

CREATE TABLE IF NOT EXISTS MMCALLISTER.TEST
( c1 INTEGER NOT NULL
, c2 VARCHAR NULL
, c3 VARCHAR NULL
, CONSTRAINT TEST_PK PRIMARY KEY (c1)
);

-- This does not work

CREATE INDEX IF NOT EXISTS MMCALLISTER.TEST_IX01
ON MMCALLISTER.TEST
( c2 )
INCLUDE ( c3 );

-- This works

CREATE INDEX IF NOT EXISTS TEST_IX01
ON MMCALLISTER.TEST
( c2 )
INCLUDE ( c3 );

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 
512.423.7447 | skype: 
michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex



Re: Phoenix Ifnull

2016-08-12 Thread Michael McAllister
Seeing as we’re talking COALESCE and NULLs, depending on the version Ankit is 
running, this could also be the issue in PHOENIX-2994:-

https://issues.apache.org/jira/browse/PHOENIX-2994

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Lukáš Lalinský <lalin...@gmail.com>
Reply-To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Date: Friday, August 12, 2016 at 4:57 AM
To: "user@phoenix.apache.org" <user@phoenix.apache.org>
Subject: Re: Phoenix Ifnull

I think this is a problem with the WHERE clause. NULL values are neither equal 
nor not-equal to any other values. You might need to add "OR API_KEY IS NULL" 
to the WHERE clause.
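
A hedged sketch of both forms, using the "CheckDump" table and APP_KEY column 
from this thread:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class NullCheckExample {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            // NULL compares as neither equal nor unequal, so a row with a
            // NULL APP_KEY matches neither APP_KEY = 'x' nor APP_KEY != 'x'.
            // Test for it explicitly:
            stmt.executeQuery(
                "SELECT * FROM \"CheckDump\" WHERE APP_KEY != 'x' OR APP_KEY IS NULL");
            // Or normalize NULLs away with COALESCE:
            stmt.executeQuery(
                "SELECT COALESCE(APP_KEY, 'nullvalue') FROM \"CheckDump\"");
        }
    }
}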

Lukas


On Fri, Aug 12, 2016 at 9:51 AM, ankit beohar 
<ankitbeoha...@gmail.com<mailto:ankitbeoha...@gmail.com>> wrote:
Hi Lukáš/James,

I have one table in which only one rowkey is available. For my null-check case 
I am firing the queries below, and the null check is not working:-

[Inline image 1]

Please see and let me know if I am doing right thing or missed something.

Best Regards,
ANKIT BEOHAR


On Thu, Aug 11, 2016 at 9:18 PM, James Taylor 
<jamestay...@apache.org<mailto:jamestay...@apache.org>> wrote:
Lukáš is correct, but if the CASE WHEN variant of the same didn't work, please 
file a JIRA (ideally with a unit test that repros the problem).

Thanks,
James

On Thu, Aug 11, 2016 at 12:20 AM, Lukáš Lalinský 
<lalin...@gmail.com<mailto:lalin...@gmail.com>> wrote:
On Thu, Aug 11, 2016 at 9:00 AM, ankit beohar 
<ankitbeoha...@gmail.com<mailto:ankitbeoha...@gmail.com>> wrote:
I want to implement IFNULL in Apache Phoenix, like MySQL’s.

select ifnull(rowkey,myvalue),ifnull(col1,mycolvalue) from table where 
rowkey='abc';

I tried the query below but it did not work:-

select case when APP_KEY is null then 'nullvalue' else APP_KEY end from 
"CheckDump";

The standard SQL function for this is coalesce(col, 'nullvalue').

https://phoenix.apache.org/language/functions.html#coalesce

Lukas





RE: Coalesce function returns nulls?

2016-06-13 Thread Michael McAllister
Perhaps you could create a test case with table creation, upsert and then the 
select that reproduces and illustrates the problem?
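
For example, a hedged skeleton of such a repro, reusing the table and column 
names from the query quoted below (types are guesses):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class CoalesceRepro {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost");
             Statement stmt = conn.createStatement()) {
            stmt.execute("CREATE TABLE IF NOT EXISTS TABLE1 "
                    + "(ID BIGINT PRIMARY KEY, V_RS_DBSNP142 VARCHAR)");
            stmt.execute("UPSERT INTO TABLE1 VALUES (1, NULL)");      // null value
            stmt.execute("UPSERT INTO TABLE1 VALUES (2, 'rs12345')"); // non-null value
            conn.commit(); // Phoenix connections don't auto-commit by default
            ResultSet rs = stmt.executeQuery(
                    "SELECT LENGTH(COALESCE(V_RS_DBSNP142, 'N/A')) FROM TABLE1");
            while (rs.next()) {
                System.out.println(rs.getString(1)); // expect 3 and 7, not null
            }
        }
    }
}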

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Marek Wiewiorka [mailto:marek.wiewio...@gmail.com]
Sent: Monday, June 13, 2016 12:58 PM
To: user@phoenix.apache.org
Subject: Coalesce function returns nulls?

Hi All - I came across a very strange issue when using the COALESCE function.
It appears to me that it returns null values - please consider the following 
query:
select length(coalesce(V_RS_DBSNP142,'N/A')) from TABLE1 limit 10;
+------------------------------------------+
| LENGTH(COALESCE("V_RS_DBSNP142", 'N/A')) |
+------------------------------------------+
| null                                     |
| null                                     |
| null                                     |
| null                                     |
| null                                     |
| null                                     |
| null                                     |
| null                                     |
| null                                     |
| null                                     |
+------------------------------------------+
10 rows selected (0.056 seconds)
Am I doing something wrong, or is there a bug in that function?
I'm using Phoenix 4.6.

Many thanks in advance.
Marek



Adding table compression

2016-03-19 Thread Michael McAllister
All

Are there any known issues if we use the HBase shell to alter a Phoenix table 
to apply compression? We're currently using Phoenix 4.4 on HDP 2.3.4.

I plan on testing, but also want to double check for any gotchas.
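
For reference, a hedged sketch of the same change through the HBase 1.x Java 
admin API (conservatively taking the table offline; in the shell this would be 
an alter setting COMPRESSION => 'SNAPPY' on the column family):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.io.compress.Compression;

public class ApplyCompression {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Admin admin = conn.getAdmin()) {
            TableName tn = TableName.valueOf("MY.TABLE"); // example table name
            HTableDescriptor htd = admin.getTableDescriptor(tn);
            for (HColumnDescriptor cf : htd.getColumnFamilies()) {
                cf.setCompressionType(Compression.Algorithm.SNAPPY);
            }
            admin.disableTable(tn);
            admin.modifyTable(tn, htd);
            admin.enableTable(tn);
        }
    }
}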

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447


RE: Re: HBase Phoenix Integration

2016-03-01 Thread Michael McAllister
Technically, it depends on which version of HDP you are on. Here are the 
versions:-

HDP 2.1 = Apache Phoenix 4.0
HDP 2.2 = Apache Phoenix 4.2
HDP 2.3 = Apache Phoenix 4.4
HDP 2.4 = Apache Phoenix 4.4

(From this page -> http://hortonworks.com/hdp/whats-new/)

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Bulvik, Noam [mailto:noam.bul...@teoco.com]
Sent: Tuesday, March 01, 2016 5:10 AM
To: user@phoenix.apache.org
Subject: RE: Re: HBase Phoenix Integration

4.4

From: Dor Ben Dov [mailto:dor.ben-...@amdocs.com]
Sent: Tuesday, March 1, 2016 12:34 PM
To: user@phoenix.apache.org<mailto:user@phoenix.apache.org>
Subject: RE: Re: HBase Phoenix Integration

Any one knows here which version of Phoenix being used in HortonWorks bundle ?
Dor

From: Amit Shah [mailto:amits...@gmail.com]
Sent: Tuesday, March 01, 2016 12:33
To: user@phoenix.apache.org<mailto:user@phoenix.apache.org>
Subject: Re: Re: HBase Phoenix Integration

Hi Sun,

In my deployment I have only one zookeeper node. I do not have a cluster. What 
was your command line string to connect? Did you specify the port number?
I specify this ./sqlline.py :2181

Regards,
Amit.

On Tue, Mar 1, 2016 at 3:38 PM, Fulin Sun 
<su...@certusnet.com.cn<mailto:su...@certusnet.com.cn>> wrote:
Hi, Amit
I had successfully built the git repo according to your file change. But when I 
tried to use sqlline to connect to Phoenix,
I ran into the following error:

while dev-1,dev-2,dev-3 are my zookeeper hosts. I can still see the 
regionservers being killed abnormally.

Have you met this issue in your scenario? If so, please suggest 
how to resolve it.

Thanks.

 Connecting to jdbc:phoenix:dev-1,dev-2,dev-3
16/03/01 18:04:17 WARN util.NativeCodeLoader: Unable to load native-hadoop 
library for your platform... using builtin-java classes where applicable
Error: Failed after attempts=36, exceptions:
Tue Mar 01 18:05:30 CST 2016, null, java.net.SocketTimeoutException: 
callTimeout=6, callDuration=69350: row 'SYSTEM.SEQUENCE,,00' on 
table 'hbase:meta' at region=hbase:meta,,1.1588230740, 
hostname=dev-2,60020,1456826584858, seqNum=0 (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, 
exceptions:
Tue Mar 01 18:05:30 CST 2016, null, java.net.SocketTimeoutException: 
callTimeout=6, callDuration=69350: row 'SYSTEM.SEQUENCE,,00' on 
table 'hbase:meta' at region=hbase:meta,,1.1588230740, 
hostname=dev-2,60020,1456826584858, seqNum=0



From: Amit Shah<mailto:amits...@gmail.com>
Date: 2016-03-01 18:00
To: user<mailto:user@phoenix.apache.org>
Subject: Re: Re: HBase Phoenix Integration
Sure Sun.
PFA.

Regards,
Amit.

On Tue, Mar 1, 2016 at 2:58 PM, Fulin Sun 
<su...@certusnet.com.cn<mailto:su...@certusnet.com.cn>> wrote:
Hi Amit,
Glad you found a temporary fix for that. Can you share the relevant Java file 
you modified?
Thanks a lot.

Best,
Sun.




From: Amit Shah<mailto:amits...@gmail.com>
Date: 2016-03-01 17:22
To: user<mailto:user@phoenix.apache.org>
Subject: Re: RE: HBase Phoenix Integration
Hi All,

I got some success in deploying phoenix 4.6-HBase-1.0 on CDH 5.5.2. I resolved 
the compilation errors by commenting out the usage of 
BinaryCompatibleIndexKeyValueDecoder and 
BinaryCompatibleCompressedIndexKeyValueDecoder classes since they only get used 
in secondary indexing. This is a temporary fix but yes it works !

Hope that helps.
Waiting to see the phoenix-cloudera fix for the latest phoenix version 4.7 
especially since 4.7 has some new features.

Thanks,
Amit.

On Tue, Mar 1, 2016 at 2:13 PM, Fulin Sun 
<su...@certusnet.com.cn<mailto:su...@certusnet.com.cn>> wrote:
No idea. I had only searched for this relatively recent post about supporting 
Phoenix with CDH 5.5.x.
However, I cannot connect to Phoenix following the post's guide. Compiling the 
git repo has also
given me no luck.





From: Dor Ben Dov<mailto:dor.ben-...@amdocs.com>
Date: 2016-03-01 16:39
To: user@phoenix.apache.org<mailto:user@phoenix.a

RE: Natural Sort order of columns

2016-01-27 Thread Michael McAllister
Use ORDER BY in your select clause:-

https://phoenix.apache.org/language/#select

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex

From: Ns G [mailto:nsgns...@gmail.com]
Sent: Wednesday, January 27, 2016 6:50 AM
To: user@phoenix.apache.org
Subject: REG: Natural Sort order of columns

Hi There,
Can I get any phoenix column in Natural sort order?
For eg:
If I have column with values as
test11
test500
42
test9
test3
1
51
then the output should be like

1
42
51
test3
test9
test11
test500


any suggestions please?
Thanks,

PS: I have used the AlphanumComparator class referenced at [ 
http://www.davekoelle.com/alphanum.html ], which solves the requirement of 
sorting a Java collection using the Collections.sort method.


Phoenix and HBase HA

2015-10-15 Thread Michael McAllister
Hi

Am I correct in my reading that if we enable HA on an HBase table Phoenix is 
using, and we are on HBase 0.98, Phoenix will no longer be able to read 
from it? This question is partially based on the following:-

https://issues.apache.org/jira/plugins/servlet/mobile#issue/PHOENIX-1683

... however I was thinking that HA was enabled on HBase tables simply by 
setting region replication to 2 or 3?

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex



Re: Number of regions in SYSTEM.SEQUENCE

2015-09-22 Thread Michael McAllister
Mujtaba

Thanks for this information. Seeing as I am using Phoenix 4.2, what is the safe 
and approved sequence of steps to drop this table and recreate it as you 
mention? Additionally, how do we ensure we don’t lose sequence data?

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex


On Sep 22, 2015, at 1:32 PM, Mujtaba Chohan 
<mujt...@apache.org<mailto:mujt...@apache.org>> wrote:

Since Phoenix 4.5.x the default for phoenix.sequence.saltBuckets has been 
changed so that the sequence table is not split. See this commit:
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blobdiff;f=phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java;h=79776e7f688fc700275d0502e31646afe2bbcb1e;hp=4e8879b1b7a6358db2c1f9ccb4fa169394fec721;hb=18e52cc4ce2384bdc7a9c72d63901058e40f04ae;hpb=b82c5cbccdf4eb944238e69a514841be361bfb6d
For older versions you can drop the sequence table and reconnect with the 
phoenix.sequence.saltBuckets property set client-side.
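
A hedged sketch of setting that client-side property on the JDBC connection 
(the ZooKeeper quorum string is a placeholder):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SequenceSaltBuckets {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // 0 salt buckets => SYSTEM.SEQUENCE is created unsplit.
        props.setProperty("phoenix.sequence.saltBuckets", "0");
        try (Connection conn =
                DriverManager.getConnection("jdbc:phoenix:zk-host", props)) {
            // Takes effect when the sequence table is (re)created on first use.
        }
    }
}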

On Tue, Sep 22, 2015 at 11:14 AM, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
Hi

By default SYSTEM.SEQUENCE is installed with 256 regions. In an environment 
where you don’t have a large number of tables and regions (yet), the end result 
of this seems to be that with hbase balance_switch=true, you end up with a lot 
of region servers with nothing but empty SYSTEM.SEQUENCE regions on them. That 
means inefficient use of our cluster.

Have there been any best practices developed as to how to deal with this 
situation?

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 
512.423.7447 | skype: 
michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex







Number of regions in SYSTEM.SEQUENCE

2015-09-22 Thread Michael McAllister
Hi

By default SYSTEM.SEQUENCE is installed with 256 regions. In an environment 
where you don’t have a large number of tables and regions (yet), the end result 
of this seems to be that with hbase balance_switch=true, you end up with a lot 
of region servers with nothing but empty SYSTEM.SEQUENCE regions on them. That 
means inefficient use of our cluster.

Have there been any best practices developed as to how to deal with this 
situation?

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex




Re: Number of regions in SYSTEM.SEQUENCE

2015-09-22 Thread Michael McAllister
OK - so the traditional methods of recreating sequences, that makes sense.

Interestingly btw, at least from within Phoenix I can’t see the content of 
SYSTEM.SEQUENCE. I get the following error:-

0: jdbc:phoenix:redacted,> select count(*) from system.sequence;
Error: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting "NAME", got 
"sequence" at line 1, column 29. (state=42P00,code=604)

I do understand this is a system table, but it would be nice to see inside it. 
This is from Apache Phoenix 4.2 on HDP 2.2.6.

Michael McAllister
Staff Data Warehouse Engineer | Decision Systems
mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com> | C: 512.423.7447 | 
skype: michael.mcallister.ha<mailto:zimmk...@hotmail.com> | webex: 
https://h.a/mikewebex


On Sep 22, 2015, at 2:47 PM, James Heather 
<james.heat...@mendeley.com<mailto:james.heat...@mendeley.com>> wrote:

If no one else will be hitting the table while you complete the operation, and 
if you don't mind about missing a few sequence values (i.e., having a gap), you 
should just need the following.

SELECT NEXT VALUE FOR sequencename FROM sometable;

That will tell you the next value the sequence wants to hand out.

DROP SEQUENCE sequencename;

Then reconnect with the property as given below, and

CREATE SEQUENCE sequencename START WITH n;

where n is the value you retrieved in the first step.

The reason this might cause gaps is that client connections will cache sequence 
values, so the one you retrieve might not actually be the first one that hasn't 
been used; it'll just be the first one cached by the connection you're using. 
But if you do it this way, and nothing else is connected in the meantime, then 
you won't get any duplicates.

As far as I can see, if you're the only connected client, this should do it 
with no gaps: no other clients will have cached any sequence values, so you'll 
retrieve the first one your connection has cached (which will be the first one 
available), and then that's where your sequence will start when you recreate 
the sequence. But I'm not absolutely certain about that, and you might want to 
try some experiments.

If the sequence is being used for a primary key column (a sort of 
auto_increment), then the other option is to

SELECT MAX(id) FROM sometable;

and then add one to this value to determine where the recreated sequence should 
start. That will ensure no gaps.
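
Putting those steps together, a minimal sketch (MY_SEQ and MY_TABLE are hypothetical names, not from this thread):

SELECT NEXT VALUE FOR my_seq FROM my_table LIMIT 1; -- suppose this returns 1042
DROP SEQUENCE my_seq;
-- reconnect with phoenix.sequence.saltBuckets set client-side, then:
CREATE SEQUENCE my_seq START WITH 1042;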

James


On 22/09/15 19:47, Michael McAllister wrote:
Mujtaba

Thanks for this information. Seeing as I am using Phoenix 4.2, what is the safe 
and approved sequence of steps to drop this table and recreate it as you 
mention? Additionally, how do we ensure we don’t lose sequence data?


On Sep 22, 2015, at 1:32 PM, Mujtaba Chohan 
<mujt...@apache.org<mailto:mujt...@apache.org>> wrote:

Since Phoenix 4.5.x the default for phoenix.sequence.saltBuckets has been changed so that the sequence table is not split. See this commit:
https://git-wip-us.apache.org/repos/asf?p=phoenix.git;a=blobdiff;f=phoenix-core/src/main/java/org/apache/phoenix/query/QueryServicesOptions.java;h=79776e7f688fc700275d0502e31646afe2bbcb1e;hp=4e8879b1b7a6358db2c1f9ccb4fa169394fec721;hb=18e52cc4ce2384bdc7a9c72d63901058e40f04ae;hpb=b82c5cbccdf4eb944238e69a514841be361bfb6d
For older versions you can drop the sequence table and reconnect with the client-side phoenix.sequence.saltBuckets property set.


Re: Number of regions in SYSTEM.SEQUENCE

2015-09-22 Thread Michael McAllister
Thanks, that one worked :-)

On Sep 22, 2015, at 3:28 PM, James Heather 
<james.heat...@mendeley.com<mailto:james.heat...@mendeley.com>> wrote:

Try the one you missed:

SYSTEM."SEQUENCE"

i.e., quote the bits separately (but SYSTEM doesn't need quoting), and put it 
in caps.
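
That is (the form confirmed to work later in this thread):

select count(*) from SYSTEM."SEQUENCE";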

James

On 22/09/15 21:18, Michael McAllister wrote:
More failed attempts ...

0: jdbc:phoenix:redacted,> select count(*) from system."sequence";
Error: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.sequence 
(state=42M03,code=1012)
0: jdbc:phoenix:redacted,> select count(*) from "system.sequence";
Error: ERROR 1012 (42M03): Table undefined. tableName=system.sequence 
(state=42M03,code=1012)
0: jdbc:phoenix:redacted,> select count(*) from "SYSTEM.SEQUENCE";
Error: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.SEQUENCE 
(state=42M03,code=1012)


On Sep 22, 2015, at 3:14 PM, James Heather 
<james.heat...@mendeley.com<mailto:james.heat...@mendeley.com>> wrote:

I don't think it's trying to stop you looking inside the table. I think it's 
complaining that SEQUENCE is a keyword, and shouldn't be appearing there.

You could try quoting it.

James

On 22/09/15 21:11, Michael McAllister wrote:
OK - so the traditional methods of recreating sequences, that makes sense.

Interestingly btw, at least from within Phoenix I can’t see the content of 
SYSTEM.SEQUENCE. I get the following error:-

0: jdbc:phoenix:redacted,> select count(*) from system.sequence;
Error: ERROR 604 (42P00): Syntax error. Mismatched input. Expecting "NAME", got 
"sequence" at line 1, column 29. (state=42P00,code=604)

I do understand this is a system table, but it would be nice to see inside it. 
This is from Apache Phoenix 4.2 on HDP 2.2.6.


Re: sqlline error while creating table

2015-09-01 Thread Michael McAllister
I think you need a comma between your column definition and your constraint 
definition.
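
With the comma added, the DDL from the quoted message below would read (a sketch; note that SNAPPY is normally a COMPRESSION setting rather than a DATA_BLOCK_ENCODING value, which may also be worth checking):

CREATE TABLE IF NOT EXISTS cross_id_attributes
(
   crossId VARCHAR NOT NULL,
   CONSTRAINT cross_id_reference_pk PRIMARY KEY (crossId)
) SALT_BUCKETS=30, DATA_BLOCK_ENCODING='SNAPPY', VERSIONS=1, TTL=691200;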


On Sep 1, 2015, at 2:54 PM, Serega Sheypak wrote:

Hi, I wrote an integration test that uses the HBase testing utility and Phoenix. The test creates a table and inserts data. It works fine.
I'm trying to run

CREATE TABLE IF NOT EXISTS cross_id_attributes
(
   crossId VARCHAR NOT NULL
   CONSTRAINT cross_id_reference_pk PRIMARY KEY (crossId)
) SALT_BUCKETS=30, DATA_BLOCK_ENCODING='SNAPPY', VERSIONS=1, TTL=691200;

against the production cluster using sqlline, and it hangs. When I try to abort it, I see the following exception in the console:


Exception in thread "SIGINT handler" java.lang.RuntimeException: java.sql.SQLFeatureNotSupportedException
at sqlline.SunSignalHandler.handle(SunSignalHandler.java:43)
at sun.misc.Signal$1.run(Signal.java:212)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLFeatureNotSupportedException


What could it be? What am I doing wrong?



Baseline test a query with no cache

2015-08-27 Thread Michael McAllister
Hi

I am trying to test the performance of a query, and I want to make sure I am 
not getting cached results from a previous call. Is there anything to be done 
to clear any notional cache?

Michael McAllister



RE: Error thrown when trying to drop table

2015-08-26 Thread Michael McAllister
OK, and then delete the table within HBase as well I assume?


From: James Taylor [mailto:jamestay...@apache.org]
Sent: Tuesday, August 25, 2015 6:18 PM
To: user
Subject: Re: Error thrown when trying to drop table

I think you'll need to resort to manually deleting from the SYSTEM.CATALOG 
table. Something like the following:

DELETE FROM SYSTEM.CATALOG WHERE TENANT_ID IS NULL
AND TABLE_SCHEM = 'MYSCHEMA'
AND TABLE_NAME = 'MYTABLE';

You'll need to bounce your cluster to cause the server-side cache to be cleared 
too.
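
A quick way to confirm the metadata rows are gone before bouncing (a sketch using the same predicate; not from the original reply):

SELECT TABLE_SCHEM, TABLE_NAME FROM SYSTEM.CATALOG
WHERE TABLE_SCHEM = 'MYSCHEMA' AND TABLE_NAME = 'MYTABLE';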

On Tue, Aug 25, 2015 at 10:29 AM, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
James

We’re on Phoenix 4.2 on HDP 2.2.6.

I’ve tried reproducing the problem and I can’t. My issue is specific to these 
two tables. When I try recreating the exact same table I can drop it. I’m 
willing to do some digging on these tables if it will help you, but in the end 
what I’d like to do is get them dropped.


Re: Error thrown when trying to drop table

2015-08-25 Thread Michael McAllister
James

We’re on Phoenix 4.2 on HDP 2.2.6.

I’ve tried reproducing the problem and I can’t. My issue is specific to these 
two tables. When I try recreating the exact same table I can drop it. I’m 
willing to do some digging on these tables if it will help you, but in the end 
what I’d like to do is get them dropped.

Michael McAllister

On Aug 24, 2015, at 10:48 PM, James Taylor 
<jamestay...@apache.org<mailto:jamestay...@apache.org>> wrote:

Hi Michael,
What version are you on over there? Can you setup a test that consistently 
reproduces the issue?
Thanks,
James


Error thrown when trying to drop table

2015-08-24 Thread Michael McAllister
Hi

I have an empty table that, when I try and drop, I get an error returned.

I issue a command similar to the following:-

drop table myschema.mytable;

I get the following error:-

 22:17:53  [DROP - 0 row(s), 0.276 secs]  [Error Code: 101, SQL State: 08000]  
org.apache.hadoop.hbase.DoNotRetryIOException: MYSCHEMA.MYTABLE: 29
at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:965)
at 
org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:7768)
at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:6896)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3420)
at 
org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3402)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:29998)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2078)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:108)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:114)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:94)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 29
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:320)
at org.apache.phoenix.schema.PTableImpl.init(PTableImpl.java:250)
at org.apache.phoenix.schema.PTableImpl.makePTable(PTableImpl.java:240)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.getTable(MetaDataEndpointImpl.java:638)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.buildTable(MetaDataEndpointImpl.java:376)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.doDropTable(MetaDataEndpointImpl.java:985)
at 
org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropTable(MetaDataEndpointImpl.java:939)
... 10 more
... 1 statement(s) executed, 0 row(s) affected, exec/fetch time: 0.276/0.000 
sec  [0 successful, 0 warnings, 1 errors]


Michael McAllister



Re: Date math

2015-08-04 Thread Michael McAllister
James

Thanks for writing back. Some feedback:-


Not sure if this is in 4.2, but Phoenix supports casting a numeric to a date.


So when I am referring to a numeric version of a date, I’m referring to a date that has been formatted as YYYYMMDD and then stored in an integer. I am not talking about a format where the date is a number of seconds since some base date. Just want to be clear. The reason is that when I tried the following …


SELECT CAST(CAST(col * 1000 AS BIGINT) AS DATE) ...


… I did not get expected results. Some examples:-

select 20150101 as DateNumId
,CAST(CAST(20150101 * 1000 AS BIGINT) AS DATE) as Test1
,CAST(CAST(20150101 AS BIGINT) AS DATE) as Test2
from system.catalog
limit 1;

DATENUMID  TEST1   TEST2
-  --  --
20150101   1970-08-22  1969-12-31


A lot of date functions were added in 4.4 that would help you if you need to extract the day/month/year. Perhaps you can backport them or upgrade?


Unfortunately I don’t think I have this option.

What I can do is cheat, though. Given my table will only have records with the last day of each month in it, this should return the rows I want:-

select *
from mmcallister.TestDateMath
where Product = 'MyProduct'
and DateNumId in (20160331, 20160331-10001, 20160331-10000, 20160331-9999);

(Full test case setup attached)
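
To spell out why those three offsets cover the leap-year cases (a worked illustration; the 20170228 input is hypothetical):

-- 20160331 - 10000 = 20150331 : normal case, same month-end last year
-- 20160229 - 10001 = 20150228 : 29-Feb maps back to 28-Feb
-- 20170228 -  9999 = 20160229 : 28-Feb maps back to 29-Feb of a leap year
-- Any result that is not a real month-end (e.g. 20160331 - 9999 = 20150332)
-- simply matches no rows, since the table only holds month-end dates.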

Regards

Mike


test_phoenix_date_math.sql
Description: test_phoenix_date_math.sql
On Aug 4, 2015, at 3:45 PM, James Taylor <jamestay...@apache.org> wrote:

Not sure if this is in 4.2, but Phoenix supports casting a numeric to a date. You'd need to do this in two steps, though, as we support INTEGER to BIGINT and then BIGINT to DATE, like this:

SELECT CAST(CAST(col * 1000 AS BIGINT) AS DATE) ...

A lot of date functions were added in 4.4 that would help you if you need to extract the day/month/year. Perhaps you can backport them or upgrade? One more built-in that might help you is the TRUNC function, which lets you "bucketize" based on HOUR, MINUTE, SECOND, DAY, etc.

Thanks,
James






Date math

2015-08-04 Thread Michael McAllister
Hi

I have a table with a date stored in it. The date is always the last day of a 
month. For example, 31-AUG-2015 or 28-FEB-2015. I have the date stored in two 
separate columns. In one place it is an integer (20150831 and 20150228). In 
another place it uses the DATE datatype.

I want to write a SQL statement where, given a date (ignore which datatype at the moment), I can return rows for that date, as well as the same date last year, all in one query. The kicker is that we have to interpret the date as a month end, so we have to factor leap years and February the 29th in.

If we didn't have to deal with the leap year the SQL could be as simple as:-

SELECT columns
FROM table
WHERE datenum in (datenum, datenum-10000);

However, this won't work for 29-FEB-2016, as 20160229 - 10000 = 20150229.

Additionally, we're on Phoenix 4.2, so we don't have access to UDFs.

So ... any ideas how to resolve this query? Is there some built in date math 
available to me that I can't find in the documentation online?

Regards

Mike



Re: REG: Update -Set in Phoenix

2015-07-15 Thread Michael McAllister
Yes, that’s true as far as I can tell; you’ll at least have to fetch the PK columns as well.

Michael McAllister

On Jul 14, 2015, at 8:53 PM, Ns G 
<nsgns...@gmail.com<mailto:nsgns...@gmail.com>> wrote:


Michael,

For using upsert, I need to fetch at least the primary ID, then do the processing and update the columns. That means reading the entire table to fetch the primary keys, processing at the service layer, and then doing an upsert, which I think is quite a task for a huge table.

Please let me know your thoughts,

Thanks,
Satya

On 14-Jul-2015 7:44 pm, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
Satya

Try the UPSERT SELECT statement:-

https://phoenix.apache.org/language/#upsert_select


On Jul 14, 2015, at 5:09 AM, Ns G 
<nsgns...@gmail.com<mailto:nsgns...@gmail.com>> wrote:

HI There,

I have a query like one below

UPDATE table_1
SET id1 = (SELECT MIN(id1)
           FROM table_1 t2
           WHERE table_1.id3 = t2.id3)
WHERE id3 = id4

I am trying to implement this in Apache Phoenix. Can anyone suggest how I should do this, given that we do not have an UPDATE ... SET command in Phoenix? I am using version 4.3.1.


Thanks,

Satya








Re: REG: Update -Set in Phoenix

2015-07-14 Thread Michael McAllister
Satya

Try the UPSERT SELECT statement:-

https://phoenix.apache.org/language/#upsert_select
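
A rough sketch of the shape this could take (hypothetical: ID is assumed to be the primary key of table_1, and whether the derived-table join is accepted on 4.3.1 would need testing):

UPSERT INTO table_1 (id, id1)
SELECT t.id, sub.min_id1
FROM table_1 t
JOIN (SELECT id3, MIN(id1) AS min_id1
      FROM table_1
      GROUP BY id3) sub
ON t.id3 = sub.id3
WHERE t.id3 = t.id4;

Note the primary key column(s) must be included so each row is rewritten in place.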

Michael McAllister

On Jul 14, 2015, at 5:09 AM, Ns G 
<nsgns...@gmail.com<mailto:nsgns...@gmail.com>> wrote:

HI There,

I have a query like one below

UPDATE table_1
SET id1 = (SELECT MIN(id1)
           FROM table_1 t2
           WHERE table_1.id3 = t2.id3)
WHERE id3 = id4

I am trying to implement this in Apache Phoenix. Can anyone suggest how I should do this, given that we do not have an UPDATE ... SET command in Phoenix? I am using version 4.3.1.


Thanks,

Satya







Delete statement reports row deleted when no rows deleted

2015-07-07 Thread Michael McAllister
All

We have just noticed that when you issue the same delete statement twice, the second delete reports rows deleted even though there are no longer any rows to delete.

I have created a test case and attached it with a log of the results I am 
seeing. Note specifically in the attached log:-

Line 33 - first delete
Line 39 - shows delete was successful as count(*) returns no rows for the key 
deleted.
Line 53 - second delete reports 1 row affected
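
In compressed form, the behaviour looks like this (a sketch mirroring the attached test case; table and key names are hypothetical):

DELETE FROM test_table WHERE pk = 'k1';          -- 1 row affected
SELECT COUNT(*) FROM test_table WHERE pk = 'k1'; -- returns 0; the row is gone
DELETE FROM test_table WHERE pk = 'k1';          -- reports 1 row affected again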

Michael McAllister



delete_test_case.sql
Description: delete_test_case.sql


delete_test_case.log
Description: delete_test_case.log


RE: Problem in finding the largest value of an indexed column

2015-06-26 Thread Michael McAllister
Yufan

Have you tried using the EXPLAIN command to see what plan is being used to 
access the data?
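
For example (a sketch against the table and index described below):

EXPLAIN SELECT MAX(timestamp) FROM t1;
EXPLAIN SELECT timestamp FROM t1 ORDER BY timestamp DESC LIMIT 1;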

Michael McAllister

From: Yufan Liu [mailto:yli...@kent.edu]
Sent: Friday, June 26, 2015 6:31 PM
To: user@phoenix.apache.org
Subject: Problem in finding the largest value of an indexed column

Hi,
We have created a table (eg, t1), and a global index of one numeric column of 
t1 (eg, timestamp). Now we want to find the largest value of timestamp, we have 
tried two approaches:

1. select max(timestamp) from t1; This query takes forever to finish, so I think it may be doing a full table scan/comparison.
2. select timestamp from t1 order by timestamp desc limit 1; This query finishes fast, but the result it returns is far from the largest value. It seems to just return the largest value for a certain range of the data.
Did anyone else encounter this issue/have any suggestion?

--
Thanks,
Yufan


RE: Problem in finding the largest value of an indexed column

2015-06-26 Thread Michael McAllister
OK, I’m a Phoenix newbie, so that was the extent of the advice I could give 
you. There are people here far more experienced than I am who should be able to 
give you deeper advice. Have a great weekend!

Mike

From: Yufan Liu [mailto:yli...@kent.edu]
Sent: Friday, June 26, 2015 7:19 PM
To: user@phoenix.apache.org
Subject: Re: Problem in finding the largest value of an indexed column

Hi Michael,
Thanks for the advice. For the first one, it's CLIENT 67-CHUNK PARALLEL 1-WAY FULL SCAN OVER TIMESTAMP_INDEX; SERVER FILTER BY FIRST KEY ONLY; SERVER AGGREGATE INTO SINGLE ROW, which is as expected. For the second one, it's CLIENT 67-CHUNK SERIAL 1-WAY REVERSE FULL SCAN OVER TIMESTAMP_INDEX; SERVER FILTER BY FIRST KEY ONLY; SERVER 1 ROW LIMIT, which looks correct, but it still returns the unexpected result.




--
best,
Yufan


Re: count distinct

2015-06-23 Thread Michael McAllister

Your second query should work with Phoenix 4.3 or later.


Thanks, unfortunately at the moment I’m stuck with Phoenix 4.2.


I will investigate the problem with the first one and get back to you.


Appreciate this.

Michael McAllister

On Jun 23, 2015, at 7:54 PM, Maryann Xue 
<maryann@gmail.com<mailto:maryann@gmail.com>> wrote:

Sorry, I missed the first line. Your second query should work with Phoenix 4.3 
or later.


I will investigate the problem with the first one and get back to you.


Thanks,
Maryann

On Tuesday, June 23, 2015, Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
Hi

(This questions relates to Phoenix 4.2 on HDP 2.2)

I have a situation where I want to count the distinct combination of a couple 
of columns.

When I try the following:-

select count(distinct a.col1, b.col2)
from tab1 a
inner join tab2 b on b.joincol = a.joincol
where a.col3 = 'some condition'
and b.col4 = 'some other condition';

I get the following error:-

Error: ERROR 605 (42P00): Syntax error. Unknown function: DISTINCT_COUNT. 
(state=42P00,code=605)

Playing around with this it looks like count(distinct) works with a single 
column, but not more.
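
One possible workaround given that single-column restriction (an untested sketch, not from this thread; it assumes both columns are, or can be cast to, strings, and a NULL in either column makes the whole concatenated expression NULL):

select count(distinct a.col1 || '|' || b.col2)
from tab1 a
inner join tab2 b on b.joincol = a.joincol
where a.col3 = 'some condition'
and b.col4 = 'some other condition';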

So I try this:-

SELECT count(*)
FROM
  (SELECT a.col1,
          b.col2
   FROM tab1 a
   INNER JOIN tab2 b ON b.joincol = a.joincol
   WHERE a.col3 = 'SOME condition'
     AND b.col4 = 'SOME other condition'
   GROUP BY a.col1,
            b.col2);

I get the following error:-

Error: Complex nested queries not supported. (state=,code=0)

So, my question … is there any way to get what I’m looking for?

Regards,

Mike





Re: TO_DATE function playing up?

2015-06-22 Thread Michael McAllister
Thanks for the help Gabriel, I really appreciate it. That did the trick!

Regards

Mike

On Jun 22, 2015, at 10:38 AM, Gabriel Reid 
gabriel.r...@gmail.commailto:gabriel.r...@gmail.com wrote:

Hi Michael,

Thanks for the very detailed explanation of your scenario.

I believe the issue is in your date format string (yyyy-MM-DD). According to the Joda Time docs[1], D is the format specifier for day of year (not day of month). If you use the format string yyyy-MM-dd, things should work fine (similar to what you saw in your last example above).
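
For example (a sketch with the corrected format string):

SELECT TO_DATE('2015-06-22','yyyy-MM-dd')
FROM system.catalog
LIMIT 1;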

- Gabriel


1. 
http://www.joda.org/joda-time/apidocs/org/joda/time/format/DateTimeFormat.html

On Mon, Jun 22, 2015 at 5:07 PM Michael McAllister 
<mmcallis...@homeaway.com<mailto:mmcallis...@homeaway.com>> wrote:
Hi everyone

New user, first time post. I have a question about the TO_DATE function seeming 
to misbehave. I issue the following SQL statement:-

SELECT TO_DATE('2015-06-22','yyyy-MM-DD')
FROM system.catalog
LIMIT 1;

I get the following response:-

2015-01-21

Similarly, when I issue the following SQL statement:-

select TO_DATE('2015-05-12','yyyy-MM-DD')
FROM system.catalog
limit 1;

I get the following response:-

2015-01-11

My issue is not with the day being returned - I understand what’s going on 
there. My issue is with the MONTH that’s being returned.

Interestingly, things are OK if I issue the following SQL statement:-

SELECT TO_DATE('22 Jun 2015','d MMM yyyy')
FROM system.catalog
limit 1;

I get the following response:-

2015-06-21

Screenshot is attached. Phoenix version is 4.2 on HDP 2.2.

Regards

Mike

1__homeaway_asthad011ssh_.png



