Re: Extracting column values from Phoenix composite primary key

2016-08-29 Thread Anil
Hi Michael and all,

Did you get a chance to look into this? Thanks.


On 26 August 2016 at 07:38, Anil  wrote:

> Hi Michael,
>
> Following are the table create and upsert queries -
>
> CREATE TABLE SAMPLE(TYPE VARCHAR NOT NULL, SOURCE VARCHAR NOT NULL, LABEL
> VARCHAR NOT NULL, DIRECTION VARCHAR(10) NOT NULL, REVERSETIME UNSIGNED_LONG
> NOT NULL, TARGET VARCHAR,"cf".ID VARCHAR, CONSTRAINT pk PRIMARY
> KEY(TYPE,SOURCE, LABEL, DIRECTION, REVERSETIME, TARGET)) COMPRESSION =
> 'SNAPPY';
>
> upsert into SAMPLE(TYPE, SOURCE, LABEL, DIRECTION, REVERSETIME, TARGET,
> ID) values('test', 'src', 'label', 'direction', 134424245, 'target', 'id');
>
> Thanks
>
>
> On 25 August 2016 at 20:50, Michael McAllister 
> wrote:
>
>> Can you please provide the sample rowkey? It is blank in your email
>> below. Alternatively, provide an UPSERT VALUES statement I can use to
>> insert a row that I can work with myself.
>>
>>
>>
>> Michael McAllister
>>
>> Staff Data Warehouse Engineer | Decision Systems
>>
>> mmcallis...@homeaway.com | C: 512.423.7447 | skype: michael.mcallister.ha
>>  | webex: https://h.a/mikewebex
>>
>> This electronic communication (including any attachment) is
>> confidential.  If you are not an intended recipient of this communication,
>> please be advised that any disclosure, dissemination, distribution, copying
>> or other use of this communication or any attachment is strictly
>> prohibited.  If you have received this communication in error, please
>> notify the sender immediately by reply e-mail and promptly destroy all
>> electronic and printed copies of this communication and any attachment.
>>
>>
>>
>> *From: *Anil 
>> *Reply-To: *"user@phoenix.apache.org" 
>> *Date: *Thursday, August 25, 2016 at 10:08 AM
>> *To: *"user@phoenix.apache.org" 
>> *Subject: *Re: Extracting column values from Phoenix composite primary
>> key
>>
>>
>>
>> Hi Michael,
>>
>>
>>
>> Table create statement :
>>
>>
>>
>> CREATE TABLE SAMPLE(TYPE VARCHAR NOT NULL, SOURCE VARCHAR NOT NULL, LABEL
>> VARCHAR NOT NULL, DIRECTION VARCHAR(10) NOT NULL, REVERSETIME UNSIGNED_LONG
>> NOT NULL, TARGET VARCHAR, "CF".ID VARCHAR, CONSTRAINT PK PRIMARY
>> KEY(TYPE, SOURCE, LABEL, DIRECTION, REVERSETIME, TARGET)) COMPRESSION =
>> 'SNAPPY'
>>
>>
>>
>> No salt buckets defined.
>>
>>
>>
>> Sample table row key -
>>
>>
>>
>> byte[] startRow = ByteUtil.concat(
>>     PVarchar.INSTANCE.toBytes("test"),
>>     QueryConstants.SEPARATOR_BYTE_ARRAY,
>>     PVarchar.INSTANCE.toBytes("src"),
>>     QueryConstants.SEPARATOR_BYTE_ARRAY,
>>     PVarchar.INSTANCE.toBytes("label"),
>>     QueryConstants.SEPARATOR_BYTE_ARRAY,
>>     PVarchar.INSTANCE.toBytes("direction"),
>>     QueryConstants.SEPARATOR_BYTE_ARRAY,
>>     PUnsignedLong.INSTANCE.toBytes(1235464603853L),
>>     PVarchar.INSTANCE.toBytes("target"));
>>
>>
>>
>> I am trying to extract the TARGET column. Thanks.
>>
>>
>>
>> Regards,
>>
>> Anil
>>
>>
>>
>> On 25 August 2016 at 19:29, Michael McAllister 
>> wrote:
>>
>> Can you provide an example of one of the rowkeys, the values you are
>> expecting out of it, and the full table definition? Of importance in the
>> table definition will be whether you have salt buckets defined.
>>
>>
>>
>> Michael McAllister
>>
>> Staff Data Warehouse Engineer | Decision Systems
>>
>>
>>
>>
>> *From: *Anil 
>> *Reply-To: *"user@phoenix.apache.org" 
>> *Date: *Thursday, August 25, 2016 at 1:09 AM
>> *To: *"user@phoenix.apache.org" 
>> *Subject: *Re: Extracting column values from Phoenix composite primary
>> key
>>
>>
>>
>> Hi Michael,
>>
>>
>>
>> Thank you for the response.
>>
>>
>>
>> Unfortunately, that is not working.
>>
>>
>>
>> Following is the snippet. One of the columns is an unsigned long, and I am
>> trying to extract the last column (with value 'target' in the code below).
>>
>> Please correct me if I am doing something wrong.
>>
>>
>>
>> byte SEPARATOR_BYTE = 0;
>> byte[] SEPARATOR_BYTE_ARRAY = { 0 };
>>
>> byte[] startRow = ByteUtil.concat(
>>     PVarchar.INSTANCE.toBytes("test"),
>>     QueryConstants.SEPARATOR_BYTE_ARRAY,
>>     PVarchar.INSTANCE.toBytes("src"),
>>     QueryConstants.SEPARATOR_BYTE_ARRAY,
>>
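For what it's worth, a composite key shaped like the one in this thread can also be decoded back into its columns with plain Java, without the Phoenix helper classes. A minimal sketch, not the official Phoenix API: all names here are hypothetical, and it assumes UTF-8 VARCHAR values, 0x00 separators between variable-width columns, and the 8-byte big-endian encoding Phoenix uses for UNSIGNED_LONG.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class RowKeyParser {
    // Decode the trailing TARGET column from a row key laid out as:
    //   TYPE 0x00 SOURCE 0x00 LABEL 0x00 DIRECTION 0x00 REVERSETIME(8 bytes) TARGET
    // REVERSETIME is fixed width, so no separator byte precedes TARGET.
    static String extractTarget(byte[] rowKey) {
        int pos = 0;
        for (int sep = 0; sep < 4; sep++) {   // skip the four leading VARCHAR columns
            while (rowKey[pos] != 0) pos++;   // scan to the next 0x00 separator
            pos++;                            // step past the separator itself
        }
        pos += 8;                             // skip the fixed-width UNSIGNED_LONG
        return new String(rowKey, pos, rowKey.length - pos, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Rebuild the sample key from the thread by hand, then decode it.
        byte[] prefix = "test\0src\0label\0direction\0".getBytes(StandardCharsets.UTF_8);
        byte[] target = "target".getBytes(StandardCharsets.UTF_8);
        byte[] key = ByteBuffer.allocate(prefix.length + 8 + target.length)
                .put(prefix).putLong(1235464603853L).put(target).array();
        System.out.println(extractTarget(key));  // prints "target"
    }
}
```

The same scan-and-skip approach generalizes to the other columns: variable-width VARCHARs end at a 0x00 separator, fixed-width types are skipped by their known byte length.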

Re: Help w/ table that suddenly keeps timing out

2016-08-29 Thread Ted Yu
I searched for "Cannot get all table regions" in the hbase repo - no hit.
It seems to be a Phoenix error.

Anyway, the cause could be the one offline region for this table.
Can you retrieve the encoded region name and search for it in the master
log?

Feel free to pastebin snippets of master / region server logs if needed
(with proper redaction).

See if the following shell command works:

  hbase> assign 'REGIONNAME'
  hbase> assign 'ENCODED_REGIONNAME'
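To pull out the encoded region name for the grep/assign steps above: HBase region names follow the pattern `<table>,<start key>,<timestamp>.<encoded name>.`, where the encoded name is 32 hex characters between the final two dots. A small sketch (the region name below is made up):

```shell
# Full region name as copied from the master UI or logs (hypothetical example).
# Format: <table>,<start key>,<timestamp>.<encoded name>.
region='SAMPLE,somekey,1472400000000.1a2b3c4d5e6f7890abcdef1234567890.'

# Extract the 32-character hex encoded region name between the last two dots.
encoded=$(printf '%s\n' "$region" | sed -n 's/.*\.\([0-9a-f]\{32\}\)\.$/\1/p')
echo "$encoded"
```

That value is what `assign 'ENCODED_REGIONNAME'` expects, and what to grep for in the master log.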

Cheers

On Mon, Aug 29, 2016 at 9:41 AM, Riesland, Zack 
wrote:

> Our cluster recently had some issues related to network outages*.
>
> When all the dust settled, HBase eventually "healed" itself, and almost
> everything is back to working well, with a couple of exceptions.
>
> In particular, we have one table where almost every (Phoenix) query times
> out - which was never the case before. It's very small compared to most of
> our other tables at around 400 million rows.
>
> I have tried with a raw JDBC connection in Java code as well as with Aqua
> Data Studio, both of which usually work fine.
>
> The specific failure is that after 15 minutes (the set timeout), I get a
> one-line error that says: “Error 1102 (XCL02): Cannot get all table regions”
>
> When I look at the GUI tools (like
> http://<server>:16010/master-status#storeStats), it shows '1' under
> "offline regions" for that table (it has 33 total regions). Almost all the
> other tables show '0'.
>
> Can anyone help me troubleshoot this?
>
> Are there Phoenix tables I can clear out that may be confused?
>
> This isn’t an issue with the schema or skew or anything. The same table
> with the same data was lightning fast before these HBase issues.
>
> I know there is a CLI tool for fixing HBase issues. I'm wondering whether
> that "offline region" is the cause of these timeouts.
>
> If not, how can I figure it out?
>
> Thanks!
>
>
>
> * FWIW, what happened was that DNS stopped working for a while, so HBase
> started referring to all the region servers by IP address, which somewhat
> worked, until the region servers restarted. Then they were hosed until a
> bit of manual intervention.
>
>
>


Help w/ table that suddenly keeps timing out

2016-08-29 Thread Riesland, Zack
Our cluster recently had some issues related to network outages*.

When all the dust settled, HBase eventually "healed" itself, and almost
everything is back to working well, with a couple of exceptions.

In particular, we have one table where almost every (Phoenix) query times out - 
which was never the case before. It's very small compared to most of our other 
tables at around 400 million rows.

I have tried with a raw JDBC connection in Java code as well as with Aqua Data 
Studio, both of which usually work fine.

The specific failure is that after 15 minutes (the set timeout), I get a
one-line error that says: “Error 1102 (XCL02): Cannot get all table regions”

When I look at the GUI tools (like http://<server>:16010/master-status#storeStats),
it shows '1' under "offline regions" for that table (it has 33 total regions).
Almost all the other tables show '0'.

Can anyone help me troubleshoot this?

Are there Phoenix tables I can clear out that may be confused?

This isn’t an issue with the schema or skew or anything. The same table with
the same data was lightning fast before these HBase issues.

I know there is a CLI tool for fixing HBase issues. I'm wondering whether that 
"offline region" is the cause of these timeouts.

If not, how can I figure it out?

Thanks!


* FWIW, what happened was that DNS stopped working for a while, so HBase 
started referring to all the region servers by IP address, which somewhat 
worked, until the region servers restarted. Then they were hosed until a bit of 
manual intervention.