Re: HBase checkAndPut Support

2016-07-19 Thread Josh Elser

Did you read James' response in PHOENIX-2271? [1]

Restating for you: as a work-around, you could try to use the recent 
transaction support which was added via Apache Tephra to prevent 
multiple clients from modifying a cell. This would be much less 
efficient than the "native" checkAndPut API call from HBase; however, I 
cannot think of any other solution which exists in a released version of 
Apache Phoenix.


[1] 
https://issues.apache.org/jira/browse/PHOENIX-2271?focusedCommentId=14877306&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14877306
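
To make the work-around concrete, here is a rough sketch of the
read-check-then-upsert flow it implies, written as plain JDBC. This is not
from the thread: the table MYTABLE, its columns (PK, VERSION, VAL), and the
JDBC URL are made up, and it assumes Phoenix 4.7+ with transactions enabled
(phoenix.transactions.enabled=true plus a running Tephra transaction manager)
and a table created with TRANSACTIONAL=true.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class ConditionalUpsertSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:localhost")) {
                conn.setAutoCommit(false); // start a Tephra-backed transaction
                long newVersion = 42L;
                boolean shouldWrite;
                try (PreparedStatement sel = conn.prepareStatement(
                        "SELECT VERSION FROM MYTABLE WHERE PK = ?")) {
                    sel.setString(1, "row1");
                    try (ResultSet rs = sel.executeQuery()) {
                        // write only if the row is absent or the incoming
                        // version is >= the stored one
                        shouldWrite = !rs.next() || rs.getLong(1) <= newVersion;
                    }
                }
                if (shouldWrite) {
                    try (PreparedStatement up = conn.prepareStatement(
                            "UPSERT INTO MYTABLE (PK, VERSION, VAL) VALUES (?, ?, ?)")) {
                        up.setString(1, "row1");
                        up.setLong(2, newVersion);
                        up.setString(3, "new value");
                        up.executeUpdate();
                    }
                }
                // a conflicting concurrent write should cause commit() to fail,
                // at which point the caller can retry the whole transaction
                conn.commit();
            }
        }
    }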


shalandra.sha...@barclays.com wrote:

Hi team,

Any pointers/help?

Thanks

Shal

*From:*Sharma, Shalandra: IT (NYK)
*Sent:* Monday, May 23, 2016 11:25
*To:* 'user@phoenix.apache.org'
*Subject:* HBase checkAndPut Support

Hi Team,

We are evaluating the use of Phoenix in our project. We are using CDH 5.5.2.
One of our use cases is to update an HBase row only if the incoming value for a
column (version) is greater than or equal to the current value. Using the
HBase core APIs we can achieve this with checkAndPut, but I could not find
any similar mechanism in Phoenix.
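
For context, the HBase-native call being referred to looks roughly like the
sketch below. The table, column family, and qualifier names are made up, and
the CompareOp overload (and the direction of its comparison) should be
verified against the HBase client version in use.

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CheckAndPutSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = conn.getTable(TableName.valueOf("MYTABLE"))) {
                long newVersion = 42L;
                Put put = new Put(Bytes.toBytes("row1"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("version"), Bytes.toBytes(newVersion));
                // guarded put: applied only if the stored "version" cell passes the
                // comparison against newVersion (double-check the CompareOp semantics
                // for your HBase release)
                boolean applied = table.checkAndPut(
                        Bytes.toBytes("row1"), Bytes.toBytes("cf"), Bytes.toBytes("version"),
                        CompareOp.LESS_OR_EQUAL, Bytes.toBytes(newVersion), put);
                System.out.println("applied: " + applied);
            }
        }
    }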

I have followed the discussion in the following JIRAs, and it looks like
this is something that will only be available in a future release. Can
you please advise whether a work-around is available? Otherwise this will
be a blocker for us.

https://issues.apache.org/jira/browse/PHOENIX-2271

https://issues.apache.org/jira/browse/PHOENIX-2275

https://issues.apache.org/jira/browse/PHOENIX-2199

Thanks

*Shalandra Sharma*




Re: phoenix.query.maxServerCacheBytes not used

2016-07-19 Thread Mujtaba Chohan
phoenix.query.maxServerCacheBytes is a client-side parameter. If you are
using bin/sqlline.py, set this property in bin/hbase-site.xml and
restart sqlline.

- mujtaba
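
For a plain JDBC client (rather than sqlline), the same client-side settings
can also be supplied as connection properties; a minimal sketch follows, with
a placeholder JDBC URL and the 400 MB value from the question below.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class ClientCacheSettings {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // client-side limits, same keys as in the client's hbase-site.xml
            props.setProperty("phoenix.query.maxServerCacheBytes", "419430400"); // ~400 MB
            props.setProperty("phoenix.query.maxGlobalMemoryPercentage", "25");
            try (Connection conn =
                    DriverManager.getConnection("jdbc:phoenix:localhost:2181", props)) {
                // statements executed on this connection use the limits set above
            }
        }
    }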

On Tue, Jul 19, 2016 at 1:59 PM, Nathan Davis wrote:

> Hi,
> I am running a standalone HBase locally with Phoenix installed by dropping
> the jars into the HBase lib directory. I have added the following to my
> hbase-site.xml and restarted HBase:
>
> <property>
>   <name>phoenix.query.maxServerCacheBytes</name>
>   <value>419430400</value>
> </property>
> <property>
>   <name>phoenix.query.maxGlobalMemoryPercentage</name>
>   <value>25</value>
> </property>
>
>
> However, I am still getting the following error when doing a regular inner
> join to a ~5-million-row RHS table (notice that the error says "...maximum
> allowed size (104857600 bytes)" even though I have changed that setting to
> 400MB):
>
> java.sql.SQLException: Encountered exception in sub plan [0] execution.
>> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:193)
>> at org.apache.phoenix.execute.HashJoinPlan.iterator(HashJoinPlan.java:138)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:276)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement$1.call(PhoenixStatement.java:261)
>> at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement.executeQuery(PhoenixStatement.java:260)
>> at
>> org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:248)
>> at
>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:172)
>> at
>> org.apache.phoenix.jdbc.PhoenixPreparedStatement.execute(PhoenixPreparedStatement.java:177)
>> at
>> org.apache.phoenix.jdbc.PhoenixConnection.executeStatements(PhoenixConnection.java:354)
>> at
>> org.apache.phoenix.util.PhoenixRuntime.executeStatements(PhoenixRuntime.java:298)
>> at org.apache.phoenix.util.PhoenixRuntime.main(PhoenixRuntime.java:243)
>> Caused by: org.apache.phoenix.join.MaxServerCacheSizeExceededException:
>> Size of hash cache (104857638 bytes) exceeds the maximum allowed size
>> (104857600 bytes)
>> at
>> org.apache.phoenix.join.HashCacheClient.serialize(HashCacheClient.java:110)
>> at
>> org.apache.phoenix.join.HashCacheClient.addHashCache(HashCacheClient.java:83)
>> at
>> org.apache.phoenix.execute.HashJoinPlan$HashSubPlan.execute(HashJoinPlan.java:381)
>> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:162)
>> at org.apache.phoenix.execute.HashJoinPlan$1.call(HashJoinPlan.java:158)
>> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>> at
>> org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>
>
> It seems like my `maxServerCacheBytes` setting is not getting picked up,
> but I'm not sure why. I'm pretty new to Phoenix, so I'm sure it's something
> simple...
>
> Thanks up front for the help!
>
> -Nathan Davis
>


Re: Cache of region boundaries are out of date

2016-07-19 Thread ashu99
If your application allows it, you should restart your HBase to recompute
the stats.

Alicia

On Mon, Jul 18, 2016 at 1:11 PM, Michal Medvecky  wrote:

> Hello,
>
> I have a table with ~1B rows and am trying to select some columns. I get this exception:
>
> java.lang.RuntimeException:
> org.apache.phoenix.schema.StaleRegionBoundaryCacheException: ERROR 1108
> (XCL08): Cache of region boundaries are out of date.
> at sqlline.IncrementalRows.hasNext(IncrementalRows.java:73)
> at sqlline.TableOutputFormat.print(TableOutputFormat.java:33)
> at sqlline.SqlLine.print(SqlLine.java:1653)
> at sqlline.Commands.execute(Commands.java:833)
> at sqlline.Commands.sql(Commands.java:732)
> at sqlline.SqlLine.dispatch(SqlLine.java:808)
> at sqlline.SqlLine.begin(SqlLine.java:681)
> at sqlline.SqlLine.start(SqlLine.java:398)
> at sqlline.SqlLine.main(SqlLine.java:292)
>
> I tried DELETE FROM SYSTEM.STATS WHERE PHYSICAL_NAME='MEDIA';
>
> but it did not help.
>
> Any suggestions?
>
> I'm using 4.6.0 on HBase 1.1.
>
> Thank you,
>
> Michal
>


HBase prefix scan.

2016-07-19 Thread ankit beohar
Hi All,

How can I achieve an HBase PrefixFilter in a Phoenix query?
My row key is 9898989898_@#$_ABC. I want all records with the prefix
9898989898_@#$, which I can achieve in HBase through a PrefixFilter; I need the
same in a Phoenix query.
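
For reference, the HBase-side prefix scan being described is roughly the
following sketch (the table name is made up):

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.ResultScanner;
    import org.apache.hadoop.hbase.client.Scan;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.filter.PrefixFilter;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PrefixScanSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
                 Table table = conn.getTable(TableName.valueOf("MYTABLE"))) {
                Scan scan = new Scan();
                // return only rows whose key starts with the given prefix
                scan.setFilter(new PrefixFilter(Bytes.toBytes("9898989898_@#$")));
                try (ResultScanner scanner = table.getScanner(scan)) {
                    for (Result r : scanner) {
                        System.out.println(Bytes.toString(r.getRow()));
                    }
                }
            }
        }
    }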

Best Regards,
ANKIT BEOHAR