Re: Re: how to secure Phoenix

2016-12-05 Thread lk_phoenix
If I use the Phoenix JDBC driver, how do I pass a user name to HBase?

For example:
hbase(main):012:0> grant 'user01', 'CR'
so user01 can connect to and read tables through the JDBC driver,
but my client PC has only one OS user, named 201.
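
For reference, here is roughly what I am trying on the client side (just a sketch; it assumes a Kerberos-secured cluster, and the ZooKeeper quorum, principal, and keytab path below are placeholders -- on a non-secure cluster HBase only sees the client's OS user):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PhoenixUserTest {
    public static void main(String[] args) throws Exception {
        // Placeholder values: quorum, principal, and keytab path are illustrative.
        // On a Kerberized cluster the principal and keytab can be appended to the
        // JDBC URL, so the connection runs as user01 rather than the local OS user.
        String url = "jdbc:phoenix:zk1,zk2,zk3:2181:/hbase"
                + ":user01@EXAMPLE.COM:/etc/security/keytabs/user01.keytab";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 5")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));
            }
        }
    }
}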

2016-12-06 

lk_phoenix 



From: Josh Elser 
Sent: 2016-12-05 23:41
Subject: Re: how to secure Phoenix
To: "user"
Cc:

Yes, use the HBase-provided access control mechanisms. 
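
A rough sketch of doing the grant programmatically, equivalent to the shell's grant command (the table and user names are illustrative, and it assumes the AccessController coprocessor is enabled on the cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.Permission;

public class GrantReadExample {
    public static void main(String[] args) throws Throwable {
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf)) {
            // Grant user01 READ on the HBase table that backs the Phoenix table
            // (run as a user with ADMIN rights; "MY_TABLE" is illustrative).
            AccessControlClient.grant(conn, TableName.valueOf("MY_TABLE"),
                    "user01", null, null, Permission.Action.READ);
        }
    }
}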

lk_phoenix wrote: 
> hi, all: 
> I want to know how to add access control to the table I create with Phoenix. 
> Do I need to add the privileges through HBase? 
> 2016-12-05 
>  
> lk_phoenix 

Re: Global Indexes and impact on availability

2016-12-05 Thread Neelesh
Local indexes would indeed solve this problem, at the cost of some penalty
at read time. Unfortunately, our vendor distribution (Hortonworks) still
does not have all the bug fixes required for local indexes to work in a
production setting. They consider local indexes to still be in beta and are
explicit that local indexes should not be used yet.
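
(For anyone following along, the two index flavors being contrasted look roughly like this; a sketch via JDBC, with the quorum, table, and column names purely illustrative:)

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class IndexFlavors {
    public static void main(String[] args) throws Exception {
        try (Connection conn =
                     DriverManager.getConnection("jdbc:phoenix:zk1,zk2,zk3:2181");
             Statement stmt = conn.createStatement()) {
            // Global index: stored in its own table, so a write to a data region
            // issues RPCs to whichever region servers host the index regions.
            stmt.execute(
                "CREATE INDEX EVENTS_BY_USER ON EVENTS (USER_ID) INCLUDE (EVENT_TYPE)");
            // Local index: index data is co-located with the data regions, avoiding
            // the cross-server write at the cost of some read-time penalty.
            stmt.execute("CREATE LOCAL INDEX EVENTS_BY_USER_LOCAL ON EVENTS (USER_ID)");
        }
    }
}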

I was interested in seeing if anyone in the community has experienced
similar issues around global indexes.

On Mon, Dec 5, 2016 at 2:39 PM, James Taylor  wrote:

> Have you tried local indexes?
>
> On Mon, Dec 5, 2016 at 2:35 PM Neelesh  wrote:
>
>> Hello,
>>   When a region server is under stress (hotspotting, or large
>> replication, call queue sizes hitting the limit, other processes competing
>> with HBase etc), we experience latency spikes for all regions hosted by
>> that region server.  This is somewhat expected in the plain HBase world.
>>
>> However, with a phoenix global index, this service deterioration seems to
>> propagate to a lot more region servers, since the affected RS hosts some
>> index regions. The actual data regions are on another RS and latencies on
>> that RS spike because it cannot complete the index update calls quickly.
>> And that second RS now causes issues on yet another one and so on.
>>
>> We've seen this happen on our cluster, and how we deal with this is by
>> "fixing" the original RS - split regions/restart/move around regions,
>> depending on what the problem is.
>>
>> Has anyone experienced this issue? It feels like antithetical behavior
>> for a distributed system: the cluster breaking down for the very reasons
>> it's supposed to protect against.
>>
>> I'd love to hear the thoughts of the Phoenix community on this
>>
>


Re: Global Indexes and impact on availability

2016-12-05 Thread James Taylor
Have you tried local indexes?

On Mon, Dec 5, 2016 at 2:35 PM Neelesh  wrote:

> Hello,
>   When a region server is under stress (hotspotting, or large replication,
> call queue sizes hitting the limit, other processes competing with HBase
> etc), we experience latency spikes for all regions hosted by that region
> server.  This is somewhat expected in the plain HBase world.
>
> However, with a phoenix global index, this service deterioration seems to
> propagate to a lot more region servers, since the affected RS hosts some
> index regions. The actual data regions are on another RS and latencies on
> that RS spike because it cannot complete the index update calls quickly.
> And that second RS now causes issues on yet another one and so on.
>
> We've seen this happen on our cluster, and how we deal with this is by
> "fixing" the original RS - split regions/restart/move around regions,
> depending on what the problem is.
>
> Has anyone experienced this issue? It feels like antithetical behavior
> for a distributed system: the cluster breaking down for the very reasons
> it's supposed to protect against.
>
> I'd love to hear the thoughts of the Phoenix community on this
>


Global Indexes and impact on availability

2016-12-05 Thread Neelesh
Hello,
  When a region server is under stress (hotspotting, or large replication,
call queue sizes hitting the limit, other processes competing with HBase
etc), we experience latency spikes for all regions hosted by that region
server.  This is somewhat expected in the plain HBase world.

However, with a phoenix global index, this service deterioration seems to
propagate to a lot more region servers, since the affected RS hosts some
index regions. The actual data regions are on another RS and latencies on
that RS spike because it cannot complete the index update calls quickly.
And that second RS now causes issues on yet another one and so on.

We've seen this happen on our cluster, and how we deal with this is by
"fixing" the original RS - split regions/restart/move around regions,
depending on what the problem is.

Has anyone experienced this issue? It feels like antithetical behavior for
a distributed system: the cluster breaking down for the very reasons it's
supposed to protect against.

I'd love to hear the thoughts of the Phoenix community on this


Re: Memory leak

2016-12-05 Thread Jonathan Leech
Created PHOENIX-3518 and included the requested info.

> On Dec 5, 2016, at 11:06 AM, Samarth Jain  wrote:
> 
> Thanks for reporting this, Jonathan. Would you mind filing a JIRA, preferably 
> with the object tree that you are seeing in the leak? Also, what versions of 
> HBase and Phoenix are you using?
> 
>> On Mon, Dec 5, 2016 at 9:53 AM Jonathan Leech  wrote:
>> Looks like PHOENIX-2357 introduced a memory leak, at least for me... I end 
>> up with old gen filled up with objects - 100,000,000 instances each of 
>> WeakReference and LinkedBlockingQueue$Node, owned by 
>> ConnectionQueryServicesImpl.connectionsQueue. The PhoenixConnection referred 
>> to by the WeakReference is null for all but the few active connections. I 
>> don't see anything in the log - I can't confirm that it's logging properly 
>> though. Per the docs for ScheduledThreadPoolExecutor, if the run method 
>> throws an error, further executions are suppressed.
>> 
>> Thanks,
>> Jonathan


Re: Memory leak

2016-12-05 Thread Samarth Jain
Thanks for reporting this, Jonathan. Would you mind filing a JIRA,
preferably with the object tree that you are seeing in the leak? Also, what
versions of HBase and Phoenix are you using?

On Mon, Dec 5, 2016 at 9:53 AM Jonathan Leech  wrote:

> Looks like PHOENIX-2357 introduced a memory leak, at least for me... I end
> up with old gen filled up with objects - 100,000,000 instances each of
> WeakReference and LinkedBlockingQueue$Node, owned by
> ConnectionQueryServicesImpl.connectionsQueue. The PhoenixConnection
> referred to by the WeakReference is null for all but the few active
> connections. I don't see anything in the log - I can't confirm that it's
> logging properly though. Per the docs for ScheduledThreadPoolExecutor, if
> the run method throws an error, further executions are suppressed.
>
> Thanks,
> Jonathan
>


Memory leak

2016-12-05 Thread Jonathan Leech
Looks like PHOENIX-2357 introduced a memory leak, at least for me... I end up 
with old gen filled up with objects - 100,000,000 instances each of 
WeakReference and LinkedBlockingQueue$Node, owned by 
ConnectionQueryServicesImpl.connectionsQueue. The PhoenixConnection referred to 
by the WeakReference is null for all but the few active connections. I don't 
see anything in the log - I can't confirm that it's logging properly though. 
Per the docs for ScheduledThreadPoolExecutor, if the run method throws an 
error, further executions are suppressed.

Thanks,
Jonathan
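
In case it helps, the usage pattern on our side is simply many short-lived connections; a stripped-down sketch (the connection URL and query are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class ConnectionChurn {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:phoenix:zk1,zk2,zk3:2181"; // placeholder quorum
        for (int i = 0; i < 1_000_000; i++) {
            // Open, use briefly, and close; over time old gen fills with the
            // WeakReference / LinkedBlockingQueue$Node instances held by
            // ConnectionQueryServicesImpl.connectionsQueue described above.
            try (Connection conn = DriverManager.getConnection(url)) {
                conn.createStatement()
                    .executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 1")
                    .close();
            }
        }
    }
}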


Re: how to secure Phoenix

2016-12-05 Thread Josh Elser

Yes, use the HBase-provided access control mechanisms.

lk_phoenix wrote:

hi, all:
I want to know how to add access control to the table I create with Phoenix.
Do I need to add the privileges through HBase?
2016-12-05

lk_phoenix


Re: ClassCastException: org.joda.time.DateTime

2016-12-05 Thread lk_phoenix
I know what's wrong: I need to use phoenix-4.9.0-HBase-1.2-pig.jar, not 
phoenix-4.9.0-HBase-1.2-client.jar.

2016-12-05 

lk_phoenix 



发件人:"lk_phoenix"
发送时间:2016-12-05 13:41
主题:ClassCastException: org.joda.time.DateTime
收件人:"user.phoenix"
抄送:

hi, all: 
I ran a test under CentOS 7.2, JDK 1.8.0_112, apache-phoenix-4.9.0-HBase-1.2, 
and pig-0.16.0. I still get the same error: 
Error: java.io.IOException: java.lang.ClassCastException: org.joda.time.DateTime cannot be cast to org.apache.phoenix.shaded.org.joda.time.DateTime
    at org.apache.pig.backend.hadoop.executionengine.physicalLayer.relationalOperators.StoreFuncDecorator.putNext(StoreFuncDecorator.java:83)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:144)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigOutputFormat$PigRecordWriter.write(PigOutputFormat.java:97)
    at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
    at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
    at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigMapOnly$Map.collect(PigMapOnly.java:48)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.runPipeline(PigGenericMapBase.java:282)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:275)
    at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.map(PigGenericMapBase.java:65)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.ClassCastException: org.joda.time.DateTime cannot be cast to org.apache.phoenix.shaded.org.joda.time.DateTime
    at org.apache.phoenix.pig.util.TypeUtil.castPigTypeToPhoenix(TypeUtil.java:199)
    at org.apache.phoenix.pig.PhoenixHBaseStorage.putNext(PhoenixHBaseStorage.java:189)
    at
my source data:
AA,CC,3.5,EE,GG,2015-08-30
BB,DD,3.5,FF,HH,2015-08-05
pig script:
AA = load '/sourcedata/farm-prod/complete3.txt' USING PigStorage(',') AS(
name:chararray,
address:chararray,
price:double,
unit:chararray,
info_source:chararray,
date:datetime
);
dump AA;
output:
(AA,CC,3.5,EE,GG,2015-08-30T00:00:00.000-04:00)
(BB,DD,3.5,FF,HH,2015-08-05T00:00:00.000-04:00)
STORE AA into 'hbase://FARM_PRODUCT_PRICE' using 
org.apache.phoenix.pig.PhoenixHBaseStorage('dev7,dev8,dev9','-batchSize 5000');
my Phoenix table:
create table FARM_PRODUCT_PRICE
(
name varchar(30),
address varchar(80),
price double,
unit varchar(20),
info_source varchar(80),
date date not null CONSTRAINT pk PRIMARY KEY (name,address,date)
)VERSIONS=1,SALT_BUCKETS=3,COMPRESSION='snappy';

2016-12-05


lk_phoenix 

Re: Re: Apache Spark Plugin doesn't support Spark 2.0

2016-12-05 Thread lk_phoenix
thanks a lot.

2016-12-05 

lk_phoenix 



From: Dequn Zhang 
Sent: 2016-12-05 15:53
Subject: Re: Apache Spark Plugin doesn't support Spark 2.0
To: "user"
Cc:

Spark changed the DataFrame definition in 2.x; it is not compatible with 1.x, and 
the Phoenix Spark plugin was developed against the Spark 1.x API. So, for now, you 
can use Phoenix JDBC instead in Spark 2.0. 
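
A minimal sketch of the JDBC route from Spark 2.0 (it assumes the Phoenix client jar is on the Spark classpath; the ZooKeeper quorum and table name are placeholders):

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class PhoenixJdbcRead {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("phoenix-jdbc-read")
                .getOrCreate();
        // Read a Phoenix table through Spark's generic JDBC data source.
        Dataset<Row> df = spark.read()
                .format("jdbc")
                .option("driver", "org.apache.phoenix.jdbc.PhoenixDriver")
                .option("url", "jdbc:phoenix:zk1,zk2,zk3:2181")
                .option("dbtable", "MY_TABLE")
                .load();
        df.show();
        spark.stop();
    }
}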


You can look at this JIRA:


https://issues.apache.org/jira/browse/PHOENIX-


On 5 December 2016 at 15:45:05, lk_phoenix (lk_phoe...@163.com) wrote:
hi, all:
when I try the Spark plugin with Phoenix 4.9, I get this error:

java.lang.NoClassDefFoundError: org/apache/spark/sql/DataFrame 

2016-12-05


lk_phoenix