Re: Phoenix Client Backwards Compatibility

2020-10-12 Thread Sukumar Maddineni
It seems like the perfect scenario for Phoenix Query Server, where you
don't have a thick client that is dependent on the HBase version of the
server side.

--
Sukumar
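A minimal sketch of the approach Sukumar describes: instead of matching a thick phoenix-client jar to each cluster's HBase version, keep one thin-client JDBC URL per cluster and route everything through that cluster's Phoenix Query Server. The hostnames below are placeholders; 8765 is PQS's default port.

```python
# Sketch: one thin client talking to several clusters of differing
# Phoenix/HBase versions by going through each cluster's Phoenix Query
# Server (PQS). Hostnames here are placeholder assumptions.

def thin_client_url(pqs_host, port=8765):
    """Build the thin-client JDBC URL for a PQS endpoint."""
    return ("jdbc:phoenix:thin:url=http://%s:%d;"
            "serialization=PROTOBUF" % (pqs_host, port))

# One URL per cluster; the same client jar works against all of them,
# because version-specific code stays on the server side with PQS.
CLUSTERS = {
    "cluster-a": thin_client_url("pqs-a.example.com"),
    "cluster-b": thin_client_url("pqs-b.example.com"),
}
```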

On Mon, Oct 12, 2020 at 8:03 PM Ken  wrote:

> I'd like to build an application that talks to multiple HBase clusters
> using Phoenix.
> I understand that those clusters might have different HBase versions and
> thus different phoenix-server versions.
> However, ideally on our application side we only need to install one
> phoenix-client.jar.
> I was wondering if the phoenix-client is fully backwards compatible,
>
> i.e. can I use the latest phoenix-client to talk to all older versions of
> phoenix-server?
>
>
>

Re: Too many connections from / - max is 60

2020-06-09 Thread Sukumar Maddineni
Hi Anil,

I think if you create that missing HBase table (the index table) with dummy
metadata (using the HBase shell), then dropping and recreating the index
from Phoenix should work.

Phoenix 4.7 indexing code might have issues related to cross-RPC calls
between RegionServers, which can cause ZooKeeper connection leaks. I would
recommend upgrading to 4.14.3 if possible, which has a lot of indexing
improvements related to consistency and also performance.

--
Sukumar
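For the per-host connection tallying discussed in this thread, ZooKeeper's 4LW `cons` command (e.g. `echo cons | nc <zk-host> 2181`) lists every open connection, and a few lines of Python can aggregate them by client IP to spot a leaking RegionServer. The sample output format below is an approximation and varies across ZooKeeper versions.

```python
# Sketch: count ZooKeeper connections per client host from the output of
# the 4LW "cons" command. The sample line format is an approximation; it
# differs slightly between ZK versions.
from collections import Counter
import re

def connections_per_host(cons_output):
    """Tally connections by client IP from `cons` lines like
    ' /10.74.0.120:54321[1](queued=0,recved=10,sent=10)'."""
    counts = Counter()
    for line in cons_output.splitlines():
        m = re.match(r"\s*/([\d.]+):\d+", line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = """\
 /10.74.0.120:54321[1](queued=0,recved=10,sent=10)
 /10.74.0.120:54322[1](queued=0,recved=3,sent=3)
 /10.74.10.228:41000[1](queued=0,recved=7,sent=7)
"""
print(connections_per_host(sample).most_common(1))  # the chattiest host
```

Comparing these per-host counts against the ZK connection count seen in a RegionServer heap dump (as Josh suggests downthread) shows whether the leak is on the RS side.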

On Mon, Jun 8, 2020, 10:15 PM anil gupta  wrote:

> You were right from the beginning. It is a problem with Phoenix secondary
> indexes!
> I tried the 4LW zk commands after enabling them; they didn't really
> provide much extra information.
> Then, I took a heap and thread dump of an RS that was throwing a lot of
> max connection errors. Most of the RPCs were busy with:
> *RpcServer.FifoWFPBQ.default.handler=129,queue=9,port=16020*
> *org.apache.phoenix.hbase.index.exception.SingleIndexWriteFailureException
> @ 0x63d837320*
> *org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException @
> 0x41f9a0270*
>
> *Failed 499 actions: Table 'DE:TABLE_IDX_NEW' was not found, got:
> DE:TABLE_IDX.: 499 times, *
>
> After finding the above error, I queried the SYSTEM.CATALOG table. DE:TABLE
> has two secondary indexes but only one secondary index table in
> HBase (DE:TABLE_IDX). The other table, DE:TABLE_IDX_NEW, is missing from
> HBase. I am not really sure how this could happen.
> DE:TABLE_IDX_NEW is listed with index_state='i' in the catalog table. Can
> you tell me what this means? (incomplete?)
> Now, I am trying to delete the primary table to get rid of the index so
> that we can recreate the primary table (it is a small table), but I am
> unable to do so via Phoenix. Can you please tell me how I can
> *delete this table? (restart of the cluster? or doing an upsert in the
> catalog?)*
>
> *Last one: if table_type='u', is it a user-defined table? If
> table_type='i', is it an index table?*
>
> *Thanks a lot for your help!*
>
> *~Anil *
>
> On Wed, Jun 3, 2020 at 8:14 AM Josh Elser  wrote:
>
>> The RegionServer hosting hbase:meta will certainly have "load" placed
>> onto it, commensurate with the size of your cluster and the number of
>> clients you're running. However, this shouldn't be increasing the number
>> of connections to ZK from a RegionServer.
>>
>> The RegionServer hosting system.catalog would be unique WRT other
>> Phoenix-table Regions. I don't recall off the top of my head if there
>> is anything specific in the RegionServer code that runs alongside
>> system.catalog (the MetaDataEndpoint protocol) that reaches out to
>> ZooKeeper.
>>
>> If you're using HDP 2.6.3, I wouldn't be surprised if you're running
>> into known and fixed issues where ZooKeeper connections are not cleaned
>> up. That's multiple-years old code.
>>
>> netstat and tcpdump aren't really going to tell you anything you don't
>> already know. From a thread dump or a heap dump, you'll be able to see the
>> number of ZooKeeper connections from a RegionServer. The 4LW commands
>> from ZK will be able to tell you which clients (i.e. RegionServers) have
>> the most connections. These numbers should match (X connections from a
>> RS to a ZK, and X connections in the Java RS process). The focus would
>> need to be on what opens a new connection and what is not properly
>> closing that connection (in every case).
>>
>> On 6/3/20 4:57 AM, anil gupta wrote:
>> > Thanks for sharing insights. Moving hbase mailing list to cc.
>> > Sorry, forgot to mention that we are using Phoenix 4.7 (HDP 2.6.3). This
>> > cluster is mostly being queried via Phoenix, apart from a few pure NoSQL
>> > use cases that use raw HBase APIs.
>> >
>> > I looked further into the zk logs and found that only 6/15 RS are
>> > constantly running into max connection problems (no other IPs/hosts of
>> > our client apps were found). One of those RS is getting 3-4x the
>> > connection errors compared to the others; this RS is hosting hbase:meta
>> > <http://ip-10-74-10-228.us-west-2.compute.internal:16030/region.jsp?name=1588230740>,
>> > regions of Phoenix secondary indexes, and regions of Phoenix and HBase
>> > tables. I also looked into the other 5 RS that are getting max
>> > connection errors; for me nothing really stands out, since all of them
>> > are hosting regions of Phoenix secondary indexes and regions of Phoenix
>> > and HBase tables.
>> >
>> > I also tried to run netstat and tcpdump on the zk host to find an
>> > anomaly, but couldn't find anything apart from the above-mentioned
>> > analysis. I also ran hbck and it reported that things are fine. I am
>> > still unable to pinpoint the exact problem (maybe something with
>> > Phoenix secondary indexes?). Any other pointers to further debug the
>> > problem will be appreciated.
>> >
>> > Lastly, I constantly see the following zk connection loss logs on the
>> > above-mentioned 6 RS:
>> > 2020-06-03 06:40:30,859 WARN
>> > [RpcServer.FifoWFPBQ.default.handler=123,queue=3,port=16020-SendThread(ip-10-74-0-120.us-west-2.compute.internal:2181)]

Re: [ANNOUNCE] New VP Apache Phoenix

2020-04-17 Thread Sukumar Maddineni
Thanks, Josh, for your guidance, and congrats, Ankit...

--
Sukumar

On Fri, Apr 17, 2020 at 12:07 AM Mingliang Liu  wrote:

> Congratulations Ankit, and Thanks Josh!
>
> On Thu, Apr 16, 2020 at 10:52 PM Reid Chan  wrote:
>
>> Congratulation Ankit!
>>
>> --
>>
>> Best regards,
>> R.C
>>
>>
>>
>> 
>> From: Josh Elser 
>> Sent: 16 April 2020 23:14
>> To: d...@phoenix.apache.org; user@phoenix.apache.org
>> Subject: [ANNOUNCE] New VP Apache Phoenix
>>
>> I'm pleased to announce that the ASF board has just approved the
>> transition of VP Phoenix from myself to Ankit. As with all things, this
>> comes with the approval of the Phoenix PMC.
>>
>> The ASF defines the responsibilities of the VP to be largely oversight
>> and secretarial. That is, a VP should be watching to make sure that the
>> project is following all foundation-level obligations and writing the
>> quarterly project reports about Phoenix to summarize the happenings. Of
>> course, a VP can choose to use this title to help drive movement and
>> innovation in the community, as well.
>>
>> With this VP rotation, the PMC has also implicitly agreed to focus on a
>> more regular rotation schedule of the VP role. The current plan is to
>> revisit the VP role in another year.
>>
>> Please join me in congratulating Ankit on this new role and thank him
>> for volunteering.
>>
>> Thank you all for the opportunity to act as VP these last years.
>>
>> - Josh
>>
>



Re: Reverse engineer a Phoenix table definition

2020-04-14 Thread Sukumar Maddineni
How about a simple idea: redirect all DDL statements to SYSTEM.LOG by
default, which would be useful for logging and auditing purposes and also
for recreating tables if needed.

--
Sukumar
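Until a built-in utility like the one discussed downthread exists, a rough sketch of stitching a CREATE TABLE back together from SYSTEM.CATALOG rows. The row shape here (dicts keyed by COLUMN_NAME / DATA_TYPE / COLUMN_SIZE / KEY_SEQ) is an assumption for illustration; real catalog rows carry many more fields, and DATA_TYPE in the catalog is a numeric java.sql.Types code that would first need mapping to a SQL type name.

```python
# Sketch: rebuild a CREATE TABLE statement from rows selected out of
# SYSTEM.CATALOG. The row layout below is a simplified assumption; it
# presumes type codes were already mapped to SQL type names.

def rebuild_ddl(schema, table, rows):
    cols, pk = [], []
    for r in rows:
        col = "%s %s" % (r["COLUMN_NAME"], r["DATA_TYPE"])
        if r.get("COLUMN_SIZE"):
            col += "(%d)" % r["COLUMN_SIZE"]
        cols.append(col)
        if r.get("KEY_SEQ"):  # non-null KEY_SEQ marks a primary-key column
            pk.append((r["KEY_SEQ"], r["COLUMN_NAME"]))
    pk_cols = ", ".join(name for _, name in sorted(pk))
    return "CREATE TABLE %s.%s (\n  %s,\n  CONSTRAINT pk PRIMARY KEY (%s)\n)" % (
        schema, table, ",\n  ".join(cols), pk_cols)

rows = [
    {"COLUMN_NAME": "ID", "DATA_TYPE": "VARCHAR", "KEY_SEQ": 1},
    {"COLUMN_NAME": "VAL", "DATA_TYPE": "VARCHAR", "COLUMN_SIZE": 32},
]
print(rebuild_ddl("DE", "TABLE1", rows))
```

A complete version would also need to emit table properties, salt buckets, and column families, which is much of why PHOENIX-4286/PHOENIX-5054 remain open.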

On Tue, Apr 14, 2020 at 8:49 AM Geoffrey Jacoby 
wrote:

> This is a frequent feature request we unfortunately haven't implemented
> yet -- see PHOENIX-4286 and PHOENIX-5054, one of which I filed and the
> other one Josh did. :-)
>
> I agree with Josh, I'd love to see an implementation of this if someone
> has bandwidth.
>
> Geoffrey Jacoby
>
> On Tue, Apr 14, 2020 at 8:01 AM Josh Elser  wrote:
>
>> Yeah, I don't have anything handy.
>>
>> I'll happily review and commit such a utility if you happen to write one
>> (even if flawed).
>>
>> On 4/12/20 1:31 AM, Simon Mottram wrote:
>> > Best I can offer is
>> >
>> >   "SELECT * FROM SYSTEM.CATALOG where table_name = '" + tableName + "'
>> > and table_schem = '" + schemaName + "'"
>> >
>> > S
>> > 
>> > *From:* Mich Talebzadeh 
>> > *Sent:* Sunday, 12 April 2020 1:36 AM
>> > *To:* user 
>> > *Subject:* Reverse engineer a Phoenix table definition
>> > Hi,
>> >
>> > I was wondering if anyone has a handy script to reverse engineer an
>> > existing table schema.
>> >
>> > I guess one can get the info from system.catalog table to start with.
>> > However, I was wondering if there is a shell script already or I have
>> to
>> > write my own.
>> >
>> > Thanks,
>> >
>> > Dr Mich Talebzadeh
>> >
>> > LinkedIn
>> > https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
>> >
>> > http://talebzadehmich.wordpress.com
>> >
>> >
>> > *Disclaimer:* Use it at your own risk. Any and all responsibility for
>> any
>> > loss, damage or destruction of data or any other property which may
>> > arise from relying on this email's technical content is explicitly
>> > disclaimed. The author will in no case be liable for any monetary
>> > damages arising from such loss, damage or destruction.
>> >
>>
>
