Re: Query on phoenix upgrade to 5.1.0

2020-01-30 Thread Aleksandr Saraseka
I can't find the tarball on the https://phoenix.apache.org/download.html page.

On Wed, Jan 29, 2020 at 6:55 PM Josh Elser  wrote:

> Aleksandr and Prathap,
>
> Upgrades are done in Phoenix as they always have been. You should deploy
> the new phoenix-server jars to HBase, and then the first time a client
> connects with the Phoenix JDBC driver, that client will trigger an
> upgrade of the system tables' schema.
>
> As such, you need to make sure that this client has permission to alter
> the existing Phoenix system tables, which often requires admin-level
> access to HBase. Your first step should be collecting DEBUG logs from
> your Phoenix JDBC client during the upgrade.
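For reference, collecting DEBUG logs on the thick JDBC client usually means raising the Phoenix logger level in the log4j.properties on the client's classpath. A minimal sketch, assuming the stock log4j 1.x setup the 4.x/5.0 clients use:

# log4j.properties on the Phoenix JDBC client classpath
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c - %m%n
# turn up Phoenix itself so the system-table upgrade path is visible
log4j.logger.org.apache.phoenix=DEBUG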
>
> Please also remember that 5.0.0 is pretty old at this point -- we're
> overdue for a 5.1.0. There may be existing issues around the upgrade
> that have already been fixed. If you've not done so already, searching
> Jira is important.
>
> On 1/29/20 4:30 AM, Aleksandr Saraseka wrote:
> > Hello.
> > I second this.
> > We upgraded Phoenix from 4.14.0 to 5.0.0 (with all the underlying
> > components, like HDFS and HBase) and have the same problem.
> >
> > We are using the query server + thin client.
> > On the PQS side we have:
> > 2020-01-29 09:24:21,579 INFO org.apache.phoenix.util.UpgradeUtil:
> > Upgrading metadata to add parent links for indexes on views
> > 2020-01-29 09:24:21,615 INFO org.apache.phoenix.util.UpgradeUtil:
> > Upgrading metadata to add parent to child links for views
> > 2020-01-29 09:24:21,628 INFO
> > org.apache.hadoop.hbase.client.ConnectionImplementation: Closing master
> > protocol: MasterService
> > 2020-01-29 09:24:21,631 INFO
> > org.apache.phoenix.log.QueryLoggerDisruptor: Shutting down
> > QueryLoggerDisruptor..
> >
> > On the client side:
> > java.lang.RuntimeException:
> > org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03):
> > Table undefined. tableName=SYSTEM.CHILD_LINK
> >
> > Can you point me to an upgrade guide for Phoenix? I tried to find one
> > myself with no luck.
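For context, SYSTEM.CHILD_LINK is one of the Phoenix system tables that the client-driven upgrade creates in these versions. A quick way to check whether it exists is the HBase shell -- a sketch, assuming default table naming (with namespace mapping enabled it would show up as SYSTEM:CHILD_LINK instead):

hbase shell
hbase(main):001:0> list 'SYSTEM.*'
hbase(main):002:0> list_namespace_tables 'SYSTEM'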
> >
> > On Thu, Jan 16, 2020 at 1:08 PM Prathap Rajendran <prathap...@gmail.com> wrote:
> >
> > Hi All,
> >
> > Thanks for the quick update. We still need some clarification about
> > the context.
> >
> > Actually, we are upgrading between the versions below:
> > Source: apache-phoenix-4.14.0-cdh5.14.2
> > Destination: apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz
> > <http://csfci.ih.lucent.com/~prathapr/phoenix62/apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz>
> >
> > Just FYI, we have already upgraded to HBase 2.0.
> >
> > We are still facing the issue below. Once we create this table
> > manually, DML operations run with no issues.
> >> org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
> >
> > Please let me know if there are any steps/documents for the Phoenix
> > upgrade from 4.14 to 5.0.
> >
> > Thanks,
> > Prathap
> >
> >
> > On Tue, Jan 14, 2020 at 11:34 PM Josh Elser <els...@apache.org> wrote:
> >
> > (with VP-Phoenix hat on)
> >
> > This is not an official Apache Phoenix release, nor does it
> > follow the
> > ASF trademarks/branding rules. I'll be following up with the
> > author to
> > address the trademark violations.
> >
> > Please direct your questions to the author of this project.
> > Again, it is
> > *not* Apache Phoenix.
> >
> > On 1/14/20 12:37 PM, Geoffrey Jacoby wrote:
> > Phoenix 5.1 doesn't actually exist yet, at least not at the Apache
> > level. We haven't released it yet. It's possible that a vendor or user
> > has cut an unofficial release off one of our development branches, but
> > that's not something we can give support on. You should contact your
> > vendor.
> >
> > Also, since I see you're upgrading from Phoenix 4.14 to 5.1: The 4.x
> > branch of Phoenix is for HBase 1.x systems, and the 5.x branch is for
> > HBase 2.x systems. If you're upgrading from a 4.x to a 5.x, make sure
> > that you also upgrade your HBase. If you're still on HBase 1.x, we
> > recently release

Re: Query on phoenix upgrade to 5.1.0

2020-01-29 Thread Aleksandr Saraseka
mplementation.java:860)
>> > at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
>> > at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
>> > at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
>> > at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
>> > at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
>> > at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
>> > at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
>> > at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
>> > at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
>> > at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
>> > at org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
>> > at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
>> > at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
>> > at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
>> > at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
>> > at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
>> > at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
>> > at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
>> >
>> > Thanks,
>> > Prathap
>> >
>>
>

-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


Re: Phoenix non-Kerberos security ?

2019-11-04 Thread Aleksandr Saraseka
It's working fine with Kerberos, but we use streaming Spark jobs on a Google
Dataproc cluster, and it seems to have problems making Spark -> Phoenix
JDBC -> HBase work, so I'm trying to find a workaround to keep HBase
unsecured while still having some "protection from mistakes" for the PQS
that users use.
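One way to separate a Spark/Dataproc problem from a thin-client problem is a standalone JDBC smoke test from the same node, since the thin client is plain JDBC over HTTP (Avatica). A minimal sketch; the PQS host name is hypothetical, and the thin-client jar must be on the classpath:

// quick connectivity check through PQS, bypassing Spark entirely
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ThinClientSmokeTest {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.phoenix.queryserver.client.Driver");
        // "pqs-host" is a placeholder for your Query Server address
        String url = "jdbc:phoenix:thin:url=http://pqs-host:8765;serialization=PROTOBUF";
        try (Connection conn = DriverManager.getConnection(url);
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT TABLE_NAME FROM SYSTEM.CATALOG LIMIT 1")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // any row back means PQS is reachable
            }
        }
    }
}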

On Mon, Nov 4, 2019 at 11:02 AM anil gupta  wrote:

> To the best of my knowledge, Phoenix/HBase only supports Kerberos.
> In the past, I have used a secure HBase/Phoenix cluster in web services and
> it worked fine. Kerberos can be integrated with AD. But you might need to
> check whether QueryServer supports security or not. In the worst case, a
> potential workaround would be to put the Phoenix Query Server behind a
> homegrown webservice that authenticates and authorizes the users before
> forwarding the request to QueryServer.
>
> HTH,
> Anil Gupta
>
> On Mon, Nov 4, 2019 at 12:45 AM Aleksandr Saraseka <
> asaras...@eztexting.com> wrote:
>
>> Hello community.
>> Does Phoenix have some kind of security for authentication and
>> authorization other than Kerberos?
>> We allow our users to connect to our cluster through the QueryServer, but
>> at the same time we want to authenticate them and control what kind of
>> access they have (read-only, write-only to some tables) without enabling
>> Kerberos on the HBase/HDFS clusters.
>>
>> --
>> Aleksandr Saraseka
>> DBA
>> 380997600401
>>  *•*  asaras...@eztexting.com  *•*  eztexting.com
>>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>


-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


Phoenix non-Kerberos security ?

2019-11-04 Thread Aleksandr Saraseka
Hello community.
Does Phoenix have some kind of security for authentication and
authorization other than Kerberos?
We allow our users to connect to our cluster through the QueryServer, but at
the same time we want to authenticate them and control what kind of access
they have (read-only, write-only to some tables) without enabling
Kerberos on the HBase/HDFS clusters.

-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


Re: Index on SYSTEM.LOG failed

2019-10-25 Thread Aleksandr Saraseka
Hello.
Indexes on other tables are created without any problems:
0: jdbc:phoenix:thin:url=http://localhost:876> create local index
alex_test_data_idx on alex.test (data);
No rows affected (10.93 seconds)
0: jdbc:phoenix:thin:url=http://localhost:876>

On Thu, Oct 24, 2019 at 8:18 PM Josh Elser  wrote:

> Do you have a mismatch of Phoenix thinclient jars and Phoenix
> QueryServer versions?
>
> You're getting a classpath-type error, not some Phoenix internal error.
>
> On 10/24/19 10:01 AM, Aleksandr Saraseka wrote:
> > Hello. We're logging queries in Phoenix.
> > The main search criterion would be start_time (to investigate possible
> > performance problems in some particular time window).
> > The execution plan for the query shows a full scan - that could cause
> > problems with a lot of data:
> > explain select query from system.LOG order by start_time;
> > +------------------------------------------------------------+-----------------+----------------+--------------+
> > | PLAN                                                       | EST_BYTES_READ  | EST_ROWS_READ  | EST_INFO_TS  |
> > +------------------------------------------------------------+-----------------+----------------+--------------+
> > | CLIENT 32-CHUNK PARALLEL 32-WAY FULL SCAN OVER SYSTEM:LOG  | null            | null           | null         |
> > | CLIENT MERGE SORT                                          | null            | null           | null         |
> > +------------------------------------------------------------+-----------------+----------------+--------------+
> > 2 rows selected
> >
> > So I'm trying to create a local index on the start_time field, but I'm
> > getting an exception. Is it "by design" that you cannot create an index
> > on SYSTEM tables, or do I need to do this in some other way?
> > CREATE LOCAL INDEX "system_log_start_time_idx" ON SYSTEM.LOG ("START_TIME");
> > Error: Error -1 (0) : Error while executing SQL "CREATE LOCAL INDEX
> > "system_log_start_time_idx" ON SYSTEM.LOG ("START_TIME")": Remote driver
> > error: IndexOutOfBoundsException: Index: 0 (state=0,code=-1)
> > org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : Error
> > while executing SQL "CREATE LOCAL INDEX "system_log_start_time_idx" ON
> > SYSTEM.LOG ("START_TIME")": Remote driver error:
> > IndexOutOfBoundsException: Index: 0
> > at org.apache.phoenix.shaded.org.apache.calcite.avatica.Helper.createException(Helper.java:54)
> > at org.apache.phoenix.shaded.org.apache.calcite.avatica.Helper.createException(Helper.java:41)
> > at org.apache.phoenix.shaded.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:163)
> > at org.apache.phoenix.shaded.org.apache.calcite.avatica.AvaticaStatement.execute(AvaticaStatement.java:217)
> > at sqlline.Commands.execute(Commands.java:822)
> > at sqlline.Commands.sql(Commands.java:732)
> > at sqlline.SqlLine.dispatch(SqlLine.java:813)
> > at sqlline.SqlLine.begin(SqlLine.java:686)
> > at sqlline.SqlLine.start(SqlLine.java:398)
> > at sqlline.SqlLine.main(SqlLine.java:291)
> > at org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:93)
> > java.lang.IllegalAccessError: org/apache/phoenix/shaded/org/apache/calcite/avatica/AvaticaSqlException$PrintStreamOrWriter
> > at org.apache.calcite.avatica.AvaticaSqlException.printStackTrace(AvaticaSqlException.java:75)
> > at java.lang.Throwable.printStackTrace(Throwable.java:634)
> > at sqlline.SqlLine.handleSQLException(SqlLine.java:1540)
> > at sqlline.SqlLine.handleException(SqlLine.java:1505)
> > at sqlline.SqlLine.error(SqlLine.java:905)
> > at sqlline.Commands.execute(Commands.java:860)
> > at sqlline.Commands.sql(Commands.java:732)
> > at sqlline.SqlLine.dispatch(SqlLine.java:813)
> > at sqlline.SqlLine.begin(SqlLine.java:686)
> > at sqlline.SqlLine.start(SqlLine.java:398)
> > at sqlline.SqlLine.main(SqlLine.java:291)
> > at org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:93)
> >
> >
> > --
> >   Aleksandr Saraseka
> > DBA
> > 380997600401
> >  *•* asaras...@eztexting.com *•* eztexting.com

Index on SYSTEM.LOG failed

2019-10-24 Thread Aleksandr Saraseka
Hello. We're logging queries in Phoenix.
The main search criterion would be start_time (to investigate possible
performance problems in some particular time window).
The execution plan for the query shows a full scan - that could cause
problems with a lot of data:
explain select query from system.LOG order by start_time;
+------------------------------------------------------------+-----------------+----------------+--------------+
| PLAN                                                       | EST_BYTES_READ  | EST_ROWS_READ  | EST_INFO_TS  |
+------------------------------------------------------------+-----------------+----------------+--------------+
| CLIENT 32-CHUNK PARALLEL 32-WAY FULL SCAN OVER SYSTEM:LOG  | null            | null           | null         |
| CLIENT MERGE SORT                                          | null            | null           | null         |
+------------------------------------------------------------+-----------------+----------------+--------------+
2 rows selected

So I'm trying to create a local index on the start_time field, but I'm
getting an exception. Is it "by design" that you cannot create an index on
SYSTEM tables, or do I need to do this in some other way?
CREATE LOCAL INDEX "system_log_start_time_idx" ON SYSTEM.LOG ("START_TIME");
Error: Error -1 (0) : Error while executing SQL "CREATE LOCAL INDEX
"system_log_start_time_idx" ON SYSTEM.LOG ("START_TIME")": Remote driver
error: IndexOutOfBoundsException: Index: 0 (state=0,code=-1)
org.apache.calcite.avatica.AvaticaSqlException: Error -1 (0) : Error
while executing SQL "CREATE LOCAL INDEX "system_log_start_time_idx" ON
SYSTEM.LOG ("START_TIME")": Remote driver error: IndexOutOfBoundsException:
Index: 0
at org.apache.phoenix.shaded.org.apache.calcite.avatica.Helper.createException(Helper.java:54)
at org.apache.phoenix.shaded.org.apache.calcite.avatica.Helper.createException(Helper.java:41)
at org.apache.phoenix.shaded.org.apache.calcite.avatica.AvaticaStatement.executeInternal(AvaticaStatement.java:163)
at org.apache.phoenix.shaded.org.apache.calcite.avatica.AvaticaStatement.execute(AvaticaStatement.java:217)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
at org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:93)
java.lang.IllegalAccessError: org/apache/phoenix/shaded/org/apache/calcite/avatica/AvaticaSqlException$PrintStreamOrWriter
at org.apache.calcite.avatica.AvaticaSqlException.printStackTrace(AvaticaSqlException.java:75)
at java.lang.Throwable.printStackTrace(Throwable.java:634)
at sqlline.SqlLine.handleSQLException(SqlLine.java:1540)
at sqlline.SqlLine.handleException(SqlLine.java:1505)
at sqlline.SqlLine.error(SqlLine.java:905)
at sqlline.Commands.execute(Commands.java:860)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
at org.apache.phoenix.queryserver.client.SqllineWrapper.main(SqllineWrapper.java:93)


-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


Re: Phoenix index REBUILD stuck in BUILDING state

2019-10-16 Thread Aleksandr Saraseka
Hello Niraj.
You can use the ASYNC option and then rebuild the index with the IndexTool
(MapReduce under the hood):
https://issues.apache.org/jira/browse/PHOENIX-2890
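The flow looks roughly like this -- a sketch, where the table/index names are hypothetical and the exact jar and classpath depend on your distribution (compare the IndexTool invocation quoted later in this digest):

-- in sqlline: create the index definition without building it
CREATE INDEX MY_IDX ON MY_TABLE (MY_COL) ASYNC;

# from a shell: run the MapReduce build
HADOOP_CLASSPATH=`hbase classpath` hadoop jar phoenix-<version>-client.jar \
  org.apache.phoenix.mapreduce.index.IndexTool \
  --data-table MY_TABLE --index-table MY_IDX \
  --output-path hdfs:///tmp/my_idx_build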


Re: PSQ processlist

2019-09-10 Thread Aleksandr Saraseka
Thank you Josh, this is very helpful.
Another question - can we kill a long-running query in PQS somehow?

On Mon, Sep 9, 2019 at 5:09 PM Josh Elser  wrote:

> Not unique to PQS, see:
>
> https://issues.apache.org/jira/browse/PHOENIX-2715
>
> On 9/9/19 9:02 AM, Aleksandr Saraseka wrote:
> > Hello.
> > Does Phoenix Query Server have any way to track running queries? For
> > example, if a user connects with the thin client and runs some
> > long-running query, can I see who is running what?
> >
> > --
> >   Aleksandr Saraseka
> > DBA
> > 380997600401
> >  *•* asaras...@eztexting.com *•* eztexting.com


-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


PSQ processlist

2019-09-09 Thread Aleksandr Saraseka
Hello.
Does Phoenix Query Server have any way to track running queries? For
example, if a user connects with the thin client and runs some long-running
query, can I see who is running what?

-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


Re: Is there any way to using appropriate index automatically?

2019-08-19 Thread Aleksandr Saraseka
We have no problems with that. I mean indexes are used even without hints,
if they're suitable for the query.
Maybe you can share your Phoenix version, query, index definition and
execution plan?

On Mon, Aug 19, 2019 at 12:46 PM you Zhuang 
wrote:

> Yeah, I mean with no hint - the appropriate index should be used
> automatically. I created a local index and a query with the corresponding
> index column filter in the WHERE clause. But the query doesn't use the
> index; with an index hint it does.
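For reference, the hint being compared here is the standard Phoenix index hint -- a sketch with hypothetical table/index/column names:

-- forces consideration of the named index when the optimizer doesn't pick it
SELECT /*+ INDEX(my_table my_local_idx) */ col_a, col_b
FROM my_table
WHERE indexed_col = 'some value';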



-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


Re: Phoenix Index Scrutiny Tool

2019-08-13 Thread Aleksandr Saraseka
Update: yeah, my first example of the output seems to be wrong.
I re-ran my test and now it looks right:

+---------------+---------------------------------------+------------------------+------------------------+
| SOURCE_TABLE  | TARGET_TABLE                          | SCRUTINY_EXECUTE_TIME  | SOURCE_ROW_PK_H        |
+---------------+---------------------------------------+------------------------+------------------------+
| "ALEX"."TEST" | "ALEX"."GLOBAL_DATA1_DATA2_INC_DATA3" | 1565689347373          | 3e3aa2997d0bbf9b4d7247 |
+---------------+---------------------------------------+------------------------+------------------------+

On Tue, Aug 13, 2019 at 12:43 PM Aleksandr Saraseka 
wrote:

> @Vincent - I dropped columns from an index table.
> Thank you for pointing me to "partial rebuild", will try it.
>
> On Mon, Aug 12, 2019 at 8:51 PM Vincent Poon 
> wrote:
>
>> @Aleksandr did you delete rows from the data table, or the index table?
>> The output you're showing says that you have orphaned rows in the index
>> table - i.e. rows that exist only in the index table and have no
>> corresponding row in the data table.  If you deleted the original rows in
>> the data table without deleting the corresponding rows in the index table,
>> and if major compaction has happened (perhaps what you meant by
>> "hard-delete" ?), then in general there's no way to rebuild the index
>> correctly, as there's no source data to work off of.   You might have
>> special cases where you have an index that covers all the data table rows
>> such that you could in theory go backwards, but I don't believe we have any
>> tool to do that yet.
>>
>> The IndexTool does have a "partial rebuild" option that works in
>> conjunction with "ALTER INDEX REBUILD ASYNC" - see PHOENIX-2890.  However
>> this is not well documented, and I haven't personally tried it myself.
>>
>> On Fri, Aug 9, 2019 at 2:18 PM Alexander Batyrshin <0x62...@gmail.com>
>> wrote:
>>
>>> I have a similar question - how to partially rebuild indexes by a
>>> timestamp interval, like many MapReduce tools have (--starttime/--endtime)
>>>
>>> On 9 Aug 2019, at 17:21, Aleksandr Saraseka 
>>> wrote:
>>>
>>> Hello community!
>>> I'm testing the Scrutiny tool to check index consistency.
>>> I hard-deleted a couple of rows of a global index from HBase, then ran the
>>> Scrutiny tool, and it showed me output like:
>>>
>>> SOURCE_TABLE | TARGET_TABLE | SCRUTINY_EXECUTE_TIME | SOURCE_ROW_PK_HASH               | SOURCE_TS     | TARGET_TS | HAS_TARGET_ROW
>>> INDEX_TABLE  | DATA_TABLE   | 1565358267566         | 8a74d1f8286a7ec7ce99b22ee0723ab1 | 1565358171998 | -1        | false
>>> INDEX_TABLE  | DATA_TABLE   | 1565358267566         | a2cfe11952f3701d340069f80e2a82b7 | 1565358135292 | -1        | false
>>>
>>> So, let's imagine that I want to repair my index and don't want to run a
>>> full rebuild (huge table).
>>>
>>> What's the best option?
>>>
>>> Two things came to my mind:
>>>
>>> - Find the row in the data table, and upsert the necessary data into the
>>> index table.
>>>
>>> - Find the row in the data table, export it, drop it, and then insert it
>>> again.
>>>
>>> And the main question - how can I get the row from the data or index table
>>> by the primary key hash?
>>>
>>>
>>> --
>>> Aleksandr Saraseka
>>> DBA
>>> 380997600401
>>>  *•*  asaras...@eztexting.com  *•*  eztexting.com

Re: Phoenix Index Scrutiny Tool

2019-08-13 Thread Aleksandr Saraseka
@Vincent - I dropped columns from an index table.
Thank you for pointing me to the "partial rebuild" option, I will try it.

On Mon, Aug 12, 2019 at 8:51 PM Vincent Poon  wrote:

> @Aleksandr did you delete rows from the data table, or the index table?
> The output you're showing says that you have orphaned rows in the index
> table - i.e. rows that exist only in the index table and have no
> corresponding row in the data table.  If you deleted the original rows in
> the data table without deleting the corresponding rows in the index table,
> and if major compaction has happened (perhaps what you meant by
> "hard-delete" ?), then in general there's no way to rebuild the index
> correctly, as there's no source data to work off of.   You might have
> special cases where you have an index that covers all the data table rows
> such that you could in theory go backwards, but I don't believe we have any
> tool to do that yet.
>
> The IndexTool does have a "partial rebuild" option that works in
> conjunction with "ALTER INDEX REBUILD ASYNC" - see PHOENIX-2890.  However
> this is not well documented, and I haven't personally tried it myself.
>
> On Fri, Aug 9, 2019 at 2:18 PM Alexander Batyrshin <0x62...@gmail.com>
> wrote:
>
>> I have a similar question - how to partially rebuild indexes by a
>> timestamp interval, like many MapReduce tools have (--starttime/--endtime)
>>
>> On 9 Aug 2019, at 17:21, Aleksandr Saraseka 
>> wrote:
>>
>> Hello community!
>> I'm testing the Scrutiny tool to check index consistency.
>> I hard-deleted a couple of rows of a global index from HBase, then ran the
>> Scrutiny tool, and it showed me output like:
>>
>> SOURCE_TABLE | TARGET_TABLE | SCRUTINY_EXECUTE_TIME | SOURCE_ROW_PK_HASH               | SOURCE_TS     | TARGET_TS | HAS_TARGET_ROW
>> INDEX_TABLE  | DATA_TABLE   | 1565358267566         | 8a74d1f8286a7ec7ce99b22ee0723ab1 | 1565358171998 | -1        | false
>> INDEX_TABLE  | DATA_TABLE   | 1565358267566         | a2cfe11952f3701d340069f80e2a82b7 | 1565358135292 | -1        | false
>>
>> So, let's imagine that I want to repair my index and don't want to run a
>> full rebuild (huge table).
>>
>> What's the best option?
>>
>> Two things came to my mind:
>>
>> - Find the row in the data table, and upsert the necessary data into the
>> index table.
>>
>> - Find the row in the data table, export it, drop it, and then insert it
>> again.
>>
>> And the main question - how can I get the row from the data or index table
>> by the primary key hash?
>>
>>
>> --
>> Aleksandr Saraseka
>> DBA
>> 380997600401
>>  *•*  asaras...@eztexting.com  *•*  eztexting.com

-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com
<http://eztexting.com?utm_source=WiseStamp_medium=email_term=_content=_campaign=signature>
<http://facebook.com/eztexting?utm_source=WiseStamp_medium=email_term=_content=_campaign=signature>
<http://linkedin.com/company/eztexting/?utm_source=WiseStamp_medium=email_term=_content=_campaign=signature>
<http://twitter.com/eztexting?utm_source=WiseStamp_medium=email_term=_content=_campaign=signature>
<https://www.youtube.com/eztexting?utm_source=WiseStamp_medium=email_term=_content=_campaign=signature>
<https://www.instagram.com/ez_texting/?utm_source=WiseStamp_medium=email_term=_content=_campaign=signature>
<https://www.facebook.com/alex.saraseka?utm_source=WiseStamp_medium=email_term=_content=_campaign=signature>
<https://www.linkedin.com/in/alexander-saraseka-32616076/?utm_source=WiseStamp_medium=email_term=_content=_campaign=signature>


Phoenix Index Scrutiny Tool

2019-08-09 Thread Aleksandr Saraseka
Hello community!
I'm testing the Scrutiny tool to check index consistency.
I hard-deleted a couple of rows of a global index from HBase, then ran the
Scrutiny tool, and it showed me output like:

SOURCE_TABLE | TARGET_TABLE | SCRUTINY_EXECUTE_TIME | SOURCE_ROW_PK_HASH               | SOURCE_TS     | TARGET_TS | HAS_TARGET_ROW
INDEX_TABLE  | DATA_TABLE   | 1565358267566         | 8a74d1f8286a7ec7ce99b22ee0723ab1 | 1565358171998 | -1        | false
INDEX_TABLE  | DATA_TABLE   | 1565358267566         | a2cfe11952f3701d340069f80e2a82b7 | 1565358135292 | -1        | false

So, let's imagine that I want to repair my index and don't want to run a full
rebuild (huge table).

What's the best option?

Two things came to my mind:

- Find the row in the data table, and upsert the necessary data into the
index table.

- Find the row in the data table, export it, drop it, and then insert it
again.

And the main question - how can I get the row from the data or index table by
the primary key hash?


-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


Re: Phoenix with multiple HBase masters

2019-08-07 Thread Aleksandr Saraseka
I didn't try this personally, but according to the documentation
(https://phoenix.apache.org/server.html) PQS supports ZK-based load
balancing. Please refer to the "Load balancing" section at the bottom.

On Wed, Aug 7, 2019 at 9:14 PM jesse  wrote:

> Thank you all, very helpful information.
>
> 1) For a server-side ELB, what is the PQS health check URL path?
>
> 2) Does the Phoenix client library support client-side load balancing?
> i.e., the client gets the list of PQS addresses from ZK and performs load
> balancing itself, so an ELB won't be needed.
>
>
>
> On Wed, Aug 7, 2019, 9:01 AM Josh Elser  wrote:
>
>> Great answer, Aleksandr!
>>
>> Also worth mentioning there is only ever one active HBase Master at a
>> time. If you have multiple started, one will be active as the master and
>> the rest will be waiting as a standby in case the current active master
>> dies for some reason (expectedly or unexpectedly).
>>
>> On 8/7/19 9:55 AM, Aleksandr Saraseka wrote:
>> > Hello.
>> > - Phoenix libs should be installed only on RegionServers.
>> > - QueryServer - it's up to you, PQS can be installed anywhere you want
>> > - No. QueryServer is using ZK quorum to get everything it needs
>> > - If you need to balance traffic across multiple PQS instances - then
>> > yes, but again, it's up to you. Multiple PQS instances are not required
>> > just because you have multiple HBase masters.
>> >
>> > On Wed, Aug 7, 2019 at 12:58 AM jesse <chat2je...@gmail.com> wrote:
>> >
>> > Our cluster used to have one hbase master, now a secondary is added.
>> > For Phoenix, what changes should we make?
>> >   - do we have to install new hbase libraries on the new hbase
>> > master node?
>> > - do we need to install new query server on the hbase master?
>> > - any configuration changes should we make?
>> > - do we need an ELB for the query server?
>> >
>> > Thanks
>> >
>> >
>> >
>> >
>> >
>> > --
>> >   Aleksandr Saraseka
>> > DBA
>> > 380997600401
>> >  *•* asaras...@eztexting.com *•* eztexting.com
>

-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


Re: Phoenix with multiple HBase masters

2019-08-07 Thread Aleksandr Saraseka
Hello.
- Phoenix libs should be installed only on RegionServers.
- QueryServer - it's up to you, PQS can be installed anywhere you want
- No. QueryServer is using ZK quorum to get everything it needs
- If you need to balance traffic across multiple PQS instances - then yes, but
again, it's up to you. Multiple PQS instances are not required just because
you have multiple HBase masters.

On Wed, Aug 7, 2019 at 12:58 AM jesse  wrote:

> Our cluster used to have one hbase master, now a secondary is added. For
> Phoenix, what changes should we make?
>  - do we have to install new hbase libraries on the new hbase master node?
> - do we need to install new query server on the hbase master?
> - any configuration changes should we make?
> - do we need an ELB for the query server?
>
> Thanks
>
>
>
>

-- 
Aleksandr Saraseka
DBA
380997600401
 *•*  asaras...@eztexting.com  *•*  eztexting.com


Re: Secondary Indexes - Missing Data in Phoenix

2019-07-29 Thread Aleksandr Saraseka
Hello Alex.
Please refer to this JIRA: https://issues.apache.org/jira/browse/PHOENIX-1734 .
Since v4.8, a local index is just a shadow column family within the data table.
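You can see this from the HBase shell: since 4.8 there is no separate physical table for a local index, just extra column families on the data table. A sketch (table name hypothetical; the shadow family is typically named with an "L#" prefix, e.g. L#0):

hbase shell
hbase(main):001:0> describe 'MY_TABLE'
# expect the regular data column families plus a local-index family such as 'L#0'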

On Fri, Jul 26, 2019 at 5:08 AM Alexander Lytchier <
alexanderlytch...@m800.com> wrote:

> Thanks for the reply.
>
> We will attempt to update to Phoenix 4.14.X and re-try adding secondary
> indexes.
>
> Can you help clarify "local indexes are stored in the same table as the
> data"? When a local index is created in Phoenix, I observe that a new table
> is created in HBase, *_LOCAL_IDX_TABLE_NAME*. It was my assumption that
> this is where the columns for the index table are stored, along with the PK
> values. Moreover, using *EXPLAIN* in Phoenix I can see that it will
> attempt to SCAN OVER *_LOCAL_IDX_TABLE_NAME* when my query is using the
> index.
>
>
>
> On 2019/07/25 14:00:25, Josh Elser wrote:
> > Local indexes are stored in the same table as the data. They are "local"
> > to the data.
> >
> > I would not be surprised if you are running into issues because you are
> > using such an old version of Phoenix.
> >
> > On 7/24/19 10:35 PM, Alexander Lytchier wrote:
> > > Hi,
> > >
> > > We are currently using Cloudera as a package manager for our Hadoop
> > > Cluster with Phoenix 4.7.0 (CLABS_PHOENIX) and HBase 1.2.0-cdh5.7.6.
> > > Phoenix 4.7.0 appears to be the latest version supported
> > > (http://archive.cloudera.com/cloudera-labs/phoenix/parcels/latest/) even
> > > though it's old.
> > >
> > > The table in question has a binary row-key: pk BINARY(30): 1 byte for
> > > salting, 8 bytes - timestamp (Long), 20 bytes - hash result of other
> > > record fields, + 1 extra byte for an unknown issue about updating the
> > > schema in the future (not sure if relevant). We are currently facing
> > > performance issues and are attempting to mitigate them by adding
> > > secondary indexes.
> > >
> > > When generating a local index synchronously with the following command:
> > >
> > > CREATE LOCAL INDEX INDEX_TABLE ON "MyTable" ("cf"."type");
> > >
> > > I can see that the resulting index table in Phoenix is populated; in
> > > HBase I can see the row-key of the index table and queries work as
> > > expected:
> > >
> > > \x00\x171545413\x00 column=cf:cf:type, timestamp=1563954319353,
> > > value=1545413
> > > \x00\x00\x00\x01b\xB2s\xDB
> > > @\x1B\x94\xFA\xD4\x14c\x0B
> > > d$\x82\xAD\xE6\xB3\xDF\x06
> > > \xC9\x07@\xB9\xAE\x00
> > >
> > > However, for the case where the index is created asynchronously, and
> > > then populated using the IndexTool, with the following commands:
> > >
> > > CREATE LOCAL INDEX INDEX_TABLE ON "MyTable" ("cf"."type") ASYNC;
> > >
> > > sudo -u hdfs HADOOP_CLASSPATH=`hbase classpath` hadoop jar
> > > /opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hbase/bin/../lib/hbase-client-1.2.0-cdh5.7.1.jar
> > > org.apache.phoenix.mapreduce.index.IndexTool --data-table "MyTable"
> > > --index-table INDEX_TABLE --output-path hdfs://nameservice1/
> > >
> > > I get the following row-key in HBase:
> > >
> > > \x00\x00\x00\x00\x00\x00\x column=cf:cf:type, timestamp=1563954000238,
> > > value=1545413
> > > 00\x00\x00\x00\x00\x00\x00
> > > \x00\x00\x00\x00\x00\x00\x
> > > 00\x00\x00\x00\x00\x00\x00
> > > \x00\x00\x00\x00\x00\x00\x
> > > 151545413\x00\x00\x
> > > 00\x00\x01b\xB2s\xDB@\x1B\
> > > x94\xFA\xD4\x14c\x0Bd$\x82
> > > \xAD\xE6\xB3\xDF\x06\xC9\x
> > > 07@\xB9\xAE\x00
> > >
> > > It has 32 addition

Phoenix backup

2019-06-25 Thread Aleksandr Saraseka
Hello community.
Can you suggest a workflow for doing a full backup of Phoenix tables? We're
using global indexes as well, so I wonder how to back up the data table and
its index tables simultaneously, to avoid a full index rebuild after
restoration.
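Since a Phoenix table and its global indexes are just HBase tables underneath, one common approach is HBase snapshots taken back-to-back -- a sketch, with hypothetical table/snapshot names; note the two snapshots are not a single atomic point in time, so quiesce writes if exact data/index consistency matters:

hbase shell
hbase(main):001:0> snapshot 'MY_TABLE', 'my_table_snap'
hbase(main):002:0> snapshot 'MY_IDX', 'my_idx_snap'
# restore later with restore_snapshot / clone_snapshot, or copy off-cluster:
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot my_table_snap -copy-to hdfs://backup-nn:8020/hbase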

-- 
Aleksandr Saraseka
DBA at EZ Texting
M  380997600401
E  asaras...@eztexting.com
W  http://www.eztexting.com


Re: PQS + Kerberos problems

2019-05-28 Thread Aleksandr Saraseka
Thank you Josh, that helps a lot.
We have the Query Server on a dedicated host, and none of the existing guides
mention that you need a core-site.xml with the hadoop.security.authentication
option set to kerberos.
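A minimal sketch of the relevant core-site.xml pieces; the proxyuser short name must match your PQS principal (assumed here to be HTTP, as in the logs below), and the wildcards should be narrowed in production:

<!-- on the PQS host -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>

<!-- on the HBase/Hadoop side: let PQS impersonate its end users -->
<property>
  <name>hadoop.proxyuser.HTTP.hosts</name>
  <value>*</value>
</property>
<property>
  <name>hadoop.proxyuser.HTTP.groups</name>
  <value>*</value>
</property>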

On Tue, May 28, 2019 at 11:59 PM Josh Elser  wrote:

> Make sure you have authorization set up correctly between PQS and HBase.
>
> Specifically, you must have the appropriate Hadoop proxyuser rules set
> up in core-site.xml so that HBase will allow PQS to impersonate the PQS
> end-user.
>
> On 5/14/19 11:04 AM, Aleksandr Saraseka wrote:
> > Hello, I have HBase + PQS 4.14.1
> > If I connect with the thick client, everything works, but if I'm using
> > the thin client, I can see continuous INFO messages in the PQS logs:
> > 2019-05-14 13:53:58,701 INFO
> > org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception,
> > tries=10, retries=35, started=48292 ms ago, cancelled=false, msg=
> > ...
> > 2019-05-14 14:18:41,446 INFO
> > org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception,
> > tries=33, retries=35, started=510325 ms ago, cancelled=false, msg=
> > 2019-05-14 14:19:01,489 INFO
> > org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception,
> > tries=34, retries=35, started=530368 ms ago, cancelled=false, msg=
> > ...
> > 2019-05-14 14:18:41,446 INFO
> > org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception,
> > tries=33, retries=35, started=510325 ms ago, cancelled=false, msg=
> > 2019-05-14 14:19:01,489 INFO
> > org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception,
> > tries=34, retries=35, started=530368 ms ago, cancelled=false, msg=
> > 2019-05-14 14:19:50,139 INFO
> > org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception,
> > tries=10, retries=35, started=48480 ms ago, cancelled=false, msg=row
> > 'SYSTEM:CATALOG,,' on table 'hbase:meta' at
> > region=hbase:meta,,1.1588230740, hostname=datanode-001.fqdn.com
> > <http://datanode-001.fqdn.com>,60020,1557323271824, seqNum=0
> > 2019-05-14 14:20:10,333 INFO
> > org.apache.hadoop.hbase.client.RpcRetryingCaller: Call exception,
> > tries=11, retries=35, started=68676 ms ago, cancelled=false, msg=row
> > 'SYSTEM:CATALOG,,' on table 'hbase:meta' at
> > region=hbase:meta,,1.1588230740, hostname=datanode-001.fqdn.com
> > <http://datanode-001.fqdn.com>,60020,1557323271824, seqNum=0
> >
> > *Hbase security logs:*
> > 2019-05-14 14:42:19,524 INFO
> > SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for
> > HTTP/phoenix-queryserver-fqdn@realm.com
> > <mailto:phoenix-queryserver-fqdn@realm.com> (auth:KERBEROS)
> > 2019-05-14 14:42:19,524 INFO
> > SecurityLogger.org.apache.hadoop.hbase.Server: Connection from
> > 10.252.16.253 port: 41040 with version info: version: "1.2.0-cdh5.14.2"
> > url:
> >
> "file:///data/jenkins/workspace/generic-binary-tarball-and-maven-deploy/CDH5.14.2-Packaging-HBase-2018-03-27_13-15-05/hbase-1.2.0-cdh5.14.2"
>
> > revision: "Unknown" user: "jenkins" date: "Tue Mar 27 13:31:54 PDT 2018"
> > src_checksum: "05e6e90e06dd7796f56067208a9bf2aa"
> > 2019-05-14 14:42:29,634 INFO
> > SecurityLogger.org.apache.hadoop.hbase.Server: Auth successful for
> > HTTP/phoenix-queryserver-fqdn@realm.com
> > <mailto:phoenix-queryserver-fqdn@realm.com> (auth:KERBEROS)
> > 2019-05-14 14:42:29,635 INFO
> > SecurityLogger.org.apache.hadoop.hbase.Server: Connection from
> > 10.252.16.253 port: 41046 with version info: version: "1.2.0-cdh5.14.2"
> > url:
> >
> "file:///data/jenkins/workspace/generic-binary-tarball-and-maven-deploy/CDH5.14.2-Packaging-HBase-2018-03-27_13-15-05/hbase-1.2.0-cdh5.14.2"
>
> > revision: "Unknown" user: "jenkins" date: "Tue Mar 27 13:31:54 PDT 2018"
> > src_checksum: "05e6e90e06dd7796f56067208a9bf2aa"
> >
> >
> > *thin client logs:*
> > 19/05/14 14:10:08 DEBUG execchain.MainClientExec: Proxy auth state:
> > UNCHALLENGED
> > 19/05/14 14:10:08 DEBUG http.headers: http-outgoing-0 >> POST / HTTP/1.1
> > 19/05/14 14:10:08 DEBUG http.headers: http-outgoing-0 >> Content-Length:
> 137
> > 19/05/14 14:10:08 DEBUG http.headers: http-outgoing-0 >> Content-Type:
> > application/octet-stream
> > 19/05/14 14:10:08 DEBUG http.headers: http-outgoing-0 >> Host:
> > host-fqdn.com:8765 <http://host-fqdn.com:8765>
> > 19/05/14 14:10:08 DEBUG http.headers: http-outgoing-0 >> Connection:
> > Keep-Alive

PQS + Kerberos problems

2019-05-14 Thread Aleksandr Saraseka
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >> "Host:
host-fqdn.com:8765[\r][\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >> "Connection:
Keep-Alive[\r][\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >> "User-Agent:
Apache-HttpClient/4.5.2 (Java/1.8.0_161)[\r][\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >> "Accept-Encoding:
gzip,deflate[\r][\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >> "Authorization:
Negotiate
YIICtQYGKwYBBQUCoIICqTCCAqWgDTALBgkqhkiG9xIBAgKhBAMCAXaiggKMBIICiGCCAoQGCSqGSIb3EgECAgEAboICczCCAm+gAwIBBaEDAgEOogcDBQAgo4IBgGGCAXwwggF4oAMCAQWhEBsOREFUQVNZUy5DRi5XVEaiQDA+oAMCAQChNzA1GwRIVFRQGy1kYXRhc3lzLXNlY3VyZS1odWUwMDEtc3RnLmMuY2Ytc3RhZ2UuaW50ZXJuYWyjggEbMIIBF6ADAgESoQMCAQOiggEJBIIBBTmyywjx8OcQBfIt2AccWAVPKmyf/UM5lDgRUs9cUixE35x6wl9EoIX6XTsw2hO4ESCt1fFrt4XU3iLadKSM2HtR1Zlp/Q4rw/sATQe2DOEQUquVg548kkvZ9c/M1PrigwZnv28Ew7CZd8FWtTB4+PFPMzbkBBXAI2pMAYDAvvUsrUXlgWBTdrg118uN07BApmK3qCB7BAjIJzTONxFmxXAx+PHlXAOiWMGxsi8V421411VCclGFG0txoSkf6aDdhzcZdQZhuHaL73+JfUXvS8j32cadLjhi1CKtAC8ddfG6NfBFciYNNa8khybXrxd/KsWb6TRAgV9EsZKkCNNOMga09Vv/7qSB1TCB0qADAgESooHKBIHHn1c00ERMjywGVzN7mBqq/2Kc+Is+Oxgs+L4t8A5uyx7/m9axITKA5Zr/wC2jLcVGH+CHEbcgb4zc9hCRIM7RS7MTHINGrh5oZAO6KS/LlrlfYYWuZNQEGuyToStLvV6UyuSs8T/DU9+l6+Y4jx5paoGx9/3Fuliy/RKTYWT4opWH9F8aVA4WIPJbLEOp+LNLmMjko8nzoeLEBkl7OLDcUTsbgYVy7WfnKnq05SKxZaLnVQPSTfWp0UoPj40r6qt0tqntxaJJBg==[\r][\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >> "[\r][\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >> "[\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >>
"?org.apache.calcite.avatica.proto.Requests$OpenConnectionRequest[0x12]F[\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >>
"$5de75f3c-d53d-4a53-b78c-4167156a6b67[0x12][0x10][\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >>
"[0x8]password[0x12][0x4]none[0x12][0xc][\n]"
19/05/14 14:10:08 DEBUG http.wire: http-outgoing-0 >>
"[0x4]user[0x12][0x4]none"

*and thin client fails with:*
Tue May 14 14:59:43 UTC 2019, RpcRetryingCaller{globalStartTime=1557845452306, pause=100, retries=35}, org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to data-node001.fqdn.com/ip:60020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to datasys-secure-hbase-data001-stg.c.cf-stage.internal/10.252.20.182:60020 is closing. Call id=69, waitTime=15

at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:157)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:80)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
Caused by: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Call to data-node001.fqdn.com/ip:60020 failed on local exception: org.apache.hadoop.hbase.exceptions.ConnectionClosingException: Connection to data-node001.fqdn.com/ip:60020 is closing. Call id=69, waitTime=15

The firewall is wide open from PQS to all HBase/Hadoop nodes.
Also, can someone provide a sanitized example config for a working PQS with
Kerberos? Maybe I missed something.
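For comparison, a sanitized sketch of the PQS-side Kerberos settings (hbase-site.xml on the PQS host); the principal and keytab path are placeholders, and property names can vary slightly by version:

<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>phoenix.queryserver.keytab.file</name>
  <value>/etc/security/keytabs/pqs.service.keytab</value>
</property>
<property>
  <name>phoenix.queryserver.kerberos.principal</name>
  <value>HTTP/_HOST@EXAMPLE.COM</value>
</property>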

-- 
Aleksandr Saraseka
DBA at EZ Texting
M  380997600401
E  asaras...@eztexting.com
W  http://www.eztexting.com