Aleksandr and Prathap,

Upgrades work in Phoenix as they always have: deploy the new version of the phoenix-server jars to HBase, and the first time a client connects with the Phoenix JDBC driver, that client triggers an upgrade of the system tables' schema.
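For illustration only (the jar name, paths, and ZooKeeper quorum below are assumptions, not taken from this thread), the sequence might look like:

```
# 1. Deploy the new server jar to every Master and RegionServer, then restart HBase
cp phoenix-5.0.0-HBase-2.0-server.jar /opt/hbase/lib/

# 2. The first connection from a matching client triggers the SYSTEM.* schema upgrade
#    (in 5.0 this includes creating new tables such as SYSTEM.CHILD_LINK)
./sqlline.py zk-host:2181:/hbase
```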

As such, you need to make sure that this client has permission to alter the existing Phoenix system tables, which often requires admin-level access to HBase. Your first step should be collecting DEBUG logs from your Phoenix JDBC client during the upgrade.
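As a sketch (the principal 'phoenix-admin' and the exact grant below are assumptions for illustration, not from this thread), granting rights and enabling client DEBUG logging could look like:

```
# In the HBase shell: the connecting user needs full rights on the SYSTEM tables,
# e.g. via a namespace grant ('phoenix-admin' is a placeholder principal)
hbase shell
> grant 'phoenix-admin', 'RWXCA', '@SYSTEM'

# In the client's log4j.properties, before the first connection:
log4j.logger.org.apache.phoenix=DEBUG
```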

Please also remember that 5.0.0 is pretty old at this point -- we're overdue for a 5.1.0. There may be known upgrade issues that have already been fixed. If you haven't already, it's worth searching Jira for them.

On 1/29/20 4:30 AM, Aleksandr Saraseka wrote:
Hello.
I second this.
We upgraded Phoenix from 4.14.0 to 5.0.0 (along with all the underlying components: HDFS, HBase) and have the same problem.

We are using the Query Server + thin client.
On the PQS side we see:
2020-01-29 09:24:21,579 INFO org.apache.phoenix.util.UpgradeUtil: Upgrading metadata to add parent links for indexes on views
2020-01-29 09:24:21,615 INFO org.apache.phoenix.util.UpgradeUtil: Upgrading metadata to add parent to child links for views
2020-01-29 09:24:21,628 INFO org.apache.hadoop.hbase.client.ConnectionImplementation: Closing master protocol: MasterService
2020-01-29 09:24:21,631 INFO org.apache.phoenix.log.QueryLoggerDisruptor: Shutting down QueryLoggerDisruptor..

On client side:
java.lang.RuntimeException: org.apache.phoenix.schema.TableNotFoundException: ERROR 1012 (42M03): Table undefined. tableName=SYSTEM.CHILD_LINK

Can you point me to an upgrade guide for Phoenix? I tried to find one myself with no luck.

On Thu, Jan 16, 2020 at 1:08 PM Prathap Rajendran <prathap...@gmail.com> wrote:

    Hi All,

    Thanks for the quick update. We still need some clarification about
    the context.

    We are actually upgrading between the versions below:
    Source     : apache-phoenix-4.14.0-cdh5.14.2
    Destination: apache-phoenix-5.0.0-HBase-2.0-bin.tar.gz

    Just FYI, we have already upgraded to HBase 2.0.

    We are still facing the issue below. Once we create this table
    manually, DML operations run without issues.
        org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK

    Please let me know if there are any steps/documents for the Phoenix
    upgrade from 4.14 to 5.0.

    Thanks,
    Prathap


    On Tue, Jan 14, 2020 at 11:34 PM Josh Elser <els...@apache.org> wrote:

        (with VP-Phoenix hat on)

        This is not an official Apache Phoenix release, nor does it follow
        the ASF trademarks/branding rules. I'll be following up with the
        author to address the trademark violations.

        Please direct your questions to the author of this project. Again,
        it is *not* Apache Phoenix.

        On 1/14/20 12:37 PM, Geoffrey Jacoby wrote:
         > Phoenix 5.1 doesn't actually exist yet, at least not at the Apache
         > level. We haven't released it yet. It's possible that a vendor or
         > user has cut an unofficial release off one of our development
         > branches, but that's not something we can give support on. You
         > should contact your vendor.
         >
         > Also, since I see you're upgrading from Phoenix 4.14 to 5.1: The
         > 4.x branch of Phoenix is for HBase 1.x systems, and the 5.x branch
         > is for HBase 2.x systems. If you're upgrading from a 4.x to a 5.x,
         > make sure that you also upgrade your HBase. If you're still on
         > HBase 1.x, we recently released Phoenix 4.15, which does have a
         > supported upgrade path from 4.14 (and a very similar set of
         > features to what 5.1 will eventually get).
         >
         > Geoffrey
         >
         > On Tue, Jan 14, 2020 at 5:23 AM Prathap Rajendran
         > <prathap...@gmail.com> wrote:
         >
         >     Hello All,
         >
         >     We are trying to upgrade the Phoenix version from
         >     "apache-phoenix-4.14.0-cdh5.14.2" to "APACHE_PHOENIX-5.1.0-cdh6.1.0".
         >
         >     I couldn't find any upgrade steps for the same. Please help me
         >     out with any documents available.
         >
         >     *_Note:_*
         >     I have downloaded the below Phoenix parcel and am trying to run
         >     some DML operations. I am getting the following error:
         >
         >     https://github.com/dmilan77/cloudera-phoenix/releases/download/5.1.0-HBase-2.0-cdh6.1.0/APACHE_PHOENIX-5.1.0-cdh6.1.0.p1.0-el7.parcel
         >
         >     *_Error:_*
         >     20/01/13 04:22:41 WARN client.HTable: Error calling coprocessor service
         >     org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService
         >     for row \x00\x00WEB_STAT
         >     java.util.concurrent.ExecutionException:
         >     org.apache.hadoop.hbase.TableNotFoundException:
         >     org.apache.hadoop.hbase.TableNotFoundException: SYSTEM.CHILD_LINK
         >         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegionInMeta(ConnectionImplementation.java:860)
         >         at org.apache.hadoop.hbase.client.ConnectionImplementation.locateRegion(ConnectionImplementation.java:755)
         >         at org.apache.hadoop.hbase.client.ConnectionUtils$ShortCircuitingClusterConnection.locateRegion(ConnectionUtils.java:137)
         >         at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:326)
         >         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:153)
         >         at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:58)
         >         at org.apache.hadoop.hbase.client.RpcRetryingCallerImpl.callWithoutRetries(RpcRetryingCallerImpl.java:192)
         >         at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:267)
         >         at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:435)
         >         at org.apache.hadoop.hbase.client.ClientScanner.nextWithSyncCache(ClientScanner.java:310)
         >         at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:595)
         >         at org.apache.phoenix.coprocessor.ViewFinder.findRelatedViews(ViewFinder.java:94)
         >         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.dropChildViews(MetaDataEndpointImpl.java:2488)
         >         at org.apache.phoenix.coprocessor.MetaDataEndpointImpl.createTable(MetaDataEndpointImpl.java:2083)
         >         at org.apache.phoenix.coprocessor.generated.MetaDataProtos$MetaDataService.callMethod(MetaDataProtos.java:17053)
         >         at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:8218)
         >         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execServiceOnRegion(RSRpcServices.java:2423)
         >         at org.apache.hadoop.hbase.regionserver.RSRpcServices.execService(RSRpcServices.java:2405)
         >         at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:42010)
         >
         >     Thanks,
         >     Prathap
         >



--
Aleksandr Saraseka
DBA
380997600401 *•* asaras...@eztexting.com *•* eztexting.com

