Thanks, Istvan.
Is it possible to connect to PQS the same way we would use a PySpark context to connect to MySQL, i.e. as a generic JDBC source? Two scenarios motivate this: 1. we are unable to expose the ZK port; 2. we do not need to read Phoenix in parallel, just read the table as one input to the PySpark context.

On 2023/12/23 21:10:24 Istvan Toth wrote:
> You can't.
> The thin client can only be used as a generic JDBC data source in Spark.
>
> The point of the connector is improving performance by spreading out the
> query with the Spark/MR integration, but the thin client only talks to the
> PQS server and cannot access the cluster otherwise.
>
> Istvan
>
>
> On Fri, Dec 22, 2023 at 4:58 AM luoc <l...@apache.org> wrote:
>
> > Hi all,
> >
> > How can I use pyspark to connect to PQS with sqlContext?
> >
> > // fat client
> > df = sqlContext.read \
> >     .format("org.apache.phoenix.spark") \
> >     .option("table", "TABLE1") \
> >     .option("zkUrl", "localhost:2181") \
> >     .load()
> >
> > How can I do this using the thin client?
>
> --
> *István Tóth* | Sr. Staff Software Engineer
> *Email*: st...@cloudera.com
> cloudera.com <https://www.cloudera.com>
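For the "generic JDBC source" approach Istvan mentions, a minimal sketch might look like the following. It assumes the Phoenix thin-client jar is on the Spark driver/executor classpath, and the PQS host, port, and table name (`localhost`, `8765`, `TABLE1`) are placeholders, not values from the thread:

```python
# Sketch: reading a Phoenix table through PQS via Spark's generic JDBC
# reader, using the Phoenix thin (Avatica) driver. Host/port/table are
# assumptions for illustration.

def pqs_jdbc_options(pqs_host, pqs_port, table):
    """Build option dict for Spark's generic JDBC reader against PQS."""
    return {
        # Thin-client JDBC URL pointing at the Query Server's HTTP endpoint
        "url": f"jdbc:phoenix:thin:url=http://{pqs_host}:{pqs_port};"
               f"serialization=PROTOBUF",
        # Driver class shipped in the phoenix-queryserver-client jar
        "driver": "org.apache.phoenix.queryserver.client.Driver",
        "dbtable": table,
    }

# With a live PQS, the read itself would be:
# df = (sqlContext.read
#         .format("jdbc")
#         .options(**pqs_jdbc_options("localhost", 8765, "TABLE1"))
#         .load())
```

Note this reads through a single PQS endpoint, so there is no parallel scan of the underlying HBase regions — which matches scenario 2 above, where parallelism is not needed.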