harishchanderramesh commented on issue #1936:
URL: https://github.com/apache/hudi/issues/1936#issuecomment-671019129


   Yes, this happens all the time.
   I am unable to run read queries on the Hudi tables.
   Is there a better way to read?
   Am I doing it wrong?
   
   ```
   from pyspark import SparkConf
   from pyspark.sql import SparkSession

   # Build a single SparkSession with Hive support. HiveContext is
   # deprecated, and instantiating SparkContext() separately means the
   # SparkConf passed to the builder below is never applied.
   sc_conf = SparkConf()
   spark = (
       SparkSession.builder
       .config(conf=sc_conf)
       .enableHiveSupport()
       .getOrCreate()
   )

   sql = "select count(*) from endpoints_rt"
   result_df = spark.sql(sql)
   result_df.show()
   
   ```
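   As an alternative to going through the Hive metastore, a Hudi table can be snapshot-read directly from its base path with the Spark datasource API. This is only a sketch: `hudi_table_path` is a hypothetical placeholder for the table's actual storage location, the Hudi Spark bundle jar must be on the classpath, and on older Hudi releases the format name is `org.apache.hudi` rather than the `hudi` alias.

   ```python
   from pyspark.sql import SparkSession

   spark = SparkSession.builder.getOrCreate()

   # Hypothetical base path of the Hudi table; replace with the real one.
   hudi_table_path = "s3://my-bucket/path/to/endpoints_rt"

   # Snapshot query: load the latest committed view of the table
   # straight from storage, bypassing the Hive metastore.
   df = spark.read.format("hudi").load(hudi_table_path)

   # Register a temp view so the same SQL count still works.
   df.createOrReplaceTempView("endpoints_rt_snapshot")
   spark.sql("select count(*) from endpoints_rt_snapshot").show()
   ```

   Reading by path like this rules out metastore sync problems as the cause, which can help narrow down where the read queries are failing.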


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

