From what I understand, the session is a singleton, so even if you think you are creating new instances, you are just reusing the same one.
---- On Wed, 29 Jan 2020 02:24:05 -1100 icbm0...@gmail.com wrote ----

Dear all,

I already have a Python function that queries data from HBase and HDFS with given parameters. The function returns a pyspark DataFrame and the SparkContext it used. With the client's increasing demands, I need to merge data from multiple queries. I tested merging the pyspark DataFrames returned by different function calls directly with the "union" function, and it worked. This surprised me, since pyspark apparently unioned DataFrames from different SparkSessions. I am using pyspark 2.3.1 and Python 3.5.

I wonder whether this is good practice, or whether I should use the same SparkSession for all the queries?

Best regards