This is an automated email from the ASF dual-hosted git repository.

jackylk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
     new c89c5ef  [CARBONDATA-3886] Use qualified table name for global sort compaction
c89c5ef is described below

commit c89c5ef42238648ac9f4adfbd53bd89c6c9198da
Author: ajantha-bhat <ajanthab...@gmail.com>
AuthorDate: Fri Jul 3 09:57:46 2020 +0530

    [CARBONDATA-3886] Use qualified table name for global sort compaction
    
    Why is this PR needed?
    Global sort compaction does not use the database name when creating the dataframe.
    Spark therefore sometimes resolves the table against the default database when it
    cannot determine which database the table belongs to.
    
    What changes were proposed in this PR?
    Use the qualified table name (database name + table name) while creating the dataframe.
    
    Does this PR introduce any user interface change?
    No
    
    Is any new testcase added?
    No
    
    This closes #3821
---
 .../spark/src/main/scala/org/apache/spark/sql/util/SparkSQLUtil.scala | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/integration/spark/src/main/scala/org/apache/spark/sql/util/SparkSQLUtil.scala b/integration/spark/src/main/scala/org/apache/spark/sql/util/SparkSQLUtil.scala
index 8bf3483..2101090 100644
--- a/integration/spark/src/main/scala/org/apache/spark/sql/util/SparkSQLUtil.scala
+++ b/integration/spark/src/main/scala/org/apache/spark/sql/util/SparkSQLUtil.scala
@@ -165,9 +165,9 @@ object SparkSQLUtil {
      * datatype of column data and corresponding datatype in schema provided to create dataframe.
      * Since carbonScanRDD gives Long data for timestamp column and corresponding column datatype in
      * schema is Timestamp, this validation fails if we use createDataFrame API which takes rdd as
-     * input. Hence, using below API which creates dataframe from tablename.
+     * input. Hence, using below API which creates dataframe from qualified tablename.
      */
-    sparkSession.sqlContext.table(carbonTable.getTableName)
+    sparkSession.sqlContext.table(carbonTable.getDatabaseName + "." + carbonTable.getTableName)
   }
 
   def setOutputMetrics(outputMetrics: OutputMetrics, dataLoadMetrics: DataLoadMetrics): Unit = {
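The behavior this patch fixes can be sketched outside CarbonData as well: in plain Spark SQL, `sqlContext.table` with a bare table name resolves against the session's current database, while a `db.table` qualified name is unambiguous. A minimal illustrative sketch (the database/table names and session setup below are assumptions for illustration, not from the patch):

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

object QualifiedTableLookup {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; Hive support enables multiple databases.
    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("qualified-table-lookup")
      .enableHiveSupport()
      .getOrCreate()

    spark.sql("CREATE DATABASE IF NOT EXISTS sales")
    spark.sql("CREATE TABLE IF NOT EXISTS sales.orders (id INT) USING parquet")

    // Bare name: resolved against the session's current database ("default" here),
    // so this would fail unless an `orders` table also exists in `default`:
    // spark.sqlContext.table("orders")

    // Qualified name: unambiguous regardless of the current database, mirroring
    // the patch's getDatabaseName + "." + getTableName construction.
    val df: DataFrame = spark.sqlContext.table("sales.orders")
    df.printSchema()

    spark.stop()
  }
}
```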
