This is an automated email from the ASF dual-hosted git repository.

jackylk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/carbondata.git


The following commit(s) were added to refs/heads/master by this push:
     new c8cd7e0  [CARBONDATA-3600]Fix the cleanup failure issue if user fails to access table
c8cd7e0 is described below

commit c8cd7e06a46e8287f1043f197465e772f840bab1
Author: akashrn5 <akashnilu...@gmail.com>
AuthorDate: Mon Dec 30 18:54:12 2019 +0530

    [CARBONDATA-3600]Fix the cleanup failure issue if user fails to access table
    
    Why is this PR needed?
    
    The code that cleans up stale datamap folders during session
    initialization causes a problem: if some other user tries to access
    the datamap table, we might get a permission exception.
    
    What changes were proposed in this PR?
    
    We need to clean up only when the table does not exist in the Hive
    metastore but its schema still exists. The existence check can throw
    an exception when the table exists but belongs to another user; in
    that case there is no need to proceed, so we just catch the exception
    and skip the cleanup. Those changes are proposed here.
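
    A minimal, self-contained sketch of the guarded existence check
    described above (the dropStaleSchema callback is a hypothetical
    stand-in for the DataMapStoreManager cleanup, and the public Catalog
    API is used here in place of the internal sessionState.catalog that
    the actual patch calls):

        import org.apache.spark.sql.SparkSession

        object StaleSchemaCleanup {
          def cleanUpIfStale(spark: SparkSession, db: String, table: String)
                            (dropStaleSchema: () => Unit): Unit = {
            val tableExists =
              try {
                // Ask the metastore whether the table is still present.
                spark.catalog.tableExists(db, table)
              } catch {
                // The check itself may fail, e.g. with a permission error
                // when the table belongs to another user; skip cleanup then.
                case ex: Exception =>
                  println(s"Error while checking the table existence: ${ex.getMessage}")
                  return
              }
            // Clean up only when the table is truly absent from the
            // metastore but its stale schema remains.
            if (!tableExists) {
              dropStaleSchema()
            }
          }
        }

    Returning early on a failed check (instead of treating the table as
    absent) is the key design choice: it keeps the cleanup path from
    deleting folders of a table that does exist but is owned by another
    user.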
    
    Does this PR introduce any user interface change?
    
    No
    
    Is any new testcase added?
    
    No (not required; the fix was verified in a cluster and addresses a cleanup issue)
    
    This closes #3548
---
 .../main/scala/org/apache/spark/sql/CarbonEnv.scala    | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
index 0a97f96..571008f 100644
--- a/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
+++ b/integration/spark2/src/main/scala/org/apache/spark/sql/CarbonEnv.scala
@@ -135,17 +135,25 @@ class CarbonEnv {
       dataMapSchema =>
         if (null != dataMapSchema.getRelationIdentifier &&
             !dataMapSchema.isIndexDataMap) {
-          if (!sparkSession.sessionState
-            .catalog
-            .tableExists(TableIdentifier(dataMapSchema.getRelationIdentifier.getTableName,
-              Some(dataMapSchema.getRelationIdentifier.getDatabaseName)))) {
+          val isTableExists = try {
+            sparkSession.sessionState
+              .catalog
+              .tableExists(TableIdentifier(dataMapSchema.getRelationIdentifier.getTableName,
+                Some(dataMapSchema.getRelationIdentifier.getDatabaseName)))
+          } catch {
+            // we need to take care of cleanup when the table does not exists, if table exists and
+            // some other user tries to access the table, it might fail, that time no need to handle
+            case ex: Exception =>
+              LOGGER.error("Error while checking the table existence", ex)
+              return
+          }
+          if (!isTableExists) {
            try {
              DataMapStoreManager.getInstance().dropDataMapSchema(dataMapSchema.getDataMapName)
             } catch {
               case e: IOException =>
                 throw e
             } finally {
-              DataMapStoreManager.getInstance.unRegisterDataMapCatalog(dataMapSchema)
               if (FileFactory.isFileExist(dataMapSchema.getRelationIdentifier.getTablePath)) {
                 CarbonUtil.deleteFoldersAndFilesSilent(FileFactory.getCarbonFile(dataMapSchema
                   .getRelationIdentifier
