Kingsley Chen created CARBONDATA-1168:
-----------------------------------------

             Summary: Driver Delete data operation is failed due to failure in creating delete delta file for segment
                 Key: CARBONDATA-1168
                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1168
             Project: CarbonData
          Issue Type: Bug
          Components: sql
    Affects Versions: 1.1.0
         Environment: spark1.6+carbon1.1.0
a 20-node cluster with 32 GB RAM per node
            Reporter: Kingsley Chen
             Fix For: NONE


We use the following Spark code to delete data from the table:
------------------spark code----------------------
val deleteSql = s"DELETE FROM $tableName WHERE $rowkeyName IN (${rowKeyVals.mkString(",")})"
cc.sql(deleteSql).show()
------------------spark code----------------------

When the rowKeyVals array holds more than 200 keys, the delete operation fails and prints the following log:
Delete data request has been received for default.item
Delete data operation is failed for default.item
Driver Delete data operation is failed due to failure in creating delete delta file for segment : null block : null
++
||
++
++

That is to say, deletes only succeed in batches of at most 200 keys, and each batch takes about 1 minute, which is too slow. My question is how to tune performance so that larger batches are possible and deletes run faster.
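
For reference, a minimal sketch of the client-side batching we fall back to, assuming the observed ~200-key limit; the batchSize value of 200 is an assumption based on the behaviour above, not a documented CarbonData setting:

------------------spark code----------------------
// Minimal sketch, assuming the observed ~200-key limit:
// split rowKeyVals into chunks and issue one DELETE per chunk.
val batchSize = 200  // assumption: empirically observed limit, not a documented setting
rowKeyVals.grouped(batchSize).foreach { batch =>
  val deleteSql = s"DELETE FROM $tableName WHERE $rowkeyName IN (${batch.mkString(",")})"
  cc.sql(deleteSql).show()
}
------------------spark code----------------------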

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)