[jira] [Commented] (HUDI-2214) residual temporary files after clustering are not cleaned up

2021-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387502#comment-17387502
 ] 

ASF GitHub Bot commented on HUDI-2214:
--

satishkotha merged pull request #3335:
URL: https://github.com/apache/hudi/pull/3335


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@hudi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> residual temporary files after clustering are not cleaned up
> ------------------------------------------------------------
>
> Key: HUDI-2214
> URL: https://issues.apache.org/jira/browse/HUDI-2214
> Project: Apache Hudi
> Issue Type: Bug
> Components: Cleaner
> Affects Versions: 0.8.0
> Environment: spark3.1.1, hadoop3.1.1
> Reporter: tao meng
> Assignee: tao meng
> Priority: Major
> Labels: pull-request-available
> Fix For: 0.10.0
>
> Residual temporary files after clustering are not cleaned up.
>
> Test steps:
>
> Step 1: run an inline clustering job.
>
> val records1 = recordsToStrings(dataGen.generateInserts("001", 1000)).toList
> val inputDF1: Dataset[Row] = spark.read.json(spark.sparkContext.parallelize(records1, 2))
> inputDF1.write.format("org.apache.hudi")
>   .options(commonOpts)
>   .option(DataSourceWriteOptions.OPERATION_OPT_KEY.key(), DataSourceWriteOptions.BULK_INSERT_OPERATION_OPT_VAL)
>   .option(DataSourceWriteOptions.TABLE_TYPE_OPT_KEY.key(), DataSourceWriteOptions.MOR_TABLE_TYPE_OPT_VAL)
>   // options for clustering
>   .option("hoodie.parquet.small.file.limit", "0")
>   .option("hoodie.clustering.inline", "true")
>   .option("hoodie.clustering.inline.max.commits", "1")
>   .option("hoodie.clustering.plan.strategy.small.file.limit", "629145600")
>   .option("hoodie.clustering.plan.strategy.max.bytes.per.group", Long.MaxValue.toString)
>   .option("hoodie.clustering.plan.strategy.target.file.max.bytes", String.valueOf(12 * 1024 * 1024L))
>   .option("hoodie.clustering.plan.strategy.sort.columns", "begin_lat, begin_lon")
>   .mode(SaveMode.Overwrite)
>   .save(basePath)
>
> Step 2: check the temp dir. /tmp/junit1835474867260509758/dataset/.hoodie/.temp/ is not empty:
> /tmp/junit1835474867260509758/dataset/.hoodie/.temp/20210723171208
> is not cleaned up.
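The step 2 check can be reproduced against a mock layout; the instant timestamp below is taken from the report, but the base path is made up for illustration (the real junit temp dir name varies per run):

```shell
# Simulate the residual per-instant directory the report describes,
# then run the same listing used in step 2. On an affected version,
# the instant directory remains after the clustering commit instead
# of being deleted.
base=$(mktemp -d)
mkdir -p "$base/dataset/.hoodie/.temp/20210723171208"

# A fully cleaned-up table would print nothing here:
ls "$base/dataset/.hoodie/.temp/"
```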



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HUDI-2214) residual temporary files after clustering are not cleaned up

2021-07-26 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387403#comment-17387403
 ] 

ASF GitHub Bot commented on HUDI-2214:
--

xiarixiaoyao commented on pull request #3335:
URL: https://github.com/apache/hudi/pull/3335#issuecomment-886772703


   @garyli1019 thanks. @satishkotha could you please help me review this PR?




[jira] [Commented] (HUDI-2214) residual temporary files after clustering are not cleaned up

2021-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387034#comment-17387034
 ] 

ASF GitHub Bot commented on HUDI-2214:
--

garyli1019 commented on pull request #3335:
URL: https://github.com/apache/hudi/pull/3335#issuecomment-886352624


   > @garyli1019 could you help me to review this pr, thanks
   
   @xiarixiaoyao Thanks for your contribution. I am not quite familiar with the 
clustering code. Might need help from @satishkotha 




[jira] [Commented] (HUDI-2214) residual temporary files after clustering are not cleaned up

2021-07-25 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17386989#comment-17386989
 ] 

ASF GitHub Bot commented on HUDI-2214:
--

xiarixiaoyao commented on pull request #3335:
URL: https://github.com/apache/hudi/pull/3335#issuecomment-886308101


   @garyli1019 could you help me review this PR? Thanks.




[jira] [Commented] (HUDI-2214) residual temporary files after clustering are not cleaned up

2021-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17386152#comment-17386152
 ] 

ASF GitHub Bot commented on HUDI-2214:
--

hudi-bot edited a comment on pull request #3335:
URL: https://github.com/apache/hudi/pull/3335#issuecomment-885523651


   
   ## CI report:
   
   * 9bebeaf2c723810d6c6d5df00e4d6f36b4f478e4 Azure: 
[FAILURE](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1129)
 
   
   
   Bot commands: @hudi-bot supports the following commands:
   
- `@hudi-bot run travis` re-run the last Travis build
- `@hudi-bot run azure` re-run the last Azure build
   




[jira] [Commented] (HUDI-2214) residual temporary files after clustering are not cleaned up

2021-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17386122#comment-17386122
 ] 

ASF GitHub Bot commented on HUDI-2214:
--

hudi-bot edited a comment on pull request #3335:
URL: https://github.com/apache/hudi/pull/3335#issuecomment-885523651


   
   ## CI report:
   
   * 9bebeaf2c723810d6c6d5df00e4d6f36b4f478e4 Azure: 
[PENDING](https://dev.azure.com/apache-hudi-ci-org/785b6ef4-2f42-4a89-8f0e-5f0d7039a0cc/_build/results?buildId=1129)
 
   
   
   Bot commands: @hudi-bot supports the following commands:
   
- `@hudi-bot run travis` re-run the last Travis build
- `@hudi-bot run azure` re-run the last Azure build
   




[jira] [Commented] (HUDI-2214) residual temporary files after clustering are not cleaned up

2021-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17386121#comment-17386121
 ] 

ASF GitHub Bot commented on HUDI-2214:
--

hudi-bot commented on pull request #3335:
URL: https://github.com/apache/hudi/pull/3335#issuecomment-885523651


   
   ## CI report:
   
   * 9bebeaf2c723810d6c6d5df00e4d6f36b4f478e4 UNKNOWN
   
   
   Bot commands: @hudi-bot supports the following commands:
   
- `@hudi-bot run travis` re-run the last Travis build
- `@hudi-bot run azure` re-run the last Azure build
   




[jira] [Commented] (HUDI-2214) residual temporary files after clustering are not cleaned up

2021-07-23 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/HUDI-2214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17386116#comment-17386116
 ] 

ASF GitHub Bot commented on HUDI-2214:
--

xiarixiaoyao opened a new pull request #3335:
URL: https://github.com/apache/hudi/pull/3335


   
   ## What is the purpose of the pull request
   
   Ensure that residual temporary files left behind after clustering are cleaned up.
   
   The reproduction steps are in the issue description above.

   
   ## Brief change log
   
 - Clean up the residual temporary files under `.hoodie/.temp/` after clustering completes
   
   ## Verify this pull request
   
   A unit test was added.
   
   ## Committer checklist
   
 - [ ] Has a corresponding JIRA in PR title & commit
 - [ ] Commit message is descriptive of the change
 - [ ] CI is green
 - [ ] Necessary doc changes done or have another open PR
 - [ ] For large changes, please consider breaking it into sub-tasks under an umbrella JIRA.
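The missing cleanup boils down to deleting the per-instant directory under `.hoodie/.temp/` once the clustering instant completes. The following is an illustrative, self-contained sketch of that behavior; the layout mirrors the paths in the report, but the class and method names are hypothetical, not Hudi's actual classes and not the code in this PR:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Sketch only: simulates the per-instant temp layout
// (<basePath>/.hoodie/.temp/<instantTime>) and the recursive delete a
// writer should perform after the clustering instant completes -- the
// step this issue reports as missing.
public class TempDirCleanupSketch {

    static Path instantTempDir(Path basePath, String instantTime) {
        return basePath.resolve(".hoodie").resolve(".temp").resolve(instantTime);
    }

    /** Deletes .hoodie/.temp/<instantTime> recursively; returns true if it existed. */
    static boolean cleanInstantTempDir(Path basePath, String instantTime) throws IOException {
        Path dir = instantTempDir(basePath, instantTime);
        if (!Files.exists(dir)) {
            return false;
        }
        try (Stream<Path> walk = Files.walk(dir)) {
            // Sort deepest-first so children are deleted before their parents.
            List<Path> paths = walk.sorted(Comparator.reverseOrder()).collect(Collectors.toList());
            for (Path p : paths) {
                Files.delete(p);
            }
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path base = Files.createTempDirectory("hudi-sketch");
        Path dir = instantTempDir(base, "20210723171208");
        Files.createDirectories(dir);
        Files.createFile(dir.resolve("marker-file"));

        System.out.println(cleanInstantTempDir(base, "20210723171208")); // true
        System.out.println(Files.exists(dir));                           // false
    }
}
```

The deepest-first sort matters because `Files.delete` refuses to remove a non-empty directory.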

