[ https://issues.apache.org/jira/browse/HUDI-1248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17341436#comment-17341436 ]

Vinoth Chandar edited comment on HUDI-1248 at 5/9/21, 12:55 AM:
----------------------------------------------------------------

At least 2 occurrences of this failure so far:
{code:java}
8034 [ScalaTest-main-running-HoodieSparkSqlWriterSuite] INFO  
org.spark_project.jetty.server.handler.ContextHandler  - Started 
o.s.j.s.ServletContextHandler@4f622d2b{/metrics/json,null,AVAILABLE,@Spark}
- throw hoodie exception when there already exist a table with different name 
with Append Save mode
- test bulk insert dataset with datasource impl
- test insert dataset without precombine field
=====[ 2249 seconds still running ]=====
- test bulk insert dataset with datasource impl multiple rounds
- test basic HoodieSparkSqlWriter functionality with datasource insert for 
COPY_ON_WRITE
- test basic HoodieSparkSqlWriter functionality with datasource insert for 
MERGE_ON_READ
- test HoodieSparkSqlWriter functionality with datasource bootstrap for 
COPY_ON_WRITE
- test HoodieSparkSqlWriter functionality with datasource bootstrap for 
MERGE_ON_READ
- test schema evolution for COPY_ON_WRITE *** FAILED ***
  org.apache.spark.sql.AnalysisException: Intersect can only be performed on 
tables with the same number of columns, but the first table has 4 columns and 
the second table has 3 columns;;
'Intersect false
:- LogicalRDD [_row_key#1025, partition#1026, ts#1027L, new_field#1028], false
+- Project [_row_key#1047, partition#1048, ts#1049L]
   +- Project [_hoodie_file_name#1046, _row_key#1047, partition#1048, ts#1049L]
      +- Project [_hoodie_partition_path#1045, _hoodie_file_name#1046, 
_row_key#1047, partition#1048, ts#1049L]
         +- Project [_hoodie_record_key#1044, _hoodie_partition_path#1045, 
_hoodie_file_name#1046, _row_key#1047, partition#1048, ts#1049L]
            +- Project [_hoodie_commit_seqno#1043, _hoodie_record_key#1044, 
_hoodie_partition_path#1045, _hoodie_file_name#1046, _row_key#1047, 
partition#1048, ts#1049L]
               +- 
Relation[_hoodie_commit_time#1042,_hoodie_commit_seqno#1043,_hoodie_record_key#1044,_hoodie_partition_path#1045,_hoodie_file_name#1046,_row_key#1047,partition#1048,ts#1049L]
 parquet
  at 
org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.failAnalysis(CheckAnalysis.scala:43)
  at 
org.apache.spark.sql.catalyst.analysis.Analyzer.failAnalysis(Analyzer.scala:95)
  at 
org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$12.apply(CheckAnalysis.scala:283)
  at 
org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1$$anonfun$apply$12.apply(CheckAnalysis.scala:280)
  at scala.collection.immutable.List.foreach(List.scala:392)
  at 
org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:280)
  at 
org.apache.spark.sql.catalyst.analysis.CheckAnalysis$$anonfun$checkAnalysis$1.apply(CheckAnalysis.scala:86)
  at org.apache.spark.sql.catalyst.trees.TreeNode.foreachUp(TreeNode.scala:127)
  at 
org.apache.spark.sql.catalyst.analysis.CheckAnalysis$class.checkAnalysis(CheckAnalysis.scala:86)
  at 
org.apache.spark.sql.catalyst.analysis.Analyzer.checkAnalysis(Analyzer.scala:95)
 {code}
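For context on the failure above: Spark's analyzer rejects an Intersect whose two inputs have a different number of output columns, which is exactly what happens here when the written frame carries the evolved column new_field (4 columns) but the read-back frame is projected down to 3. A minimal standalone sketch of that arity check, in plain Python with hypothetical names (this is an illustration of the rule, not Spark's actual CheckAnalysis code):

```python
def check_intersect(left_cols, right_cols):
    """Mimics the analyzer rule: set operations require both inputs
    to produce the same number of columns."""
    if len(left_cols) != len(right_cols):
        raise ValueError(
            "Intersect can only be performed on tables with the same "
            "number of columns, but the first table has "
            f"{len(left_cols)} columns and the second table has "
            f"{len(right_cols)} columns")
    return True

# The failing test's shape: the written frame has the evolved schema,
# the read-back frame was projected to the original three columns.
written = ["_row_key", "partition", "ts", "new_field"]
read_back = ["_row_key", "partition", "ts"]

try:
    check_intersect(written, read_back)
except ValueError as e:
    print(e)
```

The usual fix on the test side is to select a common column list (or add the missing column) on both frames before calling intersect.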



> [UMBRELLA] Tests cleanup and fixes
> ----------------------------------
>
>                 Key: HUDI-1248
>                 URL: https://issues.apache.org/jira/browse/HUDI-1248
>             Project: Apache Hudi
>          Issue Type: Improvement
>          Components: Testing
>            Reporter: sivabalan narayanan
>            Priority: Blocker
>              Labels: hudi-umbrellas
>             Fix For: 0.9.0
>
>
> There are quite a few tickets that require fixes to tests. Creating this 
> umbrella ticket to track all of these efforts.
>  
> https://issues.apache.org/jira/browse/HUDI-1055 remove .parquet from tests.
> https://issues.apache.org/jira/browse/HUDI-1033 ITTestRepairsCommand and TestRepairsCommand
> https://issues.apache.org/jira/browse/HUDI-1010 memory leak.
> https://issues.apache.org/jira/browse/HUDI-997 memory leak
> https://issues.apache.org/jira/browse/HUDI-664: Adjust Logging levels to reduce verbose log msgs in hudi-client
> https://issues.apache.org/jira/browse/HUDI-623: Remove UpgradePayloadFromUberToApache
> https://issues.apache.org/jira/browse/HUDI-541: Replace variables/comments named "data files" with "base file"
> https://issues.apache.org/jira/browse/HUDI-347: Fix TestHoodieClientOnCopyOnWriteStorage tests with modular private methods
> https://issues.apache.org/jira/browse/HUDI-323: Docker demo/integ-test stdout/stderr output only available on process exit
> https://issues.apache.org/jira/browse/HUDI-284: Need tests for Hudi handling of schema evolution
> https://issues.apache.org/jira/browse/HUDI-154: Enable rollback case in HoodieRealtimeRecordReaderTest.testReader
> https://issues.apache.org/jira/browse/HUDI-1143 timestamp micros.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
