[ 
https://issues.apache.org/jira/browse/HUDI-2259?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17397809#comment-17397809
 ] 

ASF GitHub Bot commented on HUDI-2259:
--------------------------------------

dongkelun commented on pull request #3380:
URL: https://github.com/apache/hudi/pull/3380#issuecomment-897338826


   > > > Good contribution! @dongkelun , can you run the tests for spark3? Use 
the following:
   > > > > mvn clean install -DskipTests -Pspark3
   > > > > mvn test -Punit-tests -Pspark3 -pl hudi-spark-datasource/hudi-spark
   > > 
   > > 
   > > @pengzhiwei2018 Hi, when I ran the tests with the above command, the 
result was 'Tests: succeeded 66, failed 4, canceled 0, ignored 0, pending 0'. I 
don't know whether all tests are required to pass. I ran the tests on the master 
branch, and the result was 'Tests: succeeded 66, failed 3, canceled 0, ignored 
0, pending 0'. Is there something wrong with my configured environment?
   > 
   > Hi @dongkelun , except for the ORC test, which fails on spark3, the other 
tests should pass.
   > 
   > I found your test case has failed on spark3.
   > 
   > > Test MergeInto For Source Table With ColumnAliases *** FAILED ***
   > 
   > Can you fix this for spark3?
   
   Hi @pengzhiwei2018 , OK, I'll try my best.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


> [SQL] Support referencing a subquery with column aliases by table alias in 
> merge into
> ----------------------------------------------------------------------------------
>
>                 Key: HUDI-2259
>                 URL: https://issues.apache.org/jira/browse/HUDI-2259
>             Project: Apache Hudi
>          Issue Type: Improvement
>          Components: Spark Integration
>            Reporter: 董可伦
>            Assignee: 董可伦
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 0.9.0
>
>
>  
>  Example:
> {code:java}
> val tableName = "test_hudi_table"
> spark.sql(
>   s"""
>      |create table ${tableName} (
>      |  id int,
>      |  name string,
>      |  price double,
>      |  ts long
>      |) using hudi
>      |options (
>      |  primaryKey = 'id',
>      |  type = 'cow'
>      |)
>      |location '/tmp/${tableName}'
>      |""".stripMargin)
> spark.sql(
>   s"""
>      |merge into $tableName as t0
>      |using (
>      |  select 1, 'a1', 12, 1003
>      |) s0 (id, name, price, ts)
>      |on s0.id = t0.id
>      |when matched and id != 1 then update set *
>      |when matched and s0.id = 1 then delete
>      |when not matched then insert *
>      |""".stripMargin)
> {code}
> It will throw an exception:
> {code:java}
> Exception in thread "main" org.apache.spark.sql.AnalysisException: Cannot 
> resolve 's0.id in (`s0.id` = `t0.id`), the input columns is: id#4, name#5, 
> price#6, ts#7, _hoodie_commit_time#8, _hoodie_commit_seqno#9, 
> _hoodie_record_key#10, _hoodie_partition_path#11, _hoodie_file_name#12, 
> id#13, name#14, price#15, ts#16L;
>  at org.apache.spark.sql.hudi.analysis.HoodieResolveReferences.org$apache$spark$sql$hudi$analysis$HoodieResolveReferences$$resolveExpressionFrom(HoodieAnalysis.scala:292)
>  at org.apache.spark.sql.hudi.analysis.HoodieResolveReferences$$anonfun$apply$1.applyOrElse(HoodieAnalysis.scala:160)
>  at org.apache.spark.sql.hudi.analysis.HoodieResolveReferences$$anonfun$apply$1.applyOrElse(HoodieAnalysis.scala:103)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1$$anonfun$apply$1.apply(AnalysisHelper.scala:90)
>  at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:69)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:89)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$$anonfun$resolveOperatorsUp$1.apply(AnalysisHelper.scala:86)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$.allowInvokingTransformsInAnalyzer(AnalysisHelper.scala:194)
>  at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper$class.resolveOperatorsUp(AnalysisHelper.scala:86)
>  at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.resolveOperatorsUp(LogicalPlan.scala:29)
> {code}
>  
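> A possible workaround until the resolver handles column-alias lists on 
> subqueries (a sketch only; the alternative spelling below is an assumption, 
> not verified against this Hudi version) is to name the columns in the 
> subquery's select list rather than in an alias list after the table alias:
> {code:java}
> spark.sql(
>   s"""
>      |merge into $tableName as t0
>      |using (
>      |  select 1 as id, 'a1' as name, 12 as price, 1003 as ts
>      |) s0
>      |on s0.id = t0.id
>      |when matched and id != 1 then update set *
>      |when matched and s0.id = 1 then delete
>      |when not matched then insert *
>      |""".stripMargin)
> {code}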



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
