[
https://issues.apache.org/jira/browse/SPARK-48562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17853926#comment-17853926
]
Junqing Li edited comment on SPARK-48562 at 6/11/24 7:10 AM:
-------------------------------------------------------------
[~cloud_fan] Maybe we need to move *ApplyCharTypePadding* from the *Analyzer* to
the *Planner* to solve this bug.
*ApplyCharTypePadding* is currently defined in the *Analyzer* layer, where it
conflicts with the other resolution rules in the {*}Analyzer{*}: those rules
rewrite nodes one-to-one, while *ApplyCharTypePadding* instead adds a new node
above the relation.
Therefore, we should consider refactoring the *ApplyCharTypePadding* rule and
moving it to the *Planner* layer. This would avoid the inconsistent behavior in
the Analyzer layer without affecting other logic.
Correct me if I'm wrong, or is there a better way to solve this problem?
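To illustrate the mismatch described above, here is a rough, self-contained Scala sketch. It uses simplified stand-in types (`Plan`, `Relation`, `Project`), not the actual Catalyst `LogicalPlan`/`Rule` APIs, to show how a padding rule that introduces a new Project node behaves differently from a one-to-one resolution rule:

```scala
// Minimal model (NOT the Catalyst API): a one-to-one resolution rule maps a
// node to a resolved copy of itself, but a padding rule wraps the relation in
// a brand-new Project node, changing the shape of the plan.
sealed trait Plan
case class Relation(cols: Seq[String]) extends Plan
case class Project(exprs: Seq[String], child: Plan) extends Plan

// Stand-in for ApplyCharTypePadding: if a CHAR-like column exists (modelled
// here by a "char_" name prefix), add a Project with padding expressions
// ABOVE the relation instead of modifying the relation in place.
def applyCharTypePadding(p: Plan): Plan = p match {
  case r @ Relation(cols) if cols.exists(_.startsWith("char_")) =>
    Project(cols.map(c => s"rpad($c)"), r) // new node added above the relation
  case other => other
}

val viewPlan = applyCharTypePadding(Relation(Seq("char_id")))
// viewPlan is now Project(..., Relation(...)), not a bare Relation.
```

Because the extra Project is baked into the saved view plan, a later lookup of the view sees a Project over the relation rather than the relation itself, which is what the insert path then trips over.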
> Writing to JDBC Temporary View Failed
> -------------------------------------
>
> Key: SPARK-48562
> URL: https://issues.apache.org/jira/browse/SPARK-48562
> Project: Spark
> Issue Type: Bug
> Components: SQL
> Affects Versions: 3.4.2, 3.4.0, 3.4.1, 3.5.0, 4.0.0, 3.5.1, 3.4.3
> Reporter: Junqing Li
> Priority: Major
>
> When creating a JDBC temporary view, *ApplyCharTypePadding* adds a
> Project above the LogicalRelation if a CHAR/VARCHAR column exists, and Spark
> saves that plan as the view plan. If we then try to write to this view, Spark
> wraps the view plan in an *InsertIntoStatement* in *ResolveRelations*, which
> then fails {*}PreWriteCheck{*}.
> Adding the following test to *JDBCTableCatalogSuite* reproduces the problem.
> {code:scala}
> test("test writing temporary jdbc view") {
>   withConnection { conn =>
>     conn.prepareStatement("""CREATE TABLE "test"."to_drop" (id CHAR)""").executeUpdate()
>   }
>   sql(
>     s"""
>       CREATE TEMPORARY TABLE jdbcTable
>       USING jdbc
>       OPTIONS (
>         url='$url',
>         dbtable='"test"."to_drop"');""")
>   sql("INSERT INTO jdbcTable VALUES (1), (2)")
>   sql("SELECT * FROM test.to_drop").show()
>   withConnection { conn =>
>     conn.prepareStatement("""DROP TABLE "test"."to_drop"""").executeUpdate()
>   }
> }
> {code}
>
> Running it produces the following error.
> {code:java}
> [UNSUPPORTED_INSERT.RDD_BASED] Can't insert into the target. An RDD-based
> table is not allowed. SQLSTATE: 42809;
> 'InsertIntoStatement Project [staticinvoke(class
> org.apache.spark.sql.catalyst.util.CharVarcharCodegenUtils, StringType,
> readSidePadding, ID#0, 1, true, false, true) AS ID#1], false, false, false
> +- LocalRelation [col1#3] {code}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)