XorSum opened a new issue, #6581:
URL: https://github.com/apache/kyuubi/issues/6581

   ### Code of Conduct
   
   - [X] I agree to follow this project's [Code of 
Conduct](https://www.apache.org/foundation/policies/conduct)
   
   
   ### Search before asking
   
   - [X] I have searched in the 
[issues](https://github.com/apache/kyuubi/issues?q=is%3Aissue) and found no 
similar issues.
   
   
   ### Describe the bug
   
   The current zorder implementation does not support column names containing special characters.
   
   If we create a table with special characters in a column name and then optimize it with zorder, the command throws an `AnalysisException`.
   
   ```sql 
   CREATE TABLE up (c1 INT, `@c2` INT, c3 INT);
   OPTIMIZE up ZORDER BY c1, `@c2`;
   ```
   
   ```
   Column '```@c2```' does not exist. Did you mean one of the following? 
[spark_catalog.default.up.c1, spark_catalog.default.up.c3, 
spark_catalog.default.up.@c2]; line 1 pos 0;
   'OptimizeZorderStatement [up]
   +- 'Sort [zorder(c1#15, '`@c2`) ASC NULLS LAST], true
      +- Project [c1#15, @c2#16, c3#17]
         +- SubqueryAlias spark_catalog.default.up
            +- HiveTableRelation [`default`.`up`, 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [c1#15, @c2#16, 
c3#17], Partition Cols: []]
   
   org.apache.spark.sql.AnalysisException: Column '```@c2```' does not exist. 
Did you mean one of the following? [spark_catalog.default.up.c1, 
spark_catalog.default.up.c3, spark_catalog.default.up.@c2]; line 1 pos 0;
   'OptimizeZorderStatement [up]
   +- 'Sort [zorder(c1#15, '`@c2`) ASC NULLS LAST], true
      +- Project [c1#15, @c2#16, c3#17]
         +- SubqueryAlias spark_catalog.default.up
            +- HiveTableRelation [`default`.`up`, 
org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, Data Cols: [c1#15, @c2#16, 
c3#17], Partition Cols: []]
   
        at 
org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:54)
        at 
org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$7(CheckAnalysis.scala:200)
        at 
org.apache.spark.sql.catalyst.analysis.CheckAnalysis.$anonfun$checkAnalysis$7$adapted(CheckAnalysis.scala:193)
   ```
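   The error text (`'```@c2```'`) suggests the backtick-quoted identifier is handed to column resolution verbatim, backticks included, instead of being unquoted first. A minimal sketch (a hypothetical standalone helper, not actual Kyuubi or Spark code) of what such unquoting looks like, assuming Spark SQL's convention that a backtick inside a quoted identifier is escaped by doubling it:
   
   ```python
   def unquote_identifier(name: str) -> str:
       """Strip surrounding backticks and unescape doubled backticks.

       Illustrative helper only; follows Spark SQL's identifier-quoting
       convention (`` `a``b` `` denotes the column name `a`b`).
       """
       if len(name) >= 2 and name.startswith("`") and name.endswith("`"):
           return name[1:-1].replace("``", "`")
       return name

   print(unquote_identifier("`@c2`"))  # -> @c2
   print(unquote_identifier("c1"))     # -> c1
   ```
   
   With unquoting applied before attribute resolution, `` `@c2` `` would match the actual column `@c2` listed in the relation's data columns.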
   
   
   ### Affects Version(s)
   
   1.10.0-SNAPSHOT
   
   ### Kyuubi Server Log Output
   
   _No response_
   
   ### Kyuubi Engine Log Output
   
   _No response_
   
   ### Kyuubi Server Configurations
   
   _No response_
   
   ### Kyuubi Engine Configurations
   
   _No response_
   
   ### Additional context
   
   _No response_
   
   ### Are you willing to submit PR?
   
   - [X] Yes. I would be willing to submit a PR with guidance from the Kyuubi 
community to fix.
   - [ ] No. I cannot submit a PR at this time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: notifications-unsubscr...@kyuubi.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

