StefanXiepj commented on a change in pull request #30734:
URL: https://github.com/apache/spark/pull/30734#discussion_r543429928
##########
File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/HiveOrcSourceSuite.scala
##########
@@ -327,4 +327,18 @@ class HiveOrcSourceSuite extends OrcSuite with TestHiveSingleton {
val df =
readResourceOrcFile("test-data/TestStringDictionary.testRowIndex.orc")
assert(df.where("str < 'row 001000'").count() === 1000)
}
+
+  test("SPARK-33755: Allow creating orc table when row format separator is defined") {
+ withTable("row_format_orc") {
+ sql(
+ s"""CREATE TABLE row_format_orc(
+ | intField INT,
+ | stringField STRING
+ |)
+ |ROW FORMAT DELIMITED FIELDS TERMINATED BY '002'
Review comment:
Sorry for my late reply. I found this problem while migrating Hive tasks to Spark. Hive supports this syntax (it's not ideal, but it's harmless, since the delimiter is simply ignored). So I fixed it in Spark 2.4. Although ORC doesn't need this delimiter, I don't think we need to be so strict about the syntax; accepting it makes migrating tasks from Hive to Spark more convenient.
I will close this PR and resubmit a new PR based on Spark 2.4: https://github.com/apache/spark/pull/30785
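
For context, here is a minimal sketch of the kind of DDL at issue, mirroring the test in this diff (the `STORED AS ORC` clause is an assumption based on the test's intent, since the snippet above is truncated). Hive accepts a `ROW FORMAT DELIMITED` clause on an ORC table and simply ignores the delimiter, while Spark rejects the statement:

```sql
-- Accepted by Hive (the field delimiter is ignored for ORC storage);
-- rejected by Spark before this change.
-- The STORED AS clause below is an illustrative assumption.
CREATE TABLE row_format_orc (
  intField INT,
  stringField STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '002'
STORED AS ORC;
```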
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]