huaxingao opened a new pull request, #1770:
URL: https://github.com/apache/datafusion-comet/pull/1770

   ## Which issue does this PR close?
   
   <!--
   We generally require a GitHub issue to be filed for all bug fixes and 
enhancements and this helps us generate change logs for our releases. You can 
link an issue to this PR using the GitHub syntax. For example `Closes #123` 
indicates that this PR will close issue #123.
   -->
   
   Closes #.
   
   ## Rationale for this change
   
   Support type widening in Spark 4.0. This PR updates the code so that the following Spark test passes:
   
https://github.com/apache/spark/blob/master/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/ReadSchemaTest.scala#L340
   ```
   /**
    * Change a column type (Case 4).
    * This suite assumes that a user gives a wider schema intentionally.
    */
   trait IntegralTypeTest extends ReadSchemaTest {
   
     import testImplicits._
   
     private lazy val values = 1 to 10
     private lazy val byteDF = values.map(_.toByte).toDF("col1")
     private lazy val shortDF = values.map(_.toShort).toDF("col1")
     private lazy val intDF = values.toDF("col1")
     private lazy val longDF = values.map(_.toLong).toDF("col1")
   
     test("change column type from byte to short/int/long") {
       withTempPath { dir =>
         val path = dir.getCanonicalPath
   
         byteDF.write.format(format).options(options).save(path)
   
         Seq(
           ("col1 short", shortDF),
           ("col1 int", intDF),
           ("col1 long", longDF)).foreach { case (schema, answerDF) =>
           
checkAnswer(spark.read.schema(schema).format(format).options(options).load(path),
 answerDF)
         }
       }
     }
   
     test("change column type from short to int/long") {
       withTempPath { dir =>
         val path = dir.getCanonicalPath
   
         shortDF.write.format(format).options(options).save(path)
   
          Seq(("col1 int", intDF), ("col1 long", longDF)).foreach { case (schema, answerDF) =>
            checkAnswer(
              spark.read.schema(schema).format(format).options(options).load(path),
              answerDF)
         }
       }
     }
   
     test("change column type from int to long") {
       withTempPath { dir =>
         val path = dir.getCanonicalPath
   
         intDF.write.format(format).options(options).save(path)
   
          Seq(("col1 long", longDF)).foreach { case (schema, answerDF) =>
            checkAnswer(
              spark.read.schema(schema).format(format).options(options).load(path),
              answerDF)
         }
       }
     }
   
     test("read byte, int, short, long together") {
       withTempPath { dir =>
         val path = dir.getCanonicalPath
   
          val byteDF = (Byte.MaxValue - 2 to Byte.MaxValue).map(_.toByte).toDF("col1")
          val shortDF = (Short.MaxValue - 2 to Short.MaxValue).map(_.toShort).toDF("col1")
         val intDF = (Int.MaxValue - 2 to Int.MaxValue).toDF("col1")
         val longDF = (Long.MaxValue - 2 to Long.MaxValue).toDF("col1")
         val unionDF = byteDF.union(shortDF).union(intDF).union(longDF)
   
         val byteDir = s"$path${File.separator}part=byte"
         val shortDir = s"$path${File.separator}part=short"
         val intDir = s"$path${File.separator}part=int"
         val longDir = s"$path${File.separator}part=long"
   
         byteDF.write.format(format).options(options).save(byteDir)
         shortDF.write.format(format).options(options).save(shortDir)
         intDF.write.format(format).options(options).save(intDir)
         longDF.write.format(format).options(options).save(longDir)
   
         val df = spark.read
           .schema(unionDF.schema)
           .format(format)
           .options(options)
           .load(path)
           .select("col1")
   
         checkAnswer(df, unionDF)
       }
     }
   }
   ```
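   
   For context, the read pattern the suite above exercises can be reproduced with a small standalone snippet like the one below (an illustrative sketch, not code from this PR): data is written with a narrow integral type and read back with a wider user-supplied schema, so the scan must widen the stored values. The Parquet format, path, and session setup are assumptions made for the example.
   
    ```
    // Illustrative sketch only (not part of this PR). Assumes a local Spark
    // session and Parquet as the underlying format.
    import org.apache.spark.sql.SparkSession

    object TypeWideningSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("type-widening-sketch")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._

        val path = "/tmp/type-widening-demo" // hypothetical path

        // Write the column with the narrow ByteType.
        (1 to 10).map(_.toByte).toDF("col1").write.mode("overwrite").parquet(path)

        // Read it back with a wider, user-supplied LongType schema; with type
        // widening supported, the stored byte values come back as longs.
        val widened = spark.read.schema("col1 long").parquet(path)
        widened.show()

        spark.stop()
      }
    }
    ```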
   
   ## What changes are included in this PR?
   
   <!--
   There is no need to duplicate the description in the issue here but it is 
sometimes worth providing a summary of the individual changes in this PR.
   -->
   
   ## How are these changes tested?
   
   <!--
   We typically require tests for all PRs in order to:
   1. Prevent the code from being accidentally broken by subsequent changes
   2. Serve as another way to document the expected behavior of the code
   
   If tests are not included in your PR, please explain why (for example, are 
they covered by existing tests)?
   -->
   

