viirya commented on a change in pull request #30412:
URL: https://github.com/apache/spark/pull/30412#discussion_r530824544



##########
File path: docs/sql-ref-datatypes.md
##########
@@ -37,6 +37,8 @@ Spark SQL and DataFrames support the following data types:
   - `DecimalType`: Represents arbitrary-precision signed decimal numbers. 
Backed internally by `java.math.BigDecimal`. A `BigDecimal` consists of an 
arbitrary precision integer unscaled value and a 32-bit integer scale.
 * String type
   - `StringType`: Represents character string values.
+  - `VarcharType(length)`: A variant of `StringType` which has a length 
limitation. Data writing will fail if the input string exceeds the length 
limitation. Note: this type can only be used in table schema, not 
functions/operators.
+  - `CharType(length)`: A variant of `VarcharType(length)` which is fixed 
length. Reading column of type `VarcharType(n)` always returns string values of 
length `n`. Char type column comparison will pad the short one to the longer 
length.

Review comment:
Reading column of type `VarcharType(n)` -> Reading column of type `CharType(n)`?
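For what it's worth, the padding-on-comparison rule described in the quoted doc change can be sketched in plain Python (this is only an illustration of the rule, not Spark code; `char_eq` is a hypothetical helper):

```python
# Sketch of char-type comparison semantics: before comparing, the
# shorter value is right-padded with spaces to the longer value's length.
def char_eq(a: str, b: str) -> bool:
    n = max(len(a), len(b))
    return a.ljust(n) == b.ljust(n)

# 'ab' stored in a CHAR(3) column reads back as 'ab '; comparison
# against the unpadded literal 'ab' still succeeds after padding.
assert char_eq("ab ", "ab")
assert not char_eq("ab", "abc")
```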




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


