Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20163#discussion_r161363004
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/execution/python/EvaluatePython.scala ---
    @@ -144,6 +145,7 @@ object EvaluatePython {
         }
     
         case StringType => (obj: Any) => nullSafeConvert(obj) {
    +      case _: Calendar => null
           case _ => UTF8String.fromString(obj.toString)
    --- End diff ---
    
    So, for now, I think it's fine as a small fix as-is. We are going to document that the return type and the return value should match anyway.
    
    So, the expected return values will follow this type mapping:
    
    ```python
    # Mapping Python types to Spark SQL DataType
    _type_mappings = {
        type(None): NullType,
        bool: BooleanType,
        int: LongType,
        float: DoubleType,
        str: StringType,
        bytearray: BinaryType,
        decimal.Decimal: DecimalType,
        datetime.date: DateType,
        datetime.datetime: TimestampType,
        datetime.time: TimestampType,
    }
    ```
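
    For example (a rough, untested sketch), the mismatch case this fix covers would look like the following, assuming the pickled `datetime.datetime` arrives as a `java.util.Calendar` on the JVM side:

    ```python
    # Rough sketch (untested): a UDF declared as StringType() whose actual
    # return value is a datetime.datetime. With `case _: Calendar => null`,
    # the mismatched value becomes null instead of a Calendar toString().
    import datetime

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.getOrCreate()

    # Declared StringType, but actually returns a datetime.datetime.
    mismatched = udf(lambda: datetime.datetime(2018, 1, 1), StringType())

    spark.range(1).select(mismatched().alias("value")).show()
    # Expected with this fix: null, rather than a Calendar string.
    ```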
    
    It seems we can also check whether the string conversion looks reasonable, and blacklist `net.razorvine.pickle.objects.Time` if it does not.
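
    To be concrete, the case that would hit that class is a `datetime.time` returned from a `StringType` UDF, assuming Pyrolite unpickles `datetime.time` into `net.razorvine.pickle.objects.Time` on the JVM side (again, just a rough sketch):

    ```python
    # Rough sketch (untested): datetime.time returned from a StringType UDF.
    # Assuming it unpickles to net.razorvine.pickle.objects.Time on the JVM
    # side, the StringType branch currently falls through to Time.toString().
    import datetime

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.getOrCreate()

    time_udf = udf(lambda: datetime.time(12, 34, 56), StringType())
    spark.range(1).select(time_udf().alias("value")).show()
    # If the resulting string does not look reasonable, we could map
    # net.razorvine.pickle.objects.Time to null as well, like Calendar.
    ```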
    
    How does this sound to you @cloud-fan and @rednaxelafx?

