GitHub user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20214#discussion_r161254906
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/Dataset.scala ---
    @@ -237,13 +237,17 @@ class Dataset[T] private[sql](
       private[sql] def showString(
        _numRows: Int, truncate: Int = 20, vertical: Boolean = false): String = {
         val numRows = _numRows.max(0).min(Int.MaxValue - 1)
    -    val takeResult = toDF().take(numRows + 1)
    +    val newDf = toDF()
    +    val castExprs = newDf.schema.map { f => f.dataType match {
    +      // Binary types in top-level schema fields have a specific format to print,
    +      // so we do not cast them to strings here.
    +      case BinaryType => s"`${f.name}`"
    --- End diff ---
    
    Can we use the DataFrame API instead? It looks more reliable here:
    ```
    newDf.logicalPlan.output.map { col =>
      if (col.dataType == BinaryType) {
        // Binary columns keep their dedicated display format.
        Column(col)
      } else {
        Column(col).cast(StringType)
      }
    }
    ```
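    
    For context, a minimal sketch of how this could plug into `showString` (the `castCols` name and the `select`/`take` wiring are illustrative assumptions, building on the diff above, not code from the PR):
    ```
    // Sketch only: assumes this runs inside Dataset.showString, where newDf,
    // numRows, Column, BinaryType and StringType are all in scope.
    val castCols = newDf.logicalPlan.output.map { col =>
      if (col.dataType == BinaryType) {
        Column(col) // binary fields keep their dedicated print format
      } else {
        Column(col).cast(StringType)
      }
    }
    // Select the cast columns, then take one extra row to detect truncation.
    val takeResult = newDf.select(castCols: _*).take(numRows + 1)
    ```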

