GitHub user BryanCutler commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20531#discussion_r167026621
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1676,7 +1676,7 @@ Using the above optimizations with Arrow will produce the same results as when A
     enabled. Note that even with Arrow, `toPandas()` results in the collection of all records in the
     DataFrame to the driver program and should be done on a small subset of the data. Not all Spark
     data types are currently supported and an error can be raised if a column has an unsupported type,
    -see [Supported Types](#supported-sql-arrow-types). If an error occurs during `createDataFrame()`,
    +see [Supported SQL Types](#supported-sql-arrow-types). If an error occurs during `createDataFrame()`,
    --- End diff --
    
    Nice catch!


---

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
