Github user holdenk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20678#discussion_r171111018
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1689,6 +1689,10 @@ using the call `toPandas()` and when creating a Spark DataFrame from a Pandas Da
     `createDataFrame(pandas_df)`. To use Arrow when executing these calls, users need to first set
     the Spark configuration 'spark.sql.execution.arrow.enabled' to 'true'. This is disabled by default.
     
    +In addition, optimizations enabled by 'spark.sql.execution.arrow.enabled' will fall back automatically
    +to non-optimized implementations if an error occurs. This can be controlled by
    --- End diff --
    
    So we need to be clear that we only fall back if an error occurs during schema parsing, not on any error.
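    
    To illustrate the distinction, here is a minimal, hypothetical sketch of the fallback pattern being described: only a schema-conversion error triggers the non-optimized path, while any other error still propagates. The names (`UnsupportedSchemaError`, `to_pandas_arrow`, `to_pandas_plain`) are illustrative stand-ins, not Spark's actual internals.
    
    ```python
    class UnsupportedSchemaError(Exception):
        """Stand-in for a schema that has no Arrow-compatible representation."""
    
    def to_pandas_arrow(rows, schema):
        # Stand-in for the Arrow-optimized conversion path.
        if "map" in schema:  # pretend map types are unsupported by Arrow here
            raise UnsupportedSchemaError(schema)
        return list(rows)
    
    def to_pandas_plain(rows, schema):
        # Stand-in for the slower, non-optimized conversion path.
        return list(rows)
    
    def to_pandas(rows, schema):
        try:
            return to_pandas_arrow(rows, schema)
        except UnsupportedSchemaError:
            # Fallback is limited to schema errors, per the comment above;
            # runtime errors inside the Arrow path would still surface.
            return to_pandas_plain(rows, schema)
    ```
    
    With this shape, `to_pandas([1, 2], "map<string,int>")` quietly uses the plain path, while an unrelated exception raised inside `to_pandas_arrow` would not be swallowed.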


---
