Github user icexelloss commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21427#discussion_r196457433
  
    --- Diff: python/pyspark/worker.py ---
    @@ -110,9 +116,20 @@ def wrapped(key_series, value_series):
                     "Number of columns of the returned pandas.DataFrame "
                     "doesn't match specified schema. "
                     "Expected: {} Actual: {}".format(len(return_type), 
len(result.columns)))
    -        arrow_return_types = (to_arrow_type(field.dataType) for field in 
return_type)
    -        return [(result[result.columns[i]], arrow_type)
    -                for i, arrow_type in enumerate(arrow_return_types)]
    +
    +        if not assign_cols_by_pos:
    +            try:
    +                # Assign result columns by schema name
    +                return [(result[field.name], to_arrow_type(field.dataType))
    +                        for field in return_type]
    +            except KeyError:
    --- End diff --
    
    I think we want to be a little more careful here; for example, a `KeyError` raised inside `to_arrow_type` could lead to unexpected behavior.
    
    How about something like this:
    ```python
    if any(isinstance(name, basestring) for name in result.columns):
        return [(result[field.name], to_arrow_type(field.dataType)) for field in return_type]
    else:
        return [(result.iloc[:, i], to_arrow_type(field.dataType)) for i, field in enumerate(return_type)]
    ```
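    To illustrate the distinction the check above relies on, here is a small standalone sketch (plain pandas, outside pyspark; `pick_columns` and the sample frames are hypothetical names, and `str` stands in for Python 2's `basestring`). A UDF result built from a dict has string column labels and can be matched by schema name, while one built from plain rows gets an integer `RangeIndex` and must be matched by position:

    ```python
    import pandas as pd

    # Result with named columns: labels are strings, so name lookup is safe.
    named = pd.DataFrame({"id": [1, 2], "v": [0.5, 1.5]})
    # Result without names: columns default to integers 0, 1, ...
    unnamed = pd.DataFrame([[1, 0.5], [2, 1.5]])

    def pick_columns(result, field_names):
        # Mirrors the proposed check: only match by name when the result
        # actually carries string column labels; otherwise fall back to
        # positional selection with iloc.
        if any(isinstance(name, str) for name in result.columns):
            return [result[name] for name in field_names]
        return [result.iloc[:, i] for i in range(len(field_names))]

    by_name = pick_columns(named, ["id", "v"])
    by_pos = pick_columns(unnamed, ["id", "v"])
    ```

    Branching on the column labels up front avoids relying on a `KeyError` to choose the path, which is the concern raised above.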
    
    