[ https://issues.apache.org/jira/browse/SPARK-50051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ruifeng Zheng resolved SPARK-50051.
-----------------------------------
    Fix Version/s: 4.0.0
       Resolution: Fixed

Issue resolved by pull request 48589
[https://github.com/apache/spark/pull/48589]

> Spark Connect should support str ndarray
> ----------------------------------------
>
>                 Key: SPARK-50051
>                 URL: https://issues.apache.org/jira/browse/SPARK-50051
>             Project: Spark
>          Issue Type: Improvement
>          Components: Connect, PySpark
>    Affects Versions: 4.0.0
>            Reporter: Ruifeng Zheng
>            Assignee: Ruifeng Zheng
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 4.0.0
>
>
> In [5]: spark.range(1).select(sf.lit(np.array(["a", "b"], np.str_))).schema
> ---------------------------------------------------------------------------
> PySparkTypeError                          Traceback (most recent call last)
> Cell In[5], line 1
> ----> 1 spark.range(1).select(sf.lit(np.array(["a", "b"], np.str_))).schema
> File ~/Dev/spark/python/pyspark/sql/utils.py:272, in try_remote_functions.<locals>.wrapped(*args, **kwargs)
>     269 if is_remote() and "PYSPARK_NO_NAMESPACE_SHARE" not in os.environ:
>     270     from pyspark.sql.connect import functions
> --> 272     return getattr(functions, f.__name__)(*args, **kwargs)
>     273 else:
>     274     return f(*args, **kwargs)
> File ~/Dev/spark/python/pyspark/sql/connect/functions/builtin.py:271, in lit(col)
>     269 elif isinstance(col, np.ndarray) and col.ndim == 1:
>     270     if _from_numpy_type(col.dtype) is None:
> --> 271         raise PySparkTypeError(
>     272             errorClass="UNSUPPORTED_NUMPY_ARRAY_SCALAR",
>     273             messageParameters={"dtype": col.dtype.name},
>     274         )
>     276     # NumpyArrayConverter for Py4J can not support ndarray with int8 values.
>     277     # Actually this is not a problem for Connect, but here still convert it
>     278     # to int16 for compatibility.
>     279     if col.dtype == np.int8:
> PySparkTypeError: [UNSUPPORTED_NUMPY_ARRAY_SCALAR] The type of array scalar 'str32' is not supported.
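For context, the error above fires because the numpy-to-Spark dtype mapping in `lit()` covered numeric dtypes only, so a 1-D string ndarray fell through to the UNSUPPORTED_NUMPY_ARRAY_SCALAR path. A minimal sketch of the pre-fix workaround (an illustration, not taken from the ticket) is to convert the ndarray to a plain Python list of str before passing it to `lit()`:

```python
import numpy as np

# The failing input from the traceback: a 1-D unicode ndarray.
arr = np.array(["a", "b"], np.str_)

# numpy names unicode dtypes by bit width: '<U1' is 4 bytes per element,
# so its dtype name is 'str32' -- the name quoted in the error message.
print(arr.dtype.name)  # str32

# Pre-fix workaround: pass a plain Python list instead of the ndarray,
# which lit() already accepted as an array literal.
as_list = arr.tolist()
print(as_list)  # ['a', 'b']
```

As of the fix in pull request 48589 (Fix Version 4.0.0), the ndarray form is handled directly and this conversion is no longer needed.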



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
