[ https://issues.apache.org/jira/browse/SPARK-41838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sandeep Singh updated SPARK-41838:
----------------------------------
    Description: 
{code:python}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1472, in pyspark.sql.connect.functions.posexplode_outer
Failed example:
    df.select("id", "a_map", posexplode_outer("an_array")).show()
Expected:
    +---+----------+----+----+
    | id|     a_map| pos| col|
    +---+----------+----+----+
    |  1|{x -> 1.0}|   0| foo|
    |  1|{x -> 1.0}|   1| bar|
    |  2|        {}|null|null|
    |  3|      null|null|null|
    +---+----------+----+----+
Got:
    +---+------+----+----+
    | id| a_map| pos| col|
    +---+------+----+----+
    |  1| {1.0}|   0| foo|
    |  1| {1.0}|   1| bar|
    |  2|{null}|null|null|
    |  3|  null|null|null|
    +---+------+----+----+
    <BLANKLINE>{code}
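The failing doctest shows Spark Connect's {{show()}} rendering only the map's values ({{{1.0}}}, {{{null}}}) instead of classic PySpark's {{{key -> value}}} pairs. A minimal pure-Python sketch of the expected cell formatting (illustrative only; {{format_map_cell}} is a hypothetical helper, not Spark's actual implementation):

```python
def format_map_cell(m):
    """Render a map column cell the way classic PySpark's show() does:
    None       -> "null"
    {}         -> "{}"
    {"x": 1.0} -> "{x -> 1.0}"  (keys AND values, joined by " -> ")
    """
    if m is None:
        return "null"
    return "{" + ", ".join(f"{k} -> {v}" for k, v in m.items()) + "}"


print(format_map_cell({"x": 1.0}))  # {x -> 1.0}
print(format_map_cell({}))          # {}
print(format_map_cell(None))        # null
```

The buggy output above corresponds to dropping the {{k -> }} prefix, which also turns an empty map into {{{null}}} rather than {{{}}}.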

  was:
{code:python}
File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/functions.py", line 1594, in pyspark.sql.connect.functions.to_json
Failed example:
    df = spark.createDataFrame(data, ("key", "value"))
Exception raised:
    Traceback (most recent call last):
      File "/usr/local/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/doctest.py", line 1350, in __run
        exec(compile(example.source, filename, "single",
      File "<doctest pyspark.sql.connect.functions.to_json[3]>", line 1, in <module>
        df = spark.createDataFrame(data, ("key", "value"))
      File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/session.py", line 252, in createDataFrame
        table = pa.Table.from_pandas(pdf)
      File "pyarrow/table.pxi", line 3475, in pyarrow.lib.Table.from_pandas
      File "/usr/local/lib/python3.10/site-packages/pyarrow/pandas_compat.py", line 611, in dataframe_to_arrays
        arrays = [convert_column(c, f)
      File "/usr/local/lib/python3.10/site-packages/pyarrow/pandas_compat.py", line 611, in <listcomp>
        arrays = [convert_column(c, f)
      File "/usr/local/lib/python3.10/site-packages/pyarrow/pandas_compat.py", line 598, in convert_column
        raise e
      File "/usr/local/lib/python3.10/site-packages/pyarrow/pandas_compat.py", line 592, in convert_column
        result = pa.array(col, type=type_, from_pandas=True, safe=safe)
      File "pyarrow/array.pxi", line 316, in pyarrow.lib.array
      File "pyarrow/array.pxi", line 83, in pyarrow.lib._ndarray_to_array
      File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
    pyarrow.lib.ArrowInvalid: ("Could not convert 'Alice' with type str: tried to convert to int64", 'Conversion failed for column 1 with type object'){code}
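For context on the superseded description above: the {{ArrowInvalid}} arises because the supplied schema assigns an integer type to a column that actually holds strings, and pyarrow's safe conversion refuses lossy coercion of {{'Alice'}} to int64. A minimal pure-Python sketch of that strict-conversion behavior (illustrative only; {{safe_int64_column}} is a hypothetical helper, not pyarrow's API):

```python
def safe_int64_column(values):
    """Mimic pyarrow's safe conversion of a column to int64:
    every value must already be an integer (no lossy coercion)."""
    converted = []
    for v in values:
        if isinstance(v, bool) or not isinstance(v, int):
            raise ValueError(
                f"Could not convert {v!r} with type {type(v).__name__}: "
                "tried to convert to int64"
            )
        converted.append(v)
    return converted


safe_int64_column([1, 2, 3])         # succeeds
try:
    safe_int64_column([2, "Alice"])  # a str cannot be safely coerced
except ValueError as e:
    print(e)
```

The fix on the Spark side is to pass a schema whose column types match the data, rather than relaxing the conversion.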


> DataFrame.show() fix map printing
> ---------------------------------
>
>                 Key: SPARK-41838
>                 URL: https://issues.apache.org/jira/browse/SPARK-41838
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Connect
>    Affects Versions: 3.4.0
>            Reporter: Sandeep Singh
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
