Sandeep Singh created SPARK-41870:
-------------------------------------
Summary: Handle duplicate columns in `createDataFrame`
Key: SPARK-41870
URL: https://issues.apache.org/jira/browse/SPARK-41870
Project: Spark
Issue Type: Sub-task
Components: Connect
Affects Versions: 3.4.0
Reporter: Sandeep Singh
{code:python}
import array
from pyspark.sql import Row

data = [Row(longarray=array.array("l", [-9223372036854775808, 0, 9223372036854775807]))]
df = self.spark.createDataFrame(data){code}
Error:
{code:java}
Traceback (most recent call last):
  File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/tests/test_dataframe.py", line 1220, in test_create_dataframe_from_array_of_long
    df = self.spark.createDataFrame(data)
  File "/Users/s.singh/personal/spark-oss/python/pyspark/sql/connect/session.py", line 260, in createDataFrame
    table = pa.Table.from_pylist([row.asDict(recursive=True) for row in _data])
  File "pyarrow/table.pxi", line 3700, in pyarrow.lib.Table.from_pylist
  File "pyarrow/table.pxi", line 5221, in pyarrow.lib._from_pylist
  File "pyarrow/table.pxi", line 3575, in pyarrow.lib.Table.from_arrays
  File "pyarrow/table.pxi", line 1383, in pyarrow.lib._sanitize_arrays
  File "pyarrow/table.pxi", line 1364, in pyarrow.lib._schema_from_arrays
  File "pyarrow/array.pxi", line 320, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
  File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Could not convert array('l', [-9223372036854775808, 0, 9223372036854775807]) with type array.array: did not recognize Python value type when inferring an Arrow data type{code}
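The failure happens because PyArrow's type inference does not recognize {{array.array}} values inside the row dicts passed to {{pa.Table.from_pylist}}. As a client-side workaround until the Connect session converts these itself, the values can be turned into plain lists before conversion. The {{listify}} helper below is a hypothetical sketch, not part of the Spark or PyArrow API:
{code:python}
import array

def listify(value):
    """Recursively replace array.array values with plain lists so that
    Arrow type inference can handle them."""
    if isinstance(value, array.array):
        return list(value)
    if isinstance(value, dict):
        return {k: listify(v) for k, v in value.items()}
    if isinstance(value, (list, tuple)):
        return [listify(v) for v in value]
    return value

# The row dict that pa.Table.from_pylist would reject as-is:
row = {"longarray": array.array("l", [-9223372036854775808, 0, 9223372036854775807])}
cleaned = listify(row)
# cleaned["longarray"] is now a plain list of Python ints, which Arrow
# infers as a list<int64> column.{code}
Applying this over {{row.asDict(recursive=True)}} in {{createDataFrame}} would let the reproduction above succeed.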
--
This message was sent by Atlassian Jira
(v8.20.10#820010)