Bjørn Jørgensen created SPARK-38067:
---------------------------------------
Summary: Pandas on Spark drops columns whose values are all None by default.
Key: SPARK-38067
URL: https://issues.apache.org/jira/browse/SPARK-38067
Project: Spark
Issue Type: Bug
Components: PySpark
Affects Versions: 3.2.1
Reporter: Bjørn Jørgensen
With pandas:
{code:python}
import pandas as pd

data = {'col_1': [3, 2, 1, 0], 'col_2': [None, None, None, None]}
test_pd = pd.DataFrame.from_dict(data)
test_pd.shape
{code}
(4, 2)
{code:python}
test_pd.to_json("testpd.json")
test_pd2 = pd.read_json("testpd.json")
test_pd2.shape
{code}
(4, 2)
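For reference, a minimal pandas-only sketch of why the column survives the round trip: {{to_json}} serializes the missing values as explicit JSON nulls, so {{col_2}} is still present in the file that {{read_json}} reads back.
{code:python}
import pandas as pd

# Same data as above: one column of real values, one of all-None.
data = {'col_1': [3, 2, 1, 0], 'col_2': [None, None, None, None]}
test_pd = pd.DataFrame.from_dict(data)

# pandas writes the missing values as explicit JSON nulls,
# so col_2 is present in the output and survives read_json.
print(test_pd.to_json())
{code}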
The pandas-on-Spark API, however, drops the column whose values are all null:
{code:python}
import pyspark.pandas as ps

data = {'col_1': [3, 2, 1, 0], 'col_2': [None, None, None, None]}
test_ps = ps.DataFrame.from_dict(data)
test_ps.shape
{code}
(4, 2)
{code:python}
test_ps.to_json("testps.json")
test_ps2 = ps.read_json("testps.json/*")
test_ps2.shape
{code}
(4, 1)
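Until this is fixed, one possible workaround (a sketch, assuming a local Spark session and that the column's type is known up front) is to bypass Spark's JSON schema inference, which is what discards the all-null column, by reading the file through plain PySpark with an explicit schema and then converting with {{pandas_api()}} (available since Spark 3.2):
{code:python}
import os
import tempfile

import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.types import LongType, StringType, StructField, StructType

# Reproduce the input: a line-delimited JSON file whose col_2 is all null.
path = os.path.join(tempfile.mkdtemp(), "testpd.json")
pd.DataFrame({'col_1': [3, 2, 1, 0],
              'col_2': [None, None, None, None]}
             ).to_json(path, orient="records", lines=True)

spark = SparkSession.builder.master("local[1]").getOrCreate()

# Schema inference drops the all-null column, so declare the schema
# explicitly (StringType for col_2 is an assumption for illustration).
schema = StructType([
    StructField("col_1", LongType()),
    StructField("col_2", StringType()),
])
test_ps2 = spark.read.schema(schema).json(path).pandas_api()
print(test_ps2.shape)  # both columns are kept
{code}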
We need to change this so that the pandas-on-Spark API behaves like pandas.
I have opened a PR for this.
--
This message was sent by Atlassian Jira
(v8.20.1#820001)