[ 
https://issues.apache.org/jira/browse/SPARK-38067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Maciej Szymkiewicz updated SPARK-38067:
---------------------------------------
    Summary: Inconsistent missing values handling in Pandas on Spark to_json  
(was: Pandas on spark deletes columns with all None as default.)

> Inconsistent missing values handling in Pandas on Spark to_json
> ---------------------------------------------------------------
>
>                 Key: SPARK-38067
>                 URL: https://issues.apache.org/jira/browse/SPARK-38067
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 3.2.1
>            Reporter: Bjørn Jørgensen
>            Priority: Major
>
> With pandas:
> {code:python}
> import pandas as pd
> data = {'col_1': [3, 2, 1, 0], 'col_2': [None, None, None, None]}
> test_pd = pd.DataFrame.from_dict(data)
> test_pd.shape
> {code}
> (4, 2)
> {code:python}
> test_pd.to_json("testpd.json")
> test_pd2 = pd.read_json("testpd.json")
> test_pd2.shape
> {code}
> (4, 2)
> The pandas-on-Spark API, however, drops any column whose values are all null:
> {code:python}
> import pyspark.pandas as ps
> data = {'col_1': [3, 2, 1, 0], 'col_2': [None, None, None, None]}
> test_ps = ps.DataFrame.from_dict(data)
> test_ps.shape
> {code}
> (4, 2)
> {code:python}
> test_ps.to_json("testps.json")
> test_ps2 = ps.read_json("testps.json/*")
> test_ps2.shape
> {code}
> (4, 1)
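> The column disappears because Spark's JSON writer omits null-valued fields by default, so a column that is entirely null never appears in any written record. One way to confirm this (a sketch, assuming an active SparkSession named spark and the testps.json output from above) is to read the files back as plain text:
> {code:python}
> spark.read.text("testps.json/*").show(truncate=False)
> # Every line contains only col_1, e.g. {"col_1":3}; col_2 is never written,
> # so read_json cannot recover it and the round trip loses the column.
> {code}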
> We need to change this so that the pandas-on-Spark API behaves like pandas. I have opened a PR for this.
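> Until that change lands, one possible workaround is to ask the JSON writer to keep null fields. This is a sketch, assuming ps.DataFrame.to_json forwards data source options such as ignoreNullFields to the underlying Spark JSON writer; the output path is illustrative:
> {code:python}
> import pyspark.pandas as ps
>
> data = {'col_1': [3, 2, 1, 0], 'col_2': [None, None, None, None]}
> test_ps = ps.DataFrame.from_dict(data)
>
> # Keep null-valued fields in the written JSON instead of dropping them.
> test_ps.to_json("testps_keepnulls.json", ignoreNullFields=False)
>
> test_ps2 = ps.read_json("testps_keepnulls.json/*")
> test_ps2.shape  # expected (4, 2) if the option is honoured
> {code}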



