itholic opened a new pull request, #39786:
URL: https://github.com/apache/spark/pull/39786

   ### What changes were proposed in this pull request?
   
   This PR proposes to allow the `columns` parameter when creating a `ps.DataFrame` from a `ps.Series`, under a limited condition.
   
   ### Why are the changes needed?
   
   In pandas, when `columns` contains column names beyond the valid one, the extra columns are attached and filled with missing values:
   
   ```python
   >>> pser  # pandas Series
   0.427027    1
   0.904592    2
   0.599768    3
   Name: x, dtype: int64
   
   >>> pd.DataFrame(pser, columns=["x", "y", "z"])
             x    y    z
   0.427027  1  NaN  NaN
   0.904592  2  NaN  NaN
   0.599768  3  NaN  NaN
   ```
   
   But this behavior is potentially quite expensive in pandas API on Spark, which I suspect is why we don't currently support it.
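
   For reference, if a user actually needs the pandas-style behavior of attaching the extra missing-value columns, something along these lines should already work (a sketch; it assumes `reindex` fills the new `"y"` and `"z"` columns with missing values, as in pandas):

   ```python
   # Sketch: emulate the pandas behavior explicitly by converting the Series
   # to a DataFrame and reindexing; the added columns are expected to be
   # filled with missing values, as in pandas.
   psdf = psser.to_frame().reindex(columns=["x", "y", "z"])
   ```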
   
   However, I've seen examples of the following usage, where `columns` simply restates the existing column name:

   ```python
   >>> ps.DataFrame(pser, columns=["x"])
             x
   0.427027  1
   0.904592  2
   0.599768  3
   ```
   
   As shown in the example above, this just works the same as constructing the DataFrame without `columns`.
   
   But it fails when the input is a `ps.Series`:
   
   ```python
   >>> ps.DataFrame(psser, columns=["x"])  # `psser` is pandas-on-Spark Series
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
     File ".../spark/python/pyspark/pandas/frame.py", line 539, in __init__
       assert columns is None
   AssertionError 
   ```
   
   In this case, the user may simply want to state the column name explicitly in their code, so I believe we can allow it rather than raising an `AssertionError`.
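
   For illustration, the relaxed check inside `DataFrame.__init__` could look roughly like this (a sketch only; the variable name `data` and the error type/message are illustrative, not the actual diff):

   ```python
   # Sketch only (not the actual diff): instead of `assert columns is None`,
   # accept `columns` when it merely restates the Series name.
   if columns is not None and list(columns) != [data.name]:
       raise ValueError(
           "columns should be None or contain only the Series name "
           "when constructing a DataFrame from a pandas-on-Spark Series"
       )
   ```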
   
   ### Does this PR introduce _any_ user-facing change?
   
   **Before**
   ```python
   >>> ps.DataFrame(psser, columns=["x"])  # `psser` is pandas-on-Spark Series
   Traceback (most recent call last):
     File "<stdin>", line 1, in <module>
     File ".../spark/python/pyspark/pandas/frame.py", line 539, in __init__
       assert columns is None
   AssertionError 
   ```
   
   **After**
   ```python
   >>> ps.DataFrame(psser, columns=["x"])  # `psser` is pandas-on-Spark Series
             x
   0.427027  1
   0.904592  2
   0.599768  3
   ```
   
   
   ### How was this patch tested?
   
   Added UTs.
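
   For reference, a test along these lines (illustrative; not necessarily the exact UT added in this PR) could check the new behavior against pandas:

   ```python
   import pandas as pd
   from pandas.testing import assert_frame_equal

   import pyspark.pandas as ps

   pser = pd.Series([1, 2, 3], name="x")
   psser = ps.from_pandas(pser)

   # The pandas-on-Spark result with `columns` restating the Series name
   # should match the plain pandas construction.
   assert_frame_equal(
       ps.DataFrame(psser, columns=["x"]).to_pandas().sort_index(),
       pd.DataFrame(pser, columns=["x"]),
   )
   ```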



