Yikun commented on a change in pull request #35167:
URL: https://github.com/apache/spark/pull/35167#discussion_r782675144
##########
File path: python/pyspark/pandas/indexing.py
##########
@@ -1311,7 +1312,9 @@ def _select_cols_by_iterable(
                     % (len(cast(Sized, cols_sel)), len(self._internal.column_labels))
                 )
             if isinstance(cols_sel, pd.Series):
-                if not cols_sel.index.sort_values().equals(self._psdf.columns.sort_values()):
+                if get_option("compute.eager_check") and not cols_sel.index.sort_values().equals(
+                    self._psdf.columns.sort_values()
Review comment:
Ah, yes, column selection is okay here.

Also, there is a behaviour difference between pandas and pandas on Spark. I hit this problem because in the original case `pdf[[True, False, True]]` is a row selection in pandas, but in pandas on Spark it is a [column selection](https://github.com/apache/spark/blob/master/python/pyspark/pandas/indexing.py#L805-L806). I didn't realize that before; there is a sketch of the difference below.
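A minimal sketch of the difference, assuming a small throwaway DataFrame (the data and column names are illustrative, not from this PR):

```python
import pandas as pd
import pyspark.pandas as ps

pdf = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

# pandas: a bare boolean list is a ROW mask over the index,
# so this keeps rows 0 and 2, with all columns.
pdf[[True, False, True]]

# pandas on Spark: per the linked indexing.py lines, the same
# expression takes the COLUMN-selection path instead, so it does
# not act as a row mask (with 2 columns vs. 3 booleans, I would
# expect a length-mismatch error rather than row filtering).
psdf = ps.from_pandas(pdf)
try:
    psdf[[True, False, True]]
except Exception as e:
    print(type(e).__name__, e)
```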
Should I push a follow-up commit to fix the docs and address the above comments, or should I just close this PR?
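P.S. For reference, a hedged sketch of what gating the check on `compute.eager_check` (from the diff above) means in practice. The option name comes from the diff; the `psdf` / `mask` names are hypothetical, and the outcomes in the comments follow the diff's logic rather than verified output:

```python
import pandas as pd
import pyspark.pandas as ps
from pyspark.pandas.config import set_option, reset_option

psdf = ps.DataFrame({"a": [1, 2], "b": [3, 4]})

# A boolean Series used as a column selector whose index does NOT
# match psdf.columns ("c" instead of "b"), the case the diff checks.
mask = pd.Series([True, False], index=["a", "c"])

# Default (eager_check on): the sorted-index comparison in the diff
# runs up front, so the mismatch should surface immediately.
set_option("compute.eager_check", True)
try:
    psdf.loc[:, mask]
except Exception as e:
    print(type(e).__name__, e)

# With eager_check off, that up-front sort/compare is skipped
# (downstream behaviour is not verified here).
set_option("compute.eager_check", False)
psdf.loc[:, mask]

reset_option("compute.eager_check")
```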