pralabhkumar commented on a change in pull request #35191:
URL: https://github.com/apache/spark/pull/35191#discussion_r820885856
##########
File path: python/pyspark/pandas/series.py
##########
@@ -5241,9 +5253,16 @@ def asof(self, where: Union[Any, List]) -> Union[Scalar, "Series"]:
         # The data is expected to be small so it's fine to transpose/use default index.
         with ps.option_context("compute.default_index_type", "distributed", "compute.max_rows", 1):
-            psdf: DataFrame = DataFrame(sdf)
-            psdf.columns = pd.Index(where)
-            return first_series(psdf.transpose()).rename(self.name)
+            if len(where) == len(set(where)) and not isinstance(index_type, TimestampType):
+                psdf: DataFrame = DataFrame(sdf)
+                psdf.columns = pd.Index(where)
+                return first_series(psdf.transpose()).rename(self.name)
+            else:
+                # If `where` has duplicate items, leverage the pandas directly
Review comment:
@HyukjinKwon . IMHO , this may not be very uncommon case . Is there a
reason to not handle this case and throw exception
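To illustrate the duplicates concern, here is a minimal pandas-only sketch (hypothetical data, independent of the pyspark.pandas code under review): plain pandas `Series.asof` accepts duplicate entries in `where` and simply repeats the matched values, whereas assigning `pd.Index(where)` as DataFrame column labels would create duplicate columns, which is presumably why the patch falls back to pandas in that branch.

```python
import pandas as pd

# Minimal pandas-only sketch with hypothetical data: Series.asof
# accepts duplicate entries in `where` and repeats the matched values.
s = pd.Series([1, 2, 3], index=[10, 20, 30])
result = s.asof([20, 20, 25])  # `where` contains a duplicate (20)
print(list(result.index))  # [20, 20, 25]
print(list(result))

# By contrast, pd.Index([20, 20, 25]) used as column labels would
# produce duplicate columns, which the transpose-based fast path in
# the diff above cannot map back to distinct result rows.
```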
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]