Yikun commented on code in PR #36353:
URL: https://github.com/apache/spark/pull/36353#discussion_r920652980


##########
python/pyspark/pandas/frame.py:
##########
@@ -8412,7 +8430,11 @@ def update(self, other: "DataFrame", join: str = "left", overwrite: bool = True)
             *HIDDEN_COLUMNS,
         )
         internal = self._internal.with_new_sdf(sdf, data_fields=data_fields)
-        self._update_internal_frame(internal, requires_same_anchor=False)
+        # Since Spark 3.4, df.update generates a new dataframe instead of operating
+        # in-place to follow pandas v1.4 behavior, see also SPARK-38946.
+        self._update_internal_frame(
+            internal, requires_same_anchor=False, anchor_force_disconnect=True

Review Comment:
   See https://github.com/apache/spark/pull/36353#issuecomment-1178956945 . After confirmation from the pandas community, we only make `setitem` create a copy, so this line has been removed.
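   For context, the behavior being discussed mirrors plain pandas, where `DataFrame.update` mutates the frame in place and returns `None` rather than producing a new frame. A minimal sketch (plain pandas only, not pandas-on-Spark internals):

   ```python
   import pandas as pd

   # Plain pandas: DataFrame.update modifies the frame in place and returns None.
   df = pd.DataFrame({"a": [1, 2, 3]})
   other = pd.DataFrame({"a": [10, 20, 30]})
   result = df.update(other)

   assert result is None                     # no new frame is returned
   assert df["a"].tolist() == [10, 20, 30]   # df itself was modified
   ```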



##########
python/pyspark/pandas/frame.py:
##########
@@ -11944,7 +11966,11 @@ def eval_func(pdf):  # type: ignore[no-untyped-def]
         if inplace:
             # Here, the result is always a frame because the error is thrown during schema inference
             # from pandas.
-            self._update_internal_frame(result._internal, requires_same_anchor=False)
+            # Since Spark 3.4, eval with inplace generates a new dataframe instead of operating
+            # in-place to follow pandas v1.4 behavior, see also SPARK-38946.
+            self._update_internal_frame(
                result._internal, requires_same_anchor=False, anchor_force_disconnect=True

Review Comment:
   See https://github.com/apache/spark/pull/36353#issuecomment-1178956945 . After confirmation from the pandas community, we only make `setitem` create a copy, so this line has been removed.
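   Likewise, plain pandas `DataFrame.eval` with `inplace=True` mutates the frame and returns `None`, which is the behavior this hunk was mirroring. A minimal sketch (plain pandas only):

   ```python
   import pandas as pd

   # Plain pandas: eval(..., inplace=True) modifies the frame and returns None.
   df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
   result = df.eval("c = a + b", inplace=True)

   assert result is None              # no new frame is returned
   assert df["c"].tolist() == [4, 6]  # column c was added to df itself
   ```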



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

