GitHub user rgbkrk commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20373#discussion_r164341489
  
    --- Diff: python/pyspark/cloudpickle.py ---
    @@ -1019,18 +948,40 @@ def __reduce__(cls):
             return cls.__name__
     
     
    -def _fill_function(func, globals, defaults, dict, module, closure_values):
    -    """ Fills in the rest of function data into the skeleton function 
object
    -        that were created via _make_skel_func().
    +def _fill_function(*args):
    +    """Fills in the rest of function data into the skeleton function object
    +
    +    The skeleton itself is created by _make_skel_func().
    --- End diff ---
    
    That's more of an issue when pickles are persisted to disk, or when nodes in the cluster run different cloudpickle versions. Is that likely for Spark?
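
    For illustration only (not part of the diff): one way a `*args` signature can keep older pickles loadable is to dispatch on the number of arguments. This is a minimal sketch; the positional layout assumed for the old format and the state-restoration details are assumptions, not taken from this PR.

    ```python
    def _fill_function(*args):
        """Restore function state into a skeleton made by _make_skel_func()."""
        if len(args) == 2:
            # Newer layout (assumed): (func, state_dict)
            func, state = args
        elif len(args) == 6:
            # Older layout (assumed): the positional signature shown in the
            # removed lines of the diff above.
            func = args[0]
            keys = ['globals', 'defaults', 'dict', 'module', 'closure_values']
            state = dict(zip(keys, args[1:]))
        else:
            raise ValueError('Unexpected _fill_function arguments: %r' % (args,))

        # Restore the pieces of state; closure-cell handling omitted for brevity.
        func.__globals__.update(state['globals'])
        func.__defaults__ = state['defaults']
        func.__dict__ = state['dict']
        return func
    ```

    Note that this only lets a *newer* unpickler read *older* pickles; an old cloudpickle cannot read the new two-argument form, which is the cross-version scenario in question.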


---
