Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/2313#issuecomment-56127611
  
    This is a tricky issue.
    
    Exact reproducibility / determinism crops up in two different senses here: 
re-running an entire job and re-computing a lost partition.  Spark's 
lineage-based fault-tolerance is built on the idea that partitions can be 
deterministically recomputed.  Tasks that have dependencies on the external 
environment may violate this determinism property (e.g. by reading the current 
system time to set a random seed).  Workers running different versions of a 
library that give different results are one way the environment can leak into 
tasks, making them non-deterministic depending on where they run.
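
    For example (a minimal sketch, not from this PR; it assumes a spark-shell 
`sc`, and the names and numbers are purely illustrative), seeding a RNG from 
the clock makes a recomputed partition differ from the original one:

```scala
import scala.util.Random

// Each (re)computation of a partition reads the clock at a different moment,
// so a partition that is lost and recomputed after a failure will contain
// different values than the original run produced.
val nonDeterministic = sc.parallelize(1 to 100, 4).mapPartitions { iter =>
  val rng = new Random(System.currentTimeMillis())  // environment leaks in here
  iter.map(x => x + rng.nextInt(10))
}
```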
    
    There are some scenarios where exact reproducibility might be desirable.  
Imagine that I train an ML model on some data, make predictions with it, and 
want to go back and understand the lineage that led to that model being 
created.  To do this, I may want to deterministically re-run the job with 
additional internal logging.  This use case is tricky in general, though: 
details of the execution environment might creep in via other means.  We might 
see different results due to rounding errors / numerical instability if we run 
on environments with different BLAS libraries, etc. (I guess we could say 
"deterministic within some rounding error / to _k_ bits of precision").  Exact 
long-term reproducibility of computational results is a hard, unsolved problem 
in general.
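
    One thing we *can* pin down is the randomness Spark itself controls.  As a 
rough sketch (illustrative only, again assuming a spark-shell `sc`; `jobSeed` 
is a hypothetical name), deriving each partition's seed from a fixed job-level 
seed plus the partition index makes recomputing a lost partition reproduce the 
original values, although library-level differences such as BLAS rounding 
remain outside this kind of control:

```scala
import scala.util.Random

// A fixed job-level seed combined with the partition index gives each
// partition a stable seed, so recomputing that partition yields identical
// values no matter when or on which worker it runs.
val jobSeed = 42L
val deterministic = sc.parallelize(1 to 100, 4).mapPartitionsWithIndex { (idx, iter) =>
  val rng = new Random(jobSeed + idx)
  iter.map(x => x + rng.nextInt(10))
}
```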
    
    /cc @mengxr @jkbradley: since you work on MLlib, what do you think we 
should do here?  Is there any precedent in MLlib and its use of native 
libraries?

