Github user JoshRosen commented on the pull request:

    https://github.com/apache/spark/pull/2712#issuecomment-58760950
  
    It looks like the original implementation of this converter was added in
    2604939f643bca125f5e2fb53e3221202996d41b, all the way back in 2011, so I
    believe that this would affect every released version of Spark. How does
    this error manifest itself in the wild? Does it lead to silent corruption
    when reading / writing binary data?
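
    For context, a minimal sketch of how that kind of silent corruption could
    look, assuming the converter in question is the BytesWritable one and that
    the problem is returning the writable's full backing array rather than only
    its first getLength() bytes (both assumptions on my part; the record values
    and helper object below are hypothetical):

        import org.apache.hadoop.io.BytesWritable

        // Hypothetical standalone illustration, not the actual converter code.
        object BytesWritablePaddingSketch {
          def main(args: Array[String]): Unit = {
            val bw = new BytesWritable()

            // Hadoop record readers reuse the same BytesWritable instance:
            // an 8-byte record followed by a 3-byte record leaves stale bytes
            // in the backing array past the logical length.
            bw.set(Array[Byte](1, 2, 3, 4, 5, 6, 7, 8), 0, 8)
            bw.set(Array[Byte](9, 9, 9), 0, 3)

            // Returning the raw backing array keeps the stale tail
            // (silent corruption of the record's value):
            val wrong = bw.getBytes
            // Copying only getLength() bytes recovers the intended value:
            val right = java.util.Arrays.copyOfRange(bw.getBytes, 0, bw.getLength)

            println(s"wrong length = ${wrong.length}, bytes = ${wrong.toSeq}")
            println(s"right length = ${right.length}, bytes = ${right.toSeq}") // Seq(9, 9, 9)
          }
        }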

