Github user mridulm commented on the pull request:

    https://github.com/apache/spark/pull/1780#issuecomment-51269524
  
    Hi @pwendell, my observation about buffer size was not in the context of spark 
... we saw issues that "looked like" buffer overflow when the serialized 
object graph was large, and the buffer growth was not being handled properly.
    Fortunately, this turned out to be a bug in our own code to begin with (the object 
being serialized was holding an unneeded reference to a large graph of objects, 
running to an MB or so), so we did not need to pursue it much further.
    But having seen something that should have been handled anyway, I want to 
make sure that changing the default does not cause surprises for our users.
    
    If there are issues with buffer growth and we lower the limit, a lot of 
jobs will start failing on release.
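    As a minimal sketch of what users would have to do if the default were 
lowered, assuming the Spark 1.x era property names 
spark.kryoserializer.buffer.mb and spark.kryoserializer.buffer.max.mb 
(these names changed in later releases), not a definitive recommendation:

        import org.apache.spark.{SparkConf, SparkContext}

        // Sketch only: explicitly raise the Kryo buffer ceiling so that large
        // serialized object graphs still fit if the shipped default is lowered.
        val conf = new SparkConf()
          .setAppName("kryo-buffer-example")
          .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
          .set("spark.kryoserializer.buffer.mb", "2")      // initial buffer size (MB)
          .set("spark.kryoserializer.buffer.max.mb", "64") // cap on buffer growth (MB)
        val sc = new SparkContext(conf)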
    
    Given some of the past bugs we have fixed, @pwendell (the flush issue comes 
to mind, for example!), I am very wary of kryo - when it works, it is great; 
the rest is suspect :-)


