Github user Tagar commented on the issue:
    1) My main point was that this exception should be thrown to the user, so 
he or she has a chance to increase this limit. Currently, if it breaks, the only 
way to find out about this limitation is to enable debug logging, and not many 
users can do that. 
    2) You're right .. it's 200M; not sure how that user ended up with that much 
data. That wasn't from my code but from a colleague of mine; I guess it was a 
larger table of data. Would you mind making the default somewhere in the range 
of 16-32M? I think a lot of folks would run into the 4M limit.
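    As a rough sketch of what raising the limit might look like on the client 
side (assuming the standard gRPC Python channel options; the exact place 
Zeppelin would set this is not shown here):

    ```python
    # Hypothetical sketch: gRPC caps messages at 4 MB by default; these are
    # the standard channel options one would pass to raise that ceiling.
    # The 32 MB value below matches the upper end of the range suggested above.
    MAX_MESSAGE_SIZE = 32 * 1024 * 1024  # 32 MB instead of the 4 MB default

    channel_options = [
        ("grpc.max_send_message_length", MAX_MESSAGE_SIZE),
        ("grpc.max_receive_message_length", MAX_MESSAGE_SIZE),
    ]
    # e.g. channel = grpc.insecure_channel(target, options=channel_options)
    ```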
    3) Also, it would be great if IPythonInterpreter caught exceptions 
better. I found another problem - unrelated to this one, 
but it shows the same symptoms to the user - the Spark interpreter just 
becomes unresponsive.

