GitHub user Tagar commented on the issue:
    Thanks for the heads-up; yes, I figured out that I have to raise 
zeppelin.ipython.grpc.framesize to a larger value. 
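For reference, a sketch of how that property can be set (the property name is from this thread; the 32 MB value is just an illustrative choice, and the default of 4194304 bytes matches the 4 MB default discussed below):

```properties
# In the Zeppelin interpreter settings for the spark/ipython interpreter,
# value is in bytes. Default is 4194304 (4 MB); raise it to e.g. 32 MB:
zeppelin.ipython.grpc.framesize = 33554432
```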
    I looked over the PR. Two quick suggestions:
    1) Would it be possible for the Spark interpreter to keep the stream open 
rather than close it when such an exception happens? We can set a higher limit, 
but I am sure users will hit cases where they try to go even higher. The Spark 
interpreter is then in a bad state, and the only way to fix it is to try 
increasing the limit again.. Not sure whether this problem belongs to Zeppelin 
or to grpc, so I provisionally opened an issue in grpc too -
    2) Should we increase the default? .. 4 MB isn't that hard to hit when 
ipython returns a mid-size dataset / table.
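A back-of-the-envelope check of point 2 (the row/column/cell-width numbers are purely illustrative, not from any real workload):

```python
# Rough estimate of the serialized text size of a mid-size table:
# 50,000 rows x 10 columns, ~10 characters per cell once rendered.
rows, cols, chars_per_cell = 50_000, 10, 10
approx_bytes = rows * cols * chars_per_cell

default_framesize = 4 * 1024 * 1024  # 4 MB grpc default

print(f"approx output: {approx_bytes:,} bytes")       # 5,000,000 bytes
print(f"default frame: {default_framesize:,} bytes")  # 4,194,304 bytes
print("exceeds default:", approx_bytes > default_framesize)
```

So even a table well within what notebook users routinely display can cross the 4 MB boundary.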