By "local Spark 2.0 instance", did you mean a standalone cluster on your
local machine? If so, did you update the "master" and "deploy-mode"
settings in the Spark interpreter?
I had this problem as well, but I found that if you set
"spark.executor.memory" to "512m" and make sure your machine has
sufficient physical memory, you are less likely to hit it.
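For reference, a minimal sketch of how that setting could be applied. This assumes a default Zeppelin layout with a conf/zeppelin-env.sh file; you can also set spark.executor.memory directly in the Spark interpreter settings in the Zeppelin UI. The property name is a standard Spark option, but verify the mechanism against your Zeppelin version:

```shell
# conf/zeppelin-env.sh (assumption: default Zeppelin install layout).
# Passes the executor memory setting through to spark-submit when the
# Spark interpreter starts. Restart the interpreter after changing this.
export SPARK_SUBMIT_OPTIONS="--conf spark.executor.memory=512m"
```

Alternatively, open Interpreter settings in the Zeppelin UI, edit the Spark interpreter, and add spark.executor.memory=512m as a property there.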
If that is not your case, please share your ./logs/zeppelin*spark*.log so
people can help you take a look.
On Fri, Oct 14, 2016 at 11:24 PM soralee <sora0...@nflabs.com> wrote:
> Thanks for your kind answers :)
> I downloaded the file through your URL link and tested it.
> For this test, I executed the "bin/zeppelin.cmd" command in a cmd window.
> However, I could not reproduce your problem.
> Did you get any other error logs?
> The "logs" folder is under Zeppelin's home directory.
> If not, when you execute your Spark paragraph (such as "var a=1"), please
> let me know the whole error log.
> Sent from the Apache Zeppelin Users (incubating) mailing list archive at
> Nabble.com.