That sounds like a problem between Py4J and Hadoop, or perhaps PySpark itself. 
There's not a single appearance of anything from Jupyter in either the code 
or the error message you posted. I doubt that you will find much help for 
that problem in a Jupyter forum. Have you reached out to the Hadoop and/or 
Spark communities yet?

One possible explanation is that the kernel is missing the Spark 
configuration. Another is that "findspark" initializes a local Spark 
instance, whereas you actually want it to connect to the cluster you have 
set up, or that the former leads to the latter. But you'll need advice from 
people with Spark skills, rather than Jupyter skills, to figure that out.
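
For what it's worth, here is a minimal sketch of what such a check might 
look like from the notebook side. The SPARK_HOME path and the master URL 
are placeholders, not values from your setup; only someone who knows your 
cluster can fill in the right ones:

    # Sketch: point the kernel at an explicit cluster master instead of
    # whatever local Spark installation findspark happens to pick up.
    import findspark
    findspark.init("/opt/spark")  # hypothetical SPARK_HOME; adjust to your install

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("spark://your-master-host:7077")  # placeholder; use "yarn" on a YARN cluster
        .appName("jupyter-connectivity-test")
        .getOrCreate()
    )

    # Quick smoke test that actually runs on the executors, not just the driver.
    print(spark.sparkContext.parallelize(range(100)).sum())

If that still fails with the same Py4J error even with an explicit master, 
the problem is almost certainly in the Spark/Hadoop setup rather than in 
Jupyter.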

cheers,
  Roland
