[ https://issues.apache.org/jira/browse/TOREE-260?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15149510#comment-15149510 ]
Kapil Malik commented on TOREE-260:
-----------------------------------
Hi [~lbustelo]
Thanks for the reply. I had previously been using the following settings in
kernel.json:
"argv": [
  "/home/hadoop/incubator-toree/dist/toree/bin/run.sh",
  "--profile",
  "{connection_file}"
],
However, it would open a new set of 5 ports every time I opened a new notebook.
So this time I provided a fixed set of ports, like this:
"argv": [
"/home/hadoop/incubator-toree/dist/toree/bin/run.sh",
"--stdin-port",
"35153",
"--control-port",
"43870",
"--heartbeat-port",
"33136",
"--shell-port",
"48798",
"--iopub-port",
"45141",
"--profile",
"{connection_file}"
],
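For reference, the "argv" fragment above lives inside a full Jupyter kernel
spec. A minimal sketch of what the complete kernel.json might look like with
these fixed ports (the "display_name" and "language" values here are
assumptions for illustration; the path and port numbers are copied from above):

```json
{
  "display_name": "Toree (fixed ports)",
  "language": "scala",
  "argv": [
    "/home/hadoop/incubator-toree/dist/toree/bin/run.sh",
    "--stdin-port", "35153",
    "--control-port", "43870",
    "--heartbeat-port", "33136",
    "--shell-port", "48798",
    "--iopub-port", "45141",
    "--profile",
    "{connection_file}"
  ]
}
```

Note that "{connection_file}" is a placeholder that Jupyter substitutes with
the path of the connection file it generates at kernel startup.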
And I see the logs do show that these ports were included in the connection
profile at startup.
However, now my Spark notebook does not seem to work at all. It simply hangs
on a simple operation (the asterisk shows on the notebook cell, with no Spark
job / stage running in the Spark UI).
Please suggest.
> Using same spark context for multiple notebooks
> -----------------------------------------------
>
> Key: TOREE-260
> URL: https://issues.apache.org/jira/browse/TOREE-260
> Project: TOREE
> Issue Type: Wish
> Reporter: Kapil Malik
>
> Hi,
> We are using Toree with Jupyter and have a pressing requirement to use the
> same Spark context for multiple notebooks.
> I know this has been referred to in
> https://issues.apache.org/jira/browse/TOREE-211 as well, but I am not clear
> whether it has been resolved, and if yes, how to go about achieving this.
> I am ready to spend active development time on this, but would need some
> shepherding on where to look.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)