Thanks all for the hints and insights. I just got it up and running
(basically).
@Roland: Your concerns are valid, but we simply use a host
filesystem/directory mounted via docker run -v (alternatively one could try
sshfs, but I have not tested that).
So my setup (using https://pypi.python.org/pypi/remote_ikernel) looks like
this:
Dockerfile:

# Base image with the full Jupyter data science stack
FROM jupyter/datascience-notebook

# Switch to root to install system packages
USER root
RUN apt-get -y update && apt-get -y install openssh-client

# Switch back to the unprivileged notebook user
USER jovyan
RUN jupyter labextension install @jupyterlab/google-drive
RUN pip install remote_ikernel
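
The image can then be built with e.g. (image-name is just the placeholder
reused in the docker run command below):

docker build -t image-name .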
On the host (note that "ipconfig getifaddr en0" is macOS-specific):

docker run -it --rm \
  -p 8888:8888 \
  --add-host="host-machine:$(ipconfig getifaddr en0)" \
  -v "$(pwd)":/home/jovyan/work \
  image-name \
  start.sh /bin/bash
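
On Linux, something like the following should work instead (a sketch; using
the first address reported by hostname -I is an assumption that depends on
your network setup):

# Linux variant: use the first address reported by hostname -I
docker run -it --rm \
  -p 8888:8888 \
  --add-host="host-machine:$(hostname -I | awk '{print $1}')" \
  -v "$(pwd)":/home/jovyan/work \
  image-name \
  start.sh /bin/bash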
Establish passwordless SSH tunnelling to the host (inside the container):

jovyan@46dcdfa9fd9b:~$ ssh-keygen -t rsa
<return -> no passphrase>
jovyan@46dcdfa9fd9b:~$ cat .ssh/id_rsa.pub | ssh xxxx@host-machine 'cat >> .ssh/authorized_keys'
<password>
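
If ssh-copy-id is available in the container (it ships with openssh-client
on Debian/Ubuntu), these two steps can be shortened; a sketch:

# Sketch: generate a key without passphrase and let ssh-copy-id
# append it to authorized_keys on the host (fixes permissions too)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id xxxx@host-machine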
On the host machine, obtain the kernel command with jupyter kernelspec list
--json, then register the remote kernel in the container:

jovyan@46dcdfa9fd9b:~$ remote_ikernel manage --add \
  --kernel_cmd="xxxxxxxxxxxxxxx/matlab-jupyter/bin/python -m matlab_kernel -f {connection_file}" \
  --name="matlab-on-host" --interface=ssh \
  --host=xxxxx@host-machine --workdir='xxxxxxxxxxxxx-environment' \
  --language=matlab
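
To check that the kernel was registered, list the kernelspecs inside the
container (remote_ikernel generates its own internal kernel name from the
--name and host, so the entry may not read literally "matlab-on-host"):

jovyan@46dcdfa9fd9b:~$ jupyter kernelspec list

Then launch JupyterLab: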
jovyan@46dcdfa9fd9b:~$ jupyter lab --ip='*' --port=8888 --no-browser
Copy the URL from the output into your host browser. The MATLAB kernel
appears in JupyterLab - tested OK :-)
ToDo:
The ssh and remote_ikernel manage steps can't be done in the Dockerfile
because the host IP is not known at build time. I must find a way to
automate this; see the sketch below.
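
A possible direction (an untested sketch; the script name, the HOST_USER
environment variable, and passing it via docker run -e are my own
assumptions, not part of the setup above) is an entrypoint script that does
the registration at container start, when the host IP is known:

#!/bin/bash
# setup-remote-kernel.sh - sketch: run at container start instead of build
# time. Assumes HOST_USER is passed in via "docker run -e HOST_USER=..."
# and that the public key has already been authorized on the host.
set -e

# Generate a key pair once, if none exists yet
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Register the remote MATLAB kernel (same command as above)
remote_ikernel manage --add \
  --kernel_cmd="xxxxxxxxxxxxxxx/matlab-jupyter/bin/python -m matlab_kernel -f {connection_file}" \
  --name="matlab-on-host" --interface=ssh \
  --host="${HOST_USER}@host-machine" --workdir='xxxxxxxxxxxxx-environment' \
  --language=matlab

# Hand over to the normal startup
exec start.sh jupyter lab --ip='*' --port=8888 --no-browser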
A JupyterLab extension that scans remote hosts for kernels and connects to
them on demand would be great. @MinRK: Is something like this planned or
ongoing?
Cheers,
Karsten
On Thursday, September 7, 2017 at 8:31:42 AM UTC+2, Roland Weber wrote:
>
> Hello Karsten,
>
> one problem you will encounter when using remote kernels is that the
> kernel doesn't have access to the file system you're seeing in Jupyter Lab.
> That's almost sure to confuse users. They can edit files in their browser,
> but then the kernel doesn't see them. The notebook files will work fine,
> because they're interpreted by Jupyter and only code snippets are sent for
> execution to the kernel. But if there are data files, or libraries you'd
> like to import from the kernel, they cannot be read.
>
> Another potential problem is libraries that come with a notebook
> extension, like ipywidgets with widgetsnbextension. One piece has to be
> installed in the remote kernel, the other in Jupyter. As long as you
> control both environments, you can keep versions in sync. When you control
> only the Jupyter side, you have to rely on someone else on the remote side
> to sync with what you're doing.
>
> So no, there isn't a standard way or best practice for using remote
> kernels. Afaict, the default UI for Jupyter Lab and Notebooks implicitly
> assumes that kernels are local, and that's true in most cases. Installation
> instructions for extension packages make the same assumption. You can get
> remote kernels to work, but the user experience will suffer.
>
> Jupyter Kernel Gateway is one way to provide remote kernels. There's a
> demo called "nb2kg" that shows how to run a notebook server with all
> kernels being remote on one gateway. I'm not aware of a KG demo that would
> mix local and remote kernels, nor of an adaptation for Jupyter Lab.
> https://github.com/jupyter/kernel_gateway_demos/tree/master/nb2kg
>
> cheers,
> Roland
>