When running Scala as the default interpreter in a notebook:

Cell 1:
val x = 3
kernel.data.put("x", x)

Cell 2:
%%pyspark
x = kernel._jvm_kernel.data().get("x")
kernel._jvm_kernel.data().put("x", x + 1)

Cell 3:
println(kernel.data.get("x"))
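For reference, the cross-cell handoff above reduces to plain `java.util.Map` operations on a shared concurrent map, which is what Chip describes `kernel.data` being backed by. A minimal standalone sketch (ordinary Java, outside any notebook; the `data` map here is a hypothetical stand-in for `kernel.data`) of the same put/get/increment sequence:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SharedDataSketch {
    public static void main(String[] args) {
        // Stand-in for Toree's kernel.data: a Map[String, Any] backed by
        // a concurrent hash map, so it is safe across interpreter threads.
        Map<String, Object> data = new ConcurrentHashMap<>();

        // Cell 1 (Scala side): store a value under a key.
        data.put("x", 3);

        // Cell 2 (PySpark side, via the Py4J proxy): read, increment, write back.
        int x = (Integer) data.get("x");
        data.put("x", x + 1);

        // Cell 3 (Scala side): read the updated value.
        System.out.println(data.get("x")); // prints 4
    }
}
```

Note that values come back as `Object` (`Any` on the Scala side), so each consumer has to cast to the expected type itself.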

On Wed, Nov 2, 2016 at 11:27 AM <[email protected]> wrote:

> Thanks Chip, now, I understand how to work with it from the JVM side. Any
> chance you have a snippet of how to get a value from the map in python?
>
> Ian Maloney
> Platform Architect
> Advanced Analytics
> Internal: 828716
> Office: (734) 623-8716
> Mobile: (313) 910-9272
>
> On 11/2/16, 11:39 AM, "Chip Senkbeil" <[email protected]> wrote:
>
> >While it isn't supported (we don't test its use in this case), you can
> >store objects in a shared hashmap under the kernel object that is made
> >available in each interpreter. The map is exposed as `kernel.data`, but
> >the
> >way you access and store data is different per language.
> >
> >The signature of the data map on the kernel is `val data: java.util.Map[
> >String, Any]`. It is backed by a concurrent hash map, so it is safe to
> >access from multiple threads.
> >
> >On Wed, Nov 2, 2016 at 10:28 AM <[email protected]> wrote:
> >
> >> Hi,
> >>
> >> I'm working primarily using the default Scala/Spark interpreter. It
> >> works great, except when I need to plot something. Is there a way I can
> >> take a Scala object or Spark data frame I've created in a Scala cell
> >> and pass it off to a PySpark cell for plotting?
> >>
> >> This documentation issue might be related. I'd be happy to try to
> >> document this once I know how :)
> >>
> >>
> >>
> >> https://issues.apache.org/jira/browse/TOREE-286
> >>
> >> Thanks!
> >>
> >> Ian
> >>
> >>
>
>
