Chip, you’re right, this did the trick:

%%pyspark
print kernel.data().get("x")



Thanks so much for the help!
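
For anyone hitting this later, here is the full round trip in one place (a sketch based on Chip's cells quoted below; it assumes a running Toree notebook on master with Scala as the default interpreter, where the _jvm_kernel indirection is no longer needed):

Cell 1 (Scala, the default interpreter):
    val x = 3
    kernel.data.put("x", x)

Cell 2 (Python, via the %%pyspark magic; note that kernel.data is a
method call from the Python side, hence the parentheses):
    %%pyspark
    x = kernel.data().get("x")
    kernel.data().put("x", x + 1)

Cell 3 (Scala; the update made in Python is visible here):
    println(kernel.data.get("x"))

Since the backing map is a concurrent hashmap typed as java.util.Map[String, Any], access from the two interpreters is thread-safe, but get returns an Any, so Scala code may need a cast before doing anything type-specific with the value.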







On 11/2/16, 1:26 PM, "Chip Senkbeil" <[email protected]> wrote:

>I just did that using the RC3 version of Toree for the 0.1.x branch. If
>you're on master, maybe it doesn't require _jvm_kernel. I just saw that
>was needed for our RC3.
>
>On Wed, Nov 2, 2016 at 12:12 PM <[email protected]> wrote:
>
>> That is not working for me in the release I have, 0.1.0…
>>
>> %%pyspark
>> print dir(kernel._jvm_kernel)
>>
>>
>> ['__call__', '__class__', '__delattr__', '__dict__', '__doc__',
>> '__format__', '__getattribute__', '__hash__', '__init__', '__module__',
>> '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__',
>> '__sizeof__', '__str__', '__subclasshook__', '__weakref__', '_get_args',
>> 'command_header', 'container', 'converters', 'gateway_client', 'name',
>> 'pool', 'target_id']
>>
>>
>> Any ideas?
>>
>> Thanks,
>> Ian
>>
>>
>>
>>
>>
>>
>>
>> On 11/2/16, 12:38 PM, "Chip Senkbeil" <[email protected]> wrote:
>>
>> >When running Scala as the default interpreter in a notebook:
>> >
>> >Cell 1:
>> >val x = 3
>> >kernel.data.put("x", x)
>> >
>> >Cell 2:
>> >%%pyspark
>> >x = kernel._jvm_kernel.data().get("x")
>> >kernel._jvm_kernel.data().put("x", x + 1)
>> >
>> >Cell 3:
>> >println(kernel.data.get("x"))
>> >
>> >On Wed, Nov 2, 2016 at 11:27 AM <[email protected]> wrote:
>> >
>> >> Thanks Chip, now I understand how to work with it from the JVM side.
>> >> Any chance you have a snippet of how to get a value from the map in
>> >> python?
>> >>
>> >> Ian
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On 11/2/16, 11:39 AM, "Chip Senkbeil" <[email protected]> wrote:
>> >>
>> >> >While it isn't supported (we don't test its use in this case), you can
>> >> >store objects in a shared hashmap under the kernel object that is made
>> >> >available in each interpreter. The map is exposed as `kernel.data`, but
>> >> >the way you access and store data is different per language.
>> >> >
>> >> >The signature of the data map on the kernel is
>> >> >`val data: java.util.Map[String, Any]` and we use a concurrent
>> >> >hashmap, so it can handle being accessed from different threads.
>> >> >
>> >> >On Wed, Nov 2, 2016 at 10:28 AM <[email protected]> wrote:
>> >> >
>> >> >> Hi,
>> >> >>
>> >> >> I'm working primarily using the default scala/spark interpreter. It
>> >> >> works great, except when I need to plot something. Is there a way I
>> >> >> can take a scala object or spark data frame I've created in a scala
>> >> >> cell and pass it off to a pyspark cell for plotting?
>> >> >>
>> >> >> This documentation issue might be related. I'd be happy to try to
>> >> >> document this once I know how :)
>> >> >>
>> >> >>
>> >> >>
>> >> >> https://issues.apache.org/jira/browse/TOREE-286
>> >> >>
>> >> >> Thanks!
>> >> >>
>> >> >> Ian
>> >> >>
>> >> >>
>> >>
>> >>
>>
>>
