Re: Dynamic Dependencies

2017-07-11 Thread moon soo Lee
Thanks for sharing your problem.
For now, the only way is to clean the local-repo so the artifact is downloaded again.
Do you mind filing a JIRA issue to track this problem?

Thanks,
moon
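
A minimal sketch of that cleanup, assuming cached artifacts live under $ZEPPELIN_HOME/local-repo (the default layout) and using hypothetical group/artifact coordinates; adjust the path to the snapshot you need refreshed, then restart the interpreter:

import os
import shutil

zeppelin_home = os.environ.get("ZEPPELIN_HOME", "/opt/zeppelin")
# Hypothetical coordinates: replace com/example/my-artifact with the
# groupId/artifactId directories of the snapshot you want re-downloaded.
cached = os.path.join(zeppelin_home, "local-repo", "com", "example", "my-artifact")
if os.path.isdir(cached):
    # Removing the cached copy forces the next dependency resolution to
    # fetch the artifact from the remote repository again.
    shutil.rmtree(cached)
    print("Removed %s; restart the interpreter to re-resolve it." % cached)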

On Tue, Jul 11, 2017 at 4:04 AM Edgardo Vega  wrote:

> I successfully added a Maven snapshot repository and was able to resolve
> the dependencies. Unfortunately, I have published new versions to the
> repository and restarted the interpreter, yet the new artifact is not being
> pulled in.
>
> I set it up using the following template
>
> z.addRepo("RepoName").url("RepoURL").snapshot()
>
>
>  Is there a way to force the artifact to be downloaded on any updates?
>
>
> --
> Cheers,
>
> Edgardo
>


Re: Showing pandas dataframe with utf8 strings

2017-07-11 Thread Ruslan Dautkhanov
Your example works fine for me too.

We're on a Zeppelin snapshot from about two months ago.



-- 
Ruslan Dautkhanov
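
Related to the LC_ALL suggestion quoted further down in this thread, a quick standard-library sketch for checking, from a %python paragraph, which locale and default encoding the restarted interpreter process actually picked up:

import locale
import sys

# Shows e.g. ('en_US', 'UTF-8') once the LC_ALL export has taken effect.
print(locale.getdefaultlocale())
# Python 2 still reports 'ascii' here regardless of locale; that is why
# str() on u'\xf1' fails inside show_dataframe.
print(sys.getdefaultencoding())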

On Tue, Jul 11, 2017 at 3:11 PM, Ben Vogan  wrote:

> Here is the specific example that is failing:
>
> import pandas
> z.show(pandas.DataFrame([u'Jalape\xf1os.'],[1],['Menu']))
>
> On Tue, Jul 11, 2017 at 2:32 PM, Ruslan Dautkhanov 
> wrote:
>
>> Hi Ben,
>>
>> I can't reproduce this
>>
>> from pyspark.sql.types import *
>>> rdd = sc.parallelize([[u'El Niño']])
>>> df = sqlc.createDataFrame(
>>>   rdd, schema=StructType([StructField("unicode data",
>>> StringType(), True)])
>>> )
>>> df.show()
>>> z.show(df)
>>
>>
>> shows unicode character fine.
>>
>>
>>
>> --
>> Ruslan Dautkhanov
>>
>> On Tue, Jul 11, 2017 at 11:37 AM, Ben Vogan  wrote:
>>
>>> Hi Ruslan,
>>>
>>> I tried adding:
>>>
>>>  export LC_ALL="en_US.utf8"
>>>
>>> To my zeppelin-env.sh script and restarted Zeppelin, but I still have
>>> the same problem.  The print statement:
>>>
>>> python -c "print (u'\xf1')"
>>>
>>> works from the note.  I think the problem is the use of the str
>>> function.  Looking at the stack you can see that the zeppelin code is
>>> calling body_buf.write(str(cell)).  If you call str(u'\xf1') you will get
>>> the error.
>>>
>>> --Ben
>>>
>>> On Tue, Jul 11, 2017 at 10:19 AM, Ruslan Dautkhanov <
>>> dautkha...@gmail.com> wrote:
>>>
 $ env | grep LC
> $
> $ python -c "print (u'\xf1')"
> ñ
>


> $ export LC_ALL="C"
> $ python -c "print (u'\xf1')"
> Traceback (most recent call last):
>   File "<string>", line 1, in <module>
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in
> position 0: ordinal not in range(128)
>


> $ export LC_ALL="en_US.utf8"
> $ python -c "print (u'\xf1')"
> ñ
>


> $ unset LC_ALL
> $ env | grep LC
> $
> $ python -c "print (u'El Ni\xf1o')"
> El Niño


 You could add LC_ALL export to your zeppelin-env.sh script.



 --
 Ruslan Dautkhanov

 On Tue, Jul 11, 2017 at 9:35 AM, Ben Vogan  wrote:

> Hi all,
>
> I am trying to use the zeppelin context to show the contents of a
> pandas DataFrame and getting the following error:
>
> Traceback (most recent call last):
>   File "/tmp/zeppelin_python-7554503996532642522.py", line 278, in <module>
>     raise Exception(traceback.format_exc())
> Exception: Traceback (most recent call last):
>   File "/tmp/zeppelin_python-7554503996532642522.py", line 271, in <module>
>     exec(code)
>   File "<stdin>", line 2, in <module>
>   File "/tmp/zeppelin_python-7554503996532642522.py", line 93, in show
>     self.show_dataframe(p, **kwargs)
>   File "/tmp/zeppelin_python-7554503996532642522.py", line 121, in show_dataframe
>     body_buf.write(str(cell))
> UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in
> position 79: ordinal not in range(128)
>
> How do I go about resolving this?
>
> I'm running version 0.7.1 with python 2.7.
>
> Thanks,
>
> --
> *BENJAMIN VOGAN* | Data Platform Team Lead
>
>


>>>
>>>
>>> --
>>> *BENJAMIN VOGAN* | Data Platform Team Lead
>>>
>>>
>>
>>
>
>
> --
> *BENJAMIN VOGAN* | Data Platform Team Lead
>
>


Re: Showing pandas dataframe with utf8 strings

2017-07-11 Thread Ben Vogan
Here is the specific example that is failing:

import pandas
z.show(pandas.DataFrame([u'Jalape\xf1os.'],[1],['Menu']))
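
Not an official fix, but one workaround sketch while show_dataframe still calls str() on each cell (this assumes Python 2, per the thread): pre-encode unicode values to UTF-8 byte strings before handing the frame to z.show.

import pandas

df = pandas.DataFrame([u'Jalape\xf1os.'], [1], ['Menu'])

# Encode unicode cells to UTF-8 byte strings so str() inside
# show_dataframe no longer trips over the ASCII codec.
safe_df = df.applymap(lambda v: v.encode('utf-8') if isinstance(v, unicode) else v)
z.show(safe_df)

This only sidesteps the str() call; the longer-term fix would presumably be for show_dataframe itself to handle unicode cells.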

On Tue, Jul 11, 2017 at 2:32 PM, Ruslan Dautkhanov 
wrote:

> Hi Ben,
>
> I can't reproduce this
>
> from pyspark.sql.types import *
>> rdd = sc.parallelize([[u'El Niño']])
>> df = sqlc.createDataFrame(
>>   rdd, schema=StructType([StructField("unicode data",
>> StringType(), True)])
>> )
>> df.show()
>> z.show(df)
>
>
> shows unicode character fine.
>
>
>
> --
> Ruslan Dautkhanov
>
> On Tue, Jul 11, 2017 at 11:37 AM, Ben Vogan  wrote:
>
>> Hi Ruslan,
>>
>> I tried adding:
>>
>>  export LC_ALL="en_US.utf8"
>>
>> To my zeppelin-env.sh script and restarted Zeppelin, but I still have the
>> same problem.  The print statement:
>>
>> python -c "print (u'\xf1')"
>>
>> works from the note.  I think the problem is the use of the str
>> function.  Looking at the stack you can see that the zeppelin code is
>> calling body_buf.write(str(cell)).  If you call str(u'\xf1') you will get
>> the error.
>>
>> --Ben
>>
>> On Tue, Jul 11, 2017 at 10:19 AM, Ruslan Dautkhanov > > wrote:
>>
>>> $ env | grep LC
 $
 $ python -c "print (u'\xf1')"
 ñ

>>>
>>>
 $ export LC_ALL="C"
 $ python -c "print (u'\xf1')"
 Traceback (most recent call last):
  File "<string>", line 1, in <module>
 UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in
 position 0: ordinal not in range(128)

>>>
>>>
 $ export LC_ALL="en_US.utf8"
 $ python -c "print (u'\xf1')"
 ñ

>>>
>>>
 $ unset LC_ALL
 $ env | grep LC
 $
 $ python -c "print (u'El Ni\xf1o')"
 El Niño
>>>
>>>
>>> You could add LC_ALL export to your zeppelin-env.sh script.
>>>
>>>
>>>
>>> --
>>> Ruslan Dautkhanov
>>>
>>> On Tue, Jul 11, 2017 at 9:35 AM, Ben Vogan  wrote:
>>>
 Hi all,

 I am trying to use the zeppelin context to show the contents of a
 pandas DataFrame and getting the following error:

 Traceback (most recent call last):
  File "/tmp/zeppelin_python-7554503996532642522.py", line 278, in <module>
    raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_python-7554503996532642522.py", line 271, in <module>
    exec(code)
  File "<stdin>", line 2, in <module>
  File "/tmp/zeppelin_python-7554503996532642522.py", line 93, in show
    self.show_dataframe(p, **kwargs)
  File "/tmp/zeppelin_python-7554503996532642522.py", line 121, in show_dataframe
    body_buf.write(str(cell))
 UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in
 position 79: ordinal not in range(128)

 How do I go about resolving this?

 I'm running version 0.7.1 with python 2.7.

 Thanks,

 --
 *BENJAMIN VOGAN* | Data Platform Team Lead


>>>
>>>
>>
>>
>> --
>> *BENJAMIN VOGAN* | Data Platform Team Lead
>>
>>
>
>


-- 
*BENJAMIN VOGAN* | Data Platform Team Lead




Showing pandas dataframe with utf8 strings

2017-07-11 Thread Ben Vogan
Hi all,

I am trying to use the zeppelin context to show the contents of a pandas
DataFrame and getting the following error:

Traceback (most recent call last):
  File "/tmp/zeppelin_python-7554503996532642522.py", line 278, in <module>
    raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_python-7554503996532642522.py", line 271, in <module>
    exec(code)
  File "<stdin>", line 2, in <module>
  File "/tmp/zeppelin_python-7554503996532642522.py", line 93, in show
    self.show_dataframe(p, **kwargs)
  File "/tmp/zeppelin_python-7554503996532642522.py", line 121, in show_dataframe
    body_buf.write(str(cell))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf1' in
position 79: ordinal not in range(128)

How do I go about resolving this?

I'm running version 0.7.1 with python 2.7.

Thanks,

-- 
*BENJAMIN VOGAN* | Data Platform Team Lead




Re: JDBC use with zeppelin

2017-07-11 Thread darren
Thank you for your response. Very much appreciated!

Get Outlook for Android

From: Ruslan Dautkhanov
Sent: Monday, July 10, 2:29 PM
Subject: Re: JDBC use with zeppelin
To: users

For the Oracle JDBC driver we had to feed ojdbc7.jar
into SPARK_SUBMIT_OPTIONS through the --jars parameter
and into ZEPPELIN_INTP_CLASSPATH_OVERRIDES, like:

zeppelin-env.sh:

export SPARK_SUBMIT_OPTIONS=". . . --jars /var/lib/sqoop/ojdbc7.jar"
export ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/conf:/var/lib/sqoop/ojdbc7.jar

--
Ruslan Dautkhanov

On Mon, Jul 10, 2017 at 12:10 PM,  wrote:

Hi

We want to use a JDBC driver with pyspark through Zeppelin. Not the custom
interpreter, but from sqlContext where we can read into a dataframe.

I added the JDBC driver jar to the Zeppelin spark-submit options "--jars" but
it still says the driver class was not found.

Does it have to reside somewhere else?

Thanks in advance!

Get Outlook for Android
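
For the sqlContext side of this, a minimal sketch of a JDBC read from pyspark; it assumes ojdbc7.jar is already on the classpath via the options above, and the URL, table, and credentials below are placeholders:

# Placeholder connection details; assumes ojdbc7.jar is available to Spark
# through SPARK_SUBMIT_OPTIONS --jars as described above.
df = sqlContext.read.format("jdbc").options(
    url="jdbc:oracle:thin:@//dbhost:1521/ORCL",   # hypothetical host/service
    dbtable="SOME_SCHEMA.SOME_TABLE",             # hypothetical table
    user="app_user",
    password="app_password",
    driver="oracle.jdbc.OracleDriver",
).load()

z.show(df.limit(10))

The --jars option ships the jar to Spark, while ZEPPELIN_INTP_CLASSPATH_OVERRIDES puts it on the Zeppelin interpreter process classpath itself, which is why Ruslan sets both.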