Re: Spark Interpreter error: 'not found: type'

2018-03-19 Thread Jeff Zhang
I tried it in the master branch; it looks like it fails to download the
dependencies, and it also fails when I use spark-submit directly. So it
should not be a Zeppelin issue; please check these two dependencies.

Exception in thread "main" java.lang.RuntimeException: problem during retrieve of org.apache.spark#spark-submit-parent: java.lang.RuntimeException: Multiple artifacts of the module org.bytedeco.javacpp-presets#openblas;0.2.19-1.3 are retrieved to the same file! Update the retrieve pattern to fix this error.
    at org.apache.ivy.core.retrieve.RetrieveEngine.retrieve(RetrieveEngine.java:249)
    at org.apache.ivy.core.retrieve.RetrieveEngine.retrieve(RetrieveEngine.java:83)
    at org.apache.ivy.Ivy.retrieve(Ivy.java:551)
    at org.apache.spark.deploy.SparkSubmitUtils$.resolveMavenCoordinates(SparkSubmit.scala:1200)
    at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:304)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:153)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.RuntimeException: Multiple artifacts of the module org.bytedeco.javacpp-presets#openblas;0.2.19-1.3 are retrieved to the same file! Update the retrieve pattern to fix this error.
    at org.apache.ivy.core.retrieve.RetrieveEngine.determineArtifactsToCopy(RetrieveEngine.java:417)
    at org.apache.ivy.core.retrieve.RetrieveEngine.retrieve(RetrieveEngine.java:118)
    ... 7 more

On the Zeppelin side, this surfaces as:

    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterManagedProcess.start(RemoteInterpreterManagedProcess.java:205)
    at org.apache.zeppelin.interpreter.ManagedInterpreterGroup.getOrCreateInterpreterProcess(ManagedInterpreterGroup.java:65)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getOrCreateInterpreterProcess(RemoteInterpreter.java:105)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:158)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:126)
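
If anyone needs to keep experimenting in Zeppelin while this is open: the Ivy message suggests the platform-classifier artifacts of openblas (pulled in transitively, apparently by nd4j-native-platform) are all retrieved to the same file. A hedged sketch of excluding the conflicting module via %spark.dep follows; the exclude coordinates are my assumption from the message above, and dropping openblas may well break native BLAS at runtime, so this only illustrates the exclude mechanism:

| %spark.dep
| z.reset()
| // exclude the module Ivy cannot retrieve to distinct files (assumed coordinates)
| z.load("org.nd4j:nd4j-native-platform:0.9.1").exclude("org.bytedeco.javacpp-presets:openblas")
| z.load("org.deeplearning4j:deeplearning4j-core:0.9.1")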

On Tue, Mar 20, 2018 at 8:01 AM, Marcus wrote:

> Hi Karan,
>
> thanks for your hint, and sorry for the late response. I've tried the
> import using _root_ as suggested on Stack Overflow, but it didn't change
> anything. The import statement itself runs fine; the error occurs when
> using the class name.
>
> As for datavec-api, it is a transitive dependency of deeplearning4j-core,
> which is loaded using %spark.dep. I also added it to the interpreter
> settings as a dependency, which made no difference.
>
> Regards, Marcus
>
> On Wed, Mar 14, 2018 at 1:56 PM, Karan Sewani wrote:
>
>> Hello Marcus
>>
>>
>> Maybe it has something to do with
>>
>>
>> https://stackoverflow.com/questions/13008792/how-to-import-class-using-fully-qualified-name
>>
>>
>>
>> I have implemented user-defined functions in Spark and used them in my
>> code, with the jar loaded on the classpath, and I didn't have any issues
>> with imports.
>>
>>
>> Can you give me an idea of how you are loading this datavec-api jar for
>> Zeppelin or spark-submit to access?
>>
>>
>> Best
>>
>> Karan
>> --
>> *From:* Marcus 
>> *Sent:* Saturday, March 10, 2018 10:43:25 AM
>> *To:* users@zeppelin.apache.org
>> *Subject:* Spark Interpreter error: 'not found: type'
>>
>> Hi,
>>
>> I am new to Zeppelin and encountered a strange behavior. When copying my
>> working Scala code to a notebook, I got errors from the Spark interpreter
>> saying it could not find some types. Strangely, the code worked when I
>> used the fully qualified class name (FQCN) instead of the simple name.
>> But since I want to create a workflow where I write Scala in my IDE and
>> transfer it to a notebook, I'd prefer not to be forced to use FQCNs.
>>
>> Here's an example:
>>
>>
>> | %spark.dep
>> | z.reset()
>> | z.load("org.deeplearning4j:deeplearning4j-core:0.9.1")
>> | z.load("org.nd4j:nd4j-native-platform:0.9.1")
>>
>> res0: org.apache.zeppelin.dep.Dependency = org.apache.zeppelin.dep.Dependency@2e10d1e4
>>
>> | import org.datavec.api.records.reader.impl.FileRecordReader
>> |
>> | class Test extends FileRecordReader {
>> | }
>> |
>> | val t = new Test()
>>
>> import org.datavec.api.records.reader.impl.FileRecordReader
>> :12: error: not found: type FileRecordReader
>> class Test extends FileRecordReader {
>>
>> Thanks, Marcus
>>
>
>


Re: Spark Interpreter error: 'not found: type'

2018-03-19 Thread Marcus
Hi Karan,

thanks for your hint, and sorry for the late response. I've tried the
import using _root_ as suggested on Stack Overflow, but it didn't change
anything. The import statement itself runs fine; the error occurs when
using the class name.
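
To make this concrete, a minimal sketch of the two forms (the simple name
fails in the extends clause, while the fully qualified name compiles):

| import org.datavec.api.records.reader.impl.FileRecordReader
|
| class Test extends FileRecordReader {}  // error: not found: type FileRecordReader
| class Test2 extends org.datavec.api.records.reader.impl.FileRecordReader {}  // works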

As for datavec-api, it is a transitive dependency of deeplearning4j-core,
which is loaded using %spark.dep. I also added it to the interpreter
settings as a dependency, which made no difference.
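
For reference, this is roughly what I added in the interpreter settings
(Interpreter > spark > Dependencies); the coordinates are a sketch, with
the version assumed to match deeplearning4j-core 0.9.1:

| artifact: org.datavec:datavec-api:0.9.1
| exclude:  (empty)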

Regards, Marcus

On Wed, Mar 14, 2018 at 1:56 PM, Karan Sewani wrote:

> Hello Marcus
>
>
> Maybe it has something to do with
>
> https://stackoverflow.com/questions/13008792/how-to-import-class-using-fully-qualified-name
>
>
>
> I have implemented user-defined functions in Spark and used them in my
> code, with the jar loaded on the classpath, and I didn't have any issues
> with imports.
>
>
> Can you give me an idea of how you are loading this datavec-api jar for
> Zeppelin or spark-submit to access?
>
>
> Best
>
> Karan
> --
> *From:* Marcus 
> *Sent:* Saturday, March 10, 2018 10:43:25 AM
> *To:* users@zeppelin.apache.org
> *Subject:* Spark Interpreter error: 'not found: type'
>
> Hi,
>
> I am new to Zeppelin and encountered a strange behavior. When copying my
> working Scala code to a notebook, I got errors from the Spark interpreter
> saying it could not find some types. Strangely, the code worked when I
> used the fully qualified class name (FQCN) instead of the simple name.
> But since I want to create a workflow where I write Scala in my IDE and
> transfer it to a notebook, I'd prefer not to be forced to use FQCNs.
>
> Here's an example:
>
>
> | %spark.dep
> | z.reset()
> | z.load("org.deeplearning4j:deeplearning4j-core:0.9.1")
> | z.load("org.nd4j:nd4j-native-platform:0.9.1")
>
> res0: org.apache.zeppelin.dep.Dependency = org.apache.zeppelin.dep.Dependency@2e10d1e4
>
> | import org.datavec.api.records.reader.impl.FileRecordReader
> |
> | class Test extends FileRecordReader {
> | }
> |
> | val t = new Test()
>
> import org.datavec.api.records.reader.impl.FileRecordReader
> :12: error: not found: type FileRecordReader
> class Test extends FileRecordReader {
>
> Thanks, Marcus
>


Re: "IPython is available, use IPython for PySparkInterpreter"

2018-03-19 Thread andreas . weise
I already filed an issue; check out
https://issues.apache.org/jira/browse/ZEPPELIN-3290

Jeff Zhang wanted to wait for feedback from other users there.
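
For anyone landing here in the meantime, the knob in question lives in the
Spark interpreter settings. A hedged sketch of its two values (whether
turning it off also silences the warning is exactly what the ticket tracks):

| zeppelin.pyspark.useIPython = true   # %pyspark delegates to IPython when it is available
| zeppelin.pyspark.useIPython = false  # force the vanilla Python interpreter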

On 2018/03/19 18:10:48, Ruslan Dautkhanov  wrote: 
> We're getting " IPython is available, use IPython for PySparkInterpreter "
> warning each time we start %pyspark notebooks.
> 
> Although there is no difference between %pyspark and %ipyspark afaik.
> At least we can use all ipython magic commands etc.
> (maybe becase we have zeppelin.pyspark.useIPython=true?)
> 
> If that's the case, how we can disable "IPython is available, use IPython
> for PySparkInterpreter" warning ?
> 
> 
> -- 
> Ruslan Dautkhanov
> 


"IPython is available, use IPython for PySparkInterpreter"

2018-03-19 Thread Ruslan Dautkhanov
We're getting " IPython is available, use IPython for PySparkInterpreter "
warning each time we start %pyspark notebooks.

Although there is no difference between %pyspark and %ipyspark afaik.
At least we can use all ipython magic commands etc.
(maybe becase we have zeppelin.pyspark.useIPython=true?)

If that's the case, how we can disable "IPython is available, use IPython
for PySparkInterpreter" warning ?


-- 
Ruslan Dautkhanov