Hi,

I am new to Zeppelin and encountered some strange behavior. When I copied my
working Scala code into a notebook, I got errors from the Spark
interpreter saying it could not find some types. Strangely, the code
worked when I used the FQCN instead of the simple class name.
But since I want to create a workflow where I write Scala in my IDE and
then transfer it to a notebook, I'd prefer not to be forced to use FQCNs.

Here's an example:


| %spark.dep
| z.reset()
| z.load("org.deeplearning4j:deeplearning4j-core:0.9.1")
| z.load("org.nd4j:nd4j-native-platform:0.9.1")

res0: org.apache.zeppelin.dep.Dependency =
org.apache.zeppelin.dep.Dependency@2e10d1e4

| import org.datavec.api.records.reader.impl.FileRecordReader
|
| class Test extends FileRecordReader {
| }
|
| val t = new Test()

import org.datavec.api.records.reader.impl.FileRecordReader
<console>:12: error: not found: type FileRecordReader
class Test extends FileRecordReader {
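
For comparison, here is the variant that does compile for me, spelling out the
FQCN in the extends clause (this is the workaround I'd like to avoid):

| import org.datavec.api.records.reader.impl.FileRecordReader
|
| // same class as above, but with the fully qualified name in the extends clause
| class Test extends org.datavec.api.records.reader.impl.FileRecordReader {
| }
|
| val t = new Test()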

Thanks, Marcus
