Samuel Alexander created ZEPPELIN-327:
-----------------------------------------

             Summary: Accessing S3 fails with java.lang.VerifyError
                 Key: ZEPPELIN-327
                 URL: https://issues.apache.org/jira/browse/ZEPPELIN-327
             Project: Zeppelin
          Issue Type: Bug
          Components: Core, Interpreters
    Affects Versions: 0.6.0
         Environment: Zeppelin built from github, Spark 1.4 and Hadoop 2.6.0
            Reporter: Samuel Alexander
             Fix For: 0.6.0


I am trying to do some basic analytics using Spark and Zeppelin.

I've set up the Spark cluster using the steps in [spark-ec2](http://spark.apache.org/docs/latest/ec2-scripts.html).
I've also set up Zeppelin on EC2 using the steps in this [blog](http://christopher5106.github.io/big/data/2015/07/03/iPython-Jupyter-Spark-Notebook-and-Zeppelin-comparison-for-big-data-in-scala-and-python-for-spark-clusters.html).

I've added the libraries I want to use with the below code in a Zeppelin notebook:

%dep
z.reset()

// Add spark-csv package
z.load("com.databricks:spark-csv_2.10:1.2.0")

// Add jars required for s3 access
z.load("org.apache.hadoop:hadoop-aws:2.6.0")

And the below code reads CSV files from S3:

sc.hadoopConfiguration.set("fs.s3n.impl","org.apache.hadoop.fs.s3native.NativeS3FileSystem")
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId","XXX")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey","XXX")

val path = "s3n://XXX/XXX.csv"
val df = sqlContext.read
  .format("com.databricks.spark.csv")
  .option("header", "true")
  .load(path)

I am getting the below exception

java.lang.VerifyError: Bad type on operand stack
Exception Details:
  Location:
    org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore.initialize(Ljava/net/URI;Lorg/apache/hadoop/conf/Configuration;)V @38: invokespecial
  Reason:
    Type 'org/jets3t/service/security/AWSCredentials' (current frame, stack[3]) is not assignable to 'org/jets3t/service/security/ProviderCredentials'
  Current Frame:
    bci: @38
    flags: { }
    locals: { 'org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore', 'java/net/URI', 'org/apache/hadoop/conf/Configuration', 'org/apache/hadoop/fs/s3/S3Credentials', 'org/jets3t/service/security/AWSCredentials' }
    stack: { 'org/apache/hadoop/fs/s3native/Jets3tNativeFileSystemStore', uninitialized 32, uninitialized 32, 'org/jets3t/service/security/AWSCredentials' }
  Bytecode:
    0000000: bb00 0259 b700 034e 2d2b 2cb6 0004 bb00
    0000010: 0559 2db6 0006 2db6 0007 b700 083a 042a
    0000020: bb00 0959 1904 b700 0ab5 000b a700 0b3a
    0000030: 042a 1904 b700 0d2a 2c12 0e03 b600 0fb5
    0000040: 0010 2a2c 1211 1400 12b6 0014 1400 15b8
    0000050: 0017 b500 182a 2c12 1914 0015 b600 1414
    0000060: 0015 b800 17b5 001a 2a2c 121b b600 1cb5
    0000070: 001d 2abb 001e 592b b600 1fb7 0020 b500
    0000080: 21b1
  Exception Handler Table:
    bci [14, 44] => handler: 47
  Stackmap Table:
    full_frame(@47,{Object[#191],Object[#192],Object[#193],Object[#194]},{Object[#195]})
    same_frame(@55)

        at org.apache.hadoop.fs.s3native.NativeS3FileSystem.createDefaultStore(NativeS3FileSystem.java:334)
        at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:324)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2596)
        at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
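
If it helps with triage, a small check in the notebook can show which jar the conflicting jets3t class is actually loaded from (this is a hypothetical diagnostic, not part of the failure above; my assumption is that an older jets3t, e.g. one bundled with the Spark assembly, is shadowing the 0.9.x version Hadoop 2.6.0's s3n code was compiled against):

// Print the jar that provides org.jets3t.service.security.AWSCredentials,
// to see whether an old jets3t version is being picked up at runtime
println(classOf[org.jets3t.service.security.AWSCredentials]
  .getProtectionDomain.getCodeSource.getLocation)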

        
I've built Zeppelin against Hadoop 2.6.0 with the command
mvn install -DskipTests -Dspark.version=1.4.0 -Dhadoop.version=2.6.0
and I am running against the local (embedded) Spark.
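
In case the build profiles matter, the equivalent build using Zeppelin's Spark/Hadoop profiles would, I believe, look roughly like this (the profile names are my assumption from the build docs; I used the plain command above):

mvn clean package -Pspark-1.4 -Phadoop-2.6 -Dhadoop.version=2.6.0 -DskipTests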


I've also tried setting the Spark home to an external Spark installation (Spark 1.4 built against Hadoop 2.6.0), but I still get the same error.
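
For reference, pointing Zeppelin at an external Spark is a matter of setting SPARK_HOME, e.g. in conf/zeppelin-env.sh (the path below is just a placeholder):

# conf/zeppelin-env.sh
# Point Zeppelin at the external Spark 1.4 / Hadoop 2.6.0 installation
export SPARK_HOME=/path/to/spark-1.4.0-bin-hadoop2.6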


