In my case, I'm adding these lines to zeppelin/conf/zeppelin-env.sh:

export AWS_ACCESS_KEY_ID="XXXXXXXXXXXXX"
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
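Alternatively, the error message itself points at the fs.s3n.* properties, so you should also be able to set them on the Spark context's Hadoop configuration from a notebook paragraph. A minimal sketch (assuming the stock Zeppelin Spark interpreter, where sc is the injected SparkContext; the placeholder key values are yours to fill in):

// Hypothetical workaround sketch: set the s3n credential properties
// named in the exception directly on the SparkContext's Hadoop config.
sc.hadoopConfiguration.set("fs.s3n.awsAccessKeyId", "XXXXXXXXXXXXX")
sc.hadoopConfiguration.set("fs.s3n.awsSecretAccessKey", "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")

// After that, the original read should resolve credentials:
val lines = sc.textFile("s3n://mys3bucket/myfile").count

Note this keeps the keys out of zeppelin-env.sh but puts them in the notebook, so use whichever fits your setup.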
On Thu Feb 19 2015 at 7:41:52 AM Raju Uppalapati <[email protected]> wrote:

> Hi,
> I am using the Zeppelin binaries built from source as of today, and
> Zeppelin's Spark context is not loading the credentials in my
> core-sites.xml. Hence it's unable to read/write to S3.
> Anyone else seeing this problem? Any suggestions/workarounds?
>
> val lines = sc.textFile("s3n://mys3bucket/myfile").count
> java.lang.IllegalArgumentException:
> AWS Access Key ID and Secret Access Key must be specified as the username
> or password (respectively) of a s3n URL, or by setting the
> fs.s3n.awsAccessKeyId or fs.s3n.awsSecretAccessKey properties (respectively).
>   at org.apache.hadoop.fs.s3.S3Credentials.initialize(S3Credentials.java:70)
>   at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.initialize(Jets3tNativeFileSystemStore.java:73)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   ...
>   ...
>   at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:256)
>   at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
>   at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:304)
>   at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:201)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
>   at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:205)
>   at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:203)
>   at scala.Option.getOrElse(Option.scala:120)
>   at org.apache.spark.rdd.RDD.partitions(RDD.scala:203)
>   at org.apache.spark.SparkContext.runJob(SparkContext.scala:1328)
>   at org.apache.spark.rdd.RDD.count(RDD.scala:910)
>
> thanks,
> _raju
