[jira] [Assigned] (SPARK-33618) hadoop-aws doesn't work
[ https://issues.apache.org/jira/browse/SPARK-33618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hyukjin Kwon reassigned SPARK-33618:
------------------------------------

    Assignee: Dongjoon Hyun

> hadoop-aws doesn't work
> -----------------------
>
>                 Key: SPARK-33618
>                 URL: https://issues.apache.org/jira/browse/SPARK-33618
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 3.1.0
>            Reporter: Dongjoon Hyun
>            Assignee: Dongjoon Hyun
>            Priority: Blocker
>
> According to [HADOOP-16080](https://issues.apache.org/jira/browse/HADOOP-16080), since Apache Hadoop 3.1.1 `hadoop-aws` doesn't work with `hadoop-client-api`. In other words, the regression is that `dev/make-distribution.sh -Phadoop-cloud ...` doesn't make a complete distribution for cloud support. It fails at write operations like the following.
> {code}
> $ bin/spark-shell --conf spark.hadoop.fs.s3a.access.key=$AWS_ACCESS_KEY_ID --conf spark.hadoop.fs.s3a.secret.key=$AWS_SECRET_ACCESS_KEY
> 20/11/30 23:01:24 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
> Setting default log level to "WARN".
> To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
> Spark context available as 'sc' (master = local[*], app id = local-1606806088715).
> Spark session available as 'spark'.
> Welcome to
>       ____              __
>      / __/__  ___ _____/ /__
>     _\ \/ _ \/ _ `/ __/  '_/
>    /___/ .__/\_,_/_/ /_/\_\   version 3.1.0-SNAPSHOT
>       /_/
>
> Using Scala version 2.12.10 (OpenJDK 64-Bit Server VM, Java 1.8.0_272)
> Type in expressions to have them evaluated.
> Type :help for more information.
>
> scala> spark.read.parquet("s3a://dongjoon/users.parquet").show
> 20/11/30 23:01:34 WARN MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-s3a-file-system.properties,hadoop-metrics2.properties
> +------+--------------+----------------+
> |  name|favorite_color|favorite_numbers|
> +------+--------------+----------------+
> |Alyssa|          null|  [3, 9, 15, 20]|
> |   Ben|           red|              []|
> +------+--------------+----------------+
>
> scala> Seq(1).toDF.write.parquet("s3a://dongjoon/out.parquet")
> 20/11/30 23:02:14 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 2)/ 1]
> java.lang.NoSuchMethodError: org.apache.hadoop.util.SemaphoredDelegatingExecutor.<init>(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> {code}
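For reference, the `NoSuchMethodError` above is a link-time mismatch: `hadoop-aws` is compiled against a `SemaphoredDelegatingExecutor` constructor taking an unshaded `com.google.common.util.concurrent.ListeningExecutorService`, while the copy of that class bundled in `hadoop-client-api` presumably exposes a different parameter type because the client artifacts relocate their Guava dependency. The snippet below is a minimal diagnostic sketch, not part of the original report; it assumes only that the class is visible on the driver classpath and uses plain Java reflection to show which constructor signatures are actually loaded.

{code}
// Diagnostic sketch: paste into the same spark-shell session.
// It prints where SemaphoredDelegatingExecutor was loaded from and the
// parameter types of its constructors, so you can see whether the
// ListeningExecutorService argument comes from com.google.common or
// from a relocated (shaded) package.
val clazz = Class.forName("org.apache.hadoop.util.SemaphoredDelegatingExecutor")
println(s"loaded from: ${clazz.getProtectionDomain.getCodeSource}")
clazz.getDeclaredConstructors.foreach { c =>
  // hadoop-aws expects (com.google.common...ListeningExecutorService, int, boolean)
  println(c.getParameterTypes.map(_.getName).mkString("(", ", ", ")"))
}
{code}

In a distribution built with `-Phadoop-cloud` that hits this bug, the printed parameter type should come from a shaded package rather than `com.google.common`, which matches the descriptor in the stack trace.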
[jira] [Assigned] (SPARK-33618) hadoop-aws doesn't work
[ https://issues.apache.org/jira/browse/SPARK-33618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-33618:
------------------------------------

    Assignee: (was: Apache Spark)

(Issue description unchanged; see the first message above.)
[jira] [Assigned] (SPARK-33618) hadoop-aws doesn't work
[ https://issues.apache.org/jira/browse/SPARK-33618?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Apache Spark reassigned SPARK-33618:
------------------------------------

    Assignee: Apache Spark

(Issue description unchanged; see the first message above.)