[ https://issues.apache.org/jira/browse/SPARK-34925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17314391#comment-17314391 ]

Hyukjin Kwon commented on SPARK-34925:
--------------------------------------

As the exception indicates, it fails during S3A initialization, which happens in 
the Hadoop library. It's either (most likely) an issue with your environment, or 
an issue in Hadoop.
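For context: a NumberFormatException on the string "100M" during S3A setup 
commonly means an older hadoop-aws / AWS SDK jar on the classpath cannot parse 
Hadoop 3.x's human-readable default for fs.s3a.multipart.size ("100M"). 
Assuming that is the cause here (not confirmed in this report), a frequently 
cited workaround is to set the affected S3A sizes as plain byte values, e.g. in 
spark-defaults.conf:

```
# Sketch of a workaround, assuming a hadoop-aws version mismatch is the cause:
# override the human-readable "100M" defaults with explicit byte counts
# (100M = 104857600 bytes) so older parsers that only accept integers succeed.
spark.hadoop.fs.s3a.multipart.size   104857600
spark.hadoop.fs.s3a.block.size       104857600
```

The cleaner fix is usually to make the hadoop-aws jar version match the Hadoop 
version Spark was built against (here, Hadoop 3.2).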

> Spark shell failed to access external file to read content
> ----------------------------------------------------------
>
>                 Key: SPARK-34925
>                 URL: https://issues.apache.org/jira/browse/SPARK-34925
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Shell
>    Affects Versions: 3.0.0
>            Reporter: czh
>            Priority: Major
>         Attachments: 微信截图_20210401093951.png
>
>
> Spark shell failed to access external file to read content
> Spark version 3.0, Hadoop version 3.2
> Spark starts normally. Entering the Spark shell and running
> val b1 = sc.textFile("s3a://sparktest/a.txt") works fine. But when running
> b1.collect().foreach(println) to traverse and read the contents of the file,
> an error is reported.
> The error message is: java.lang.NumberFormatException: For input string: 
> "100M"



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]