[ https://issues.apache.org/jira/browse/SPARK-22077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172889#comment-16172889 ]

Sean Owen commented on SPARK-22077:
-----------------------------------

I'm not sure whether IPv6 genuinely doesn't work here in general, or whether the
parsing is just overly strict. You can see the code that parses the URI: the URI
itself parses successfully, but the subsequent check rejects it because the host,
port, and endpoint name come back empty.
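
For illustration, here is a minimal sketch (plain java.net.URI behaviour, not the
Spark code itself; the object name is made up) showing how the unbracketed IPv6
authority from this report comes back with a null host, port -1 and null user-info,
while the RFC 2732 bracketed form parses as expected:

    import java.net.URI

    // Hypothetical demo object, not part of Spark.
    object SparkUrlIpv6Demo {
      // Print what java.net.URI extracts from a spark:// URL.
      def describe(sparkUrl: String): Unit = {
        val uri = new URI(sparkUrl)
        println(s"$sparkUrl -> host=${uri.getHost}, port=${uri.getPort}, name=${uri.getUserInfo}")
      }

      def main(args: Array[String]): Unit = {
        // Unbracketed IPv6 literal from this report: the constructor succeeds, but the
        // authority cannot be split, so host/port/user-info come back as null / -1 / null.
        describe("spark://HeartbeatReceiver@2401:db00:2111:40a1:face:0:21:0:35243")
        // RFC 2732 bracketed form: host, port and user-info are all extracted.
        describe("spark://HeartbeatReceiver@[2401:db00:2111:40a1:face:0:21:0]:35243")
        // Hostname form from the report, which parses fine.
        describe("spark://HeartbeatReceiver@localhost:55691")
      }
    }

In other words, the URI constructor does not reject the string; it just cannot split
an unbracketed IPv6 authority into host and port.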

> RpcEndpointAddress fails to parse a Spark URL if it is an IPv6 address.
> ---------------------------------------------------------------------
>
>                 Key: SPARK-22077
>                 URL: https://issues.apache.org/jira/browse/SPARK-22077
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy
>    Affects Versions: 2.0.0
>            Reporter: Eric Vandenberg
>            Priority: Minor
>
> RpcEndpointAddress fails to parse a Spark URL if it is an IPv6 address.
> For example, 
> sparkUrl = "spark://HeartbeatReceiver@2401:db00:2111:40a1:face:0:21:0:35243"
> is parsed as:
> host = null
> port = -1
> name = null
> In contrast, sparkUrl = "spark://HeartbeatReceiver@localhost:55691" is parsed properly.
> This is happening on our production machines and is causing Spark to not start up.
> org.apache.spark.SparkException: Invalid Spark URL: spark://HeartbeatReceiver@2401:db00:2111:40a1:face:0:21:0:35243
>       at org.apache.spark.rpc.RpcEndpointAddress$.apply(RpcEndpointAddress.scala:65)
>       at org.apache.spark.rpc.netty.NettyRpcEnv.asyncSetupEndpointRefByURI(NettyRpcEnv.scala:133)
>       at org.apache.spark.rpc.RpcEnv.setupEndpointRefByURI(RpcEnv.scala:88)
>       at org.apache.spark.rpc.RpcEnv.setupEndpointRef(RpcEnv.scala:96)
>       at org.apache.spark.util.RpcUtils$.makeDriverRef(RpcUtils.scala:32)
>       at org.apache.spark.executor.Executor.<init>(Executor.scala:121)
>       at org.apache.spark.scheduler.local.LocalEndpoint.<init>(LocalSchedulerBackend.scala:59)
>       at org.apache.spark.scheduler.local.LocalSchedulerBackend.start(LocalSchedulerBackend.scala:126)
>       at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:173)
>       at org.apache.spark.SparkContext.<init>(SparkContext.scala:507)
>       at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2283)
>       at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:833)
>       at org.apache.spark.sql.SparkSession$Builder$$anonfun$8.apply(SparkSession.scala:825)
>       at scala.Option.getOrElse(Option.scala:121)
>       at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:825)
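
The SparkException in the trace above is thrown by the validation in
RpcEndpointAddress.apply (RpcEndpointAddress.scala:65 in this build). The sketch
below is only an approximation of the shape of that check, not the actual Spark
source, and it assumes spark-core on the classpath for SparkException:

    import java.net.URI
    import org.apache.spark.SparkException // assumes spark-core on the classpath

    // Approximation only; names and structure are not copied from the Spark source.
    object InvalidSparkUrlSketch {
      def parse(sparkUrl: String): (String, Int, String) = {
        val uri  = new URI(sparkUrl)
        val host = uri.getHost      // null for an unbracketed IPv6 authority
        val port = uri.getPort      // -1 in the same case
        val name = uri.getUserInfo  // null in the same case
        if (uri.getScheme != "spark" || host == null || port < 0 || name == null) {
          // This is the failure mode shown in the trace above.
          throw new SparkException("Invalid Spark URL: " + sparkUrl)
        }
        (host, port, name)
      }
    }

With the unbracketed IPv6 URL above, host, port and name all fail such a check,
which is what surfaces as the error at startup.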


