[ 
https://issues.apache.org/jira/browse/SPARK-11909?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15021186#comment-15021186
 ] 

Jacek Laskowski commented on SPARK-11909:
-----------------------------------------

What about a WARN message about the port in use when connecting to a Spark 
Standalone master, for users like me who would rather have less to remember and 
type? It'd be a nice time saver. That would at the _very_ least spare the 
"recommendation" at 
http://spark.apache.org/docs/latest/spark-standalone.html#starting-a-cluster-manually
 which is actually false (the master doesn't print out the URL to the 
console once started):

_Once started, the master will print out a spark://HOST:PORT URL for itself, 
which you can use to connect workers to it, or pass as the “master” argument to 
SparkContext. You can also find this URL on the master’s web UI, which is 
http://localhost:8080 by default._

> Spark Standalone's master URL accepts URLs without port (assuming default 
> 7077)
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-11909
>                 URL: https://issues.apache.org/jira/browse/SPARK-11909
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.6.0
>            Reporter: Jacek Laskowski
>            Priority: Trivial
>
> It's currently impossible to use a {{spark://localhost}} URL for Spark 
> Standalone's master. With this feature supported, there would be less to know 
> to get started with the mode (and hence improved user friendliness).
> I think a no-port master URL should be supported, assuming the default port 
> {{7077}}.
> {code}
> org.apache.spark.SparkException: Invalid master URL: spark://localhost
>       at 
> org.apache.spark.util.Utils$.extractHostPortFromSparkUrl(Utils.scala:2088)
>       at org.apache.spark.rpc.RpcAddress$.fromSparkURL(RpcAddress.scala:47)
>       at 
> org.apache.spark.deploy.client.AppClient$$anonfun$1.apply(AppClient.scala:48)
>       at 
> org.apache.spark.deploy.client.AppClient$$anonfun$1.apply(AppClient.scala:48)
>       at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>       at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
>       at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
>       at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
>       at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
>       at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
>       at org.apache.spark.deploy.client.AppClient.<init>(AppClient.scala:48)
>       at 
> org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend.start(SparkDeploySchedulerBackend.scala:93)
>       at 
> org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
>       at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
> {code}
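A minimal sketch of the proposed defaulting behaviour. This is not Spark's actual implementation; the object name {{SparkUrlSketch}}, the constant {{DefaultMasterPort}}, and the method {{extractHostPort}} are hypothetical names chosen for illustration:

```scala
// Hypothetical sketch of parsing a spark:// master URL where a missing
// port falls back to Spark Standalone's default master port 7077.
// Names below are illustrative, not Spark's real API.
object SparkUrlSketch {
  val DefaultMasterPort = 7077 // Spark Standalone's default master port

  // Returns (host, port), defaulting the port when the URL omits it.
  def extractHostPort(sparkUrl: String): (String, Int) = {
    require(sparkUrl.startsWith("spark://"), s"Invalid master URL: $sparkUrl")
    sparkUrl.stripPrefix("spark://").split(":") match {
      case Array(host)       => (host, DefaultMasterPort) // no port given
      case Array(host, port) => (host, port.toInt)        // explicit port
      case _ =>
        throw new IllegalArgumentException(s"Invalid master URL: $sparkUrl")
    }
  }
}
```

With this in place, {{SparkUrlSketch.extractHostPort("spark://localhost")}} would yield {{("localhost", 7077)}} instead of throwing, while explicit-port URLs keep behaving as today.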



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
