[ https://issues.apache.org/jira/browse/SPARK-20521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15988342#comment-15988342 ]

Apache Spark commented on SPARK-20521:
--------------------------------------

User 'guoxiaolongzte' has created a pull request for this issue:
https://github.com/apache/spark/pull/17798

> The default of 'spark.worker.cleanup.appDataTtl' should be 604800 in
> spark-standalone.md.
> -------------------------------------------------------------------------------------------
>
>                 Key: SPARK-20521
>                 URL: https://issues.apache.org/jira/browse/SPARK-20521
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core, Web UI
>    Affects Versions: 2.1.0
>            Reporter: guoxiaolongzte
>            Priority: Minor
>
> Currently, our project needs the worker directory cleanup cycle to be 
> three days.
> Following http://spark.apache.org/docs/latest/spark-standalone.html, I set 
> the 'spark.worker.cleanup.appDataTtl' parameter to 3 * 24 * 3600.
> When I start the Spark service, startup fails, and the worker log shows 
> the following error:
> 2017-04-28 15:02:03,306 INFO Utils: Successfully started service 
> 'sparkWorker' on port 48728.
> Exception in thread "main" java.lang.NumberFormatException: For input string: 
> "3 * 24 * 3600"
> at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
> at java.lang.Long.parseLong(Long.java:430)
> at java.lang.Long.parseLong(Long.java:483)
> at scala.collection.immutable.StringLike$class.toLong(StringLike.scala:276)
> at scala.collection.immutable.StringOps.toLong(StringOps.scala:29)
> at org.apache.spark.SparkConf$$anonfun$getLong$2.apply(SparkConf.scala:380)
> at org.apache.spark.SparkConf$$anonfun$getLong$2.apply(SparkConf.scala:380)
> at scala.Option.map(Option.scala:146)
> at org.apache.spark.SparkConf.getLong(SparkConf.scala:380)
> at org.apache.spark.deploy.worker.Worker.<init>(Worker.scala:100)
> at org.apache.spark.deploy.worker.Worker$.startRpcEnvAndEndpoint(Worker.scala:730)
> at org.apache.spark.deploy.worker.Worker$.main(Worker.scala:709)
> at org.apache.spark.deploy.worker.Worker.main(Worker.scala)
> Because the value is passed as a string and then forcibly converted to a 
> Long, an expression like 7 * 24 * 3600 breaks the program.
> So I think the default value shown for this configuration should be the 
> concrete long value 604800, rather than the expression 7 * 24 * 3600, 
> because the expression misleads users into writing similar configurations, 
> which causes Spark startup to fail.
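For reference, the standalone docs configure the worker cleanup properties through SPARK_WORKER_OPTS in conf/spark-env.sh. A sketch of a working three-day TTL, with the seconds written out as a literal (3 * 24 * 3600 = 259200) because the worker parses the raw string with toLong:

```shell
# conf/spark-env.sh (sketch): the TTL must be a literal number of seconds,
# not an arithmetic expression; 3 days = 3 * 24 * 3600 = 259200 seconds.
export SPARK_WORKER_OPTS="-Dspark.worker.cleanup.enabled=true -Dspark.worker.cleanup.appDataTtl=259200"
```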



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
