If you use Scala, you can set it when building the SparkConf:

  val conf = new SparkConf()
    .setMaster("yarn-client")
    .setAppName("Logistic regression SGD fixed")
    .set("spark.akka.frameSize", "100")
    .setExecutorEnv("SPARK_JAVA_OPTS", "-Dspark.akka.frameSize=100")
  val sc = new SparkContext(conf)
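If you'd rather not hard-code it, the same setting can be passed on the command line via spark-submit's --conf flag (a sketch, assuming a 1.x-era Spark where spark.akka.frameSize is still in use; the jar name and class are placeholders):

  # Pass the frame size at submit time instead of in code;
  # the value is in MB, so 100 means a 100 MB max message size.
  spark-submit \
    --master yarn-client \
    --conf spark.akka.frameSize=100 \
    --class com.example.LogisticRegressionSGD \
    my-app.jar

Either way takes effect only at application startup; changing the value on a running context has no effect.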


I have been struggling with this too. I was trying to run Spark on the
KDDB dataset, which has about 29M features, and the job implodes and
dies. Let me know if you figure out how to get things to work well on
really, really wide datasets.

Regards,
Krishna

On Mon, Jul 14, 2014 at 10:18 AM, crater <cq...@ucmerced.edu> wrote:
> Hi xiangrui,
>
>
> Where can I set the "spark.akka.frameSize" ?
>
>
>
> --
> View this message in context: 
> http://apache-spark-user-list.1001560.n3.nabble.com/Error-when-testing-with-large-sparse-svm-tp9592p9616.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
