I do use Spark 1.5.0 and Apache Hadoop 2.6.0 (Spark 1.4.1 + Apache Hadoop 2.6.0
was a typo), sorry.
Thanks,
Allen
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: December 15, 2015 22:59
To: 张志强(旺轩)
Cc: Saisai Shao; dev
Subject: Re: spark with label nodes in yarn
Please upgrade to Spark 1.5.x
containers by setting the
spark.yarn.executor.nodeLabelExpression property. My question: will
https://issues.apache.org/jira/browse/SPARK-7173 fix this?
Thanks
Allen
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: December 15, 2015 17:39
To: 张志强(旺轩)
Cc: dev@spark.apache.org
Subject: Re: spark with label nodes in yarn
Hi all,
Has anyone tried label-based scheduling via Spark on YARN? I've tried it and
it didn't work (Spark 1.4.1 + Apache Hadoop 2.6.0).
Any feedback is welcome.
Thanks
Allen
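For reference, node-label-based executor placement is requested through the spark.yarn.executor.nodeLabelExpression property (the subject of SPARK-6470/SPARK-7173). A minimal sketch, assuming YARN 2.6+ with node labels already configured on the cluster and an illustrative label name "gpu":

```
# Config fragment (sketch): the label "gpu" and the application class/jar
# names are illustrative, not from the original thread.
spark-submit \
  --master yarn \
  --conf spark.yarn.executor.nodeLabelExpression=gpu \
  --class com.example.MyApp myapp.jar
```

Note that this property only takes effect on Spark 1.5+; on earlier versions it is silently ignored, which matches the behavior reported above for 1.4.1.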
Yes,
We already use ALS in our production environment, and we would also like to
try SVD++, but it has no Python interface.
Any ideas? Thanks
-Allen
From: Yanbo Liang [mailto:yblia...@gmail.com]
Sent: December 3, 2015 10:30
To: 张志强(旺轩)
Cc: dev@spark.apache.org
Subject: Re: query on SVD++
You mean
Hi All,
I came across the SVD++ algorithm implementation in the Spark code base, and I
was wondering why the Scala API isn't exposed to Python.
Any plans to do this?
BR,
-Allen Zhang
What's your Spark version?
From: wyphao.2007 [mailto:wyphao.2...@163.com]
Sent: November 26, 2015 10:04
To: user
Cc: dev@spark.apache.org
Subject: Spark checkpoint problem
I am testing checkpoint to understand how it works. My code is as follows:
scala> val data = sc.parallelize(List("a", "b",
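A complete minimal checkpoint session might look like the following sketch (a spark-shell transcript; the truncated parallelize call above is continued with illustrative data, and /tmp/checkpoint is an assumed directory):

```
// spark-shell sketch; assumes the implicit SparkContext `sc` (Spark 1.5.x)
sc.setCheckpointDir("/tmp/checkpoint")          // must be set before checkpointing
val data = sc.parallelize(List("a", "b", "c"))  // illustrative data
data.checkpoint()                                // mark the RDD for checkpointing
data.count()                                     // an action triggers the actual write
// after the action, data.isCheckpointed returns true and the lineage is truncated
```

The key point checkpoint newcomers often miss is that checkpoint() is lazy: nothing is written until an action runs on the marked RDD.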
I agreed
+1
--
From: Reynold Xin
Date: November 20, 2015 06:14:44
To: dev@spark.apache.org; Sean Owen; Thomas Graves
Subject: Dropping support for earlier Hadoop versions in Spark 2.0?
I proposed dropping support for Hadoop 1.x in the Spark 2.0 email,
How do I get a NEW RDD that has exactly the number of elements I specify?
sample()? It has no count parameter. takeSample()? It returns a local
collection rather than an RDD. Help, please.
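One common answer (a sketch against the Spark 1.x RDD API; the data and sample size are illustrative) is to takeSample to the driver and re-parallelize the result. This gives an exact count, but it materializes the sample on the driver, so it is only suitable for small samples:

```
// takeSample returns a local Array[T] of exactly `num` elements
// (when the RDD has at least that many); parallelize to get an RDD back.
val rdd = sc.parallelize(1 to 1000)
val sampled: Array[Int] = rdd.takeSample(withReplacement = false, num = 10, seed = 42L)
val sampledRdd = sc.parallelize(sampled)  // an RDD with exactly 10 elements
```

sample(withReplacement, fraction, seed), by contrast, only guarantees an expected fraction, not an exact count.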
Hi everyone,
I have a requirement to split one RDD into several smaller ones according to
the value of its key, e.g.: records whose key is X go into RDD1, records
whose key is Y go into RDD2, and so on.
I know it has a routine ca
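The routine usually suggested for this is filter, one pass per key. A sketch (the keys "X"/"Y" and the pair data are illustrative; caching matters because the parent RDD is traversed once per sub-RDD):

```
val pairs = sc.parallelize(Seq(("X", 1), ("Y", 2), ("X", 3))).cache()
val rdd1 = pairs.filter { case (k, _) => k == "X" }  // records with key X
val rdd2 = pairs.filter { case (k, _) => k == "Y" }  // records with key Y
```

For many distinct keys this does not scale well; an alternative is partitionBy with a custom Partitioner and writing partitions out separately, rather than materializing one RDD per key.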