this and can tolerate some approximation, I
think you want to do some kind of locality-sensitive hashing to bucket
the vectors and then evaluate similarity against only the other items in
the bucket.
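The bucketing idea above can be sketched with random-hyperplane LSH (a minimal, self-contained illustration on plain Scala collections; the object and method names are my own, not from this thread). Vectors whose dot products with k random hyperplanes have the same signs get the same k-bit key, so similar vectors tend to share a bucket and pairwise similarity is only evaluated within buckets:

```scala
import scala.util.Random

// Sketch of random-hyperplane LSH bucketing (illustrative names).
object LshBucketing {
  // One bit per hyperplane: the sign of the dot product.
  def hashKey(v: Array[Double], planes: Array[Array[Double]]): String =
    planes.map { p =>
      val dot = v.zip(p).map { case (a, b) => a * b }.sum
      if (dot >= 0) '1' else '0'
    }.mkString

  // Group item ids by their k-bit bucket key.
  def buckets(items: Map[String, Array[Double]],
              numPlanes: Int,
              dim: Int,
              seed: Long = 42L): Map[String, Seq[String]] = {
    val rnd = new Random(seed)
    // Each hyperplane is a random Gaussian vector.
    val planes = Array.fill(numPlanes, dim)(rnd.nextGaussian())
    items.toSeq
      .map { case (id, v) => (hashKey(v, planes), id) }
      .groupBy(_._1)
      .map { case (key, pairs) => key -> pairs.map(_._2) }
  }
}
```

Collinear vectors (cosine similarity 1) always land in the same bucket, and more hyperplanes make buckets finer, trading recall for fewer pairwise comparisons.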
On Fri, Apr 25, 2014 at 5:55 AM, Qin Wei [hidden email] wrote:
Hi All,
I have a problem with an item-based collaborative filtering recommendation
algorithm in Spark.
The basic flow is as below:
RDD1 ==> (Item1, (User1, Score1))
         (Item2, (User2, Score2))
         ...
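Starting from records shaped like RDD1 above, the item-similarity step can be sketched on plain Scala collections (an illustrative reconstruction, not the poster's actual code; the RDD version would group by user with groupByKey and join instead). Cosine similarity is one common choice for item-based CF:

```scala
// Sketch of item-item cosine similarity from (item, (user, score)) records.
// Object and method names are illustrative assumptions.
object ItemSimilarity {
  type Rating = (String, (String, Double)) // (item, (user, score)), as in RDD1

  def cosineSimilarities(ratings: Seq[Rating]): Map[(String, String), Double] = {
    // Accumulate dot products over item pairs co-rated by the same user.
    val dots = scala.collection.mutable.Map
      .empty[(String, String), Double].withDefaultValue(0.0)
    for {
      (_, rs) <- ratings.groupBy { case (_, (user, _)) => user }
      (i1, (_, s1)) <- rs
      (i2, (_, s2)) <- rs
      if i1 < i2 // count each unordered pair once
    } dots((i1, i2)) += s1 * s2

    // Per-item vector norms over all of that item's scores.
    val norms = ratings.groupBy(_._1).map { case (item, rs) =>
      item -> math.sqrt(rs.map { case (_, (_, s)) => s * s }.sum)
    }
    dots.map { case ((i1, i2), d) => (i1, i2) -> d / (norms(i1) * norms(i2)) }.toMap
  }
}
```

Computing this across all item pairs is what blows up on large data, which is where the bucketing advice in the reply comes in: restrict the pair generation to items within the same LSH bucket.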
Thanks for your help!
qinwei
From: Andre Bois-Crettez [via Apache Spark User List]
Date: 2014-04-16 17:50
To: Qin Wei
Subject: Re: Spark program throws OutOfMemoryError
Seems you do not have enough memory on the Spark driver. Hints below:
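One concrete shape such a hint could take (an illustrative sketch with assumed values, not the truncated hints from the original reply): the driver heap is controlled by `spark.driver.memory`, which must be in effect before the driver JVM starts, so in client mode it is passed on the command line (e.g. `spark-submit --driver-memory 4g`) rather than set in `SparkConf` at runtime. Executor heap, by contrast, can be set in the conf:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Illustrative configuration only; master URL, app name, and sizes
// are taken from or assumed for this thread, not prescribed values.
// Note: spark.driver.memory takes effect at driver JVM launch, so in
// client mode pass it via `spark-submit --driver-memory 4g ...` instead
// of setting it here after the JVM is already running.
val conf = new SparkConf()
  .setMaster("spark://192.168.2.184:7077")
  .setAppName("Sim Calcu Total")
  .set("spark.executor.memory", "4g") // per-executor heap, illustrative value
val sc = new SparkContext(conf)
```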
On 2014-04-15 12:10, Qin Wei wrote:
Hi, all
My Spark program always gives me the error java.lang.OutOfMemoryError: Java
heap space in my standalone cluster. Here is my code:
object SimCalcuTotal {
def main(args: Array[String]) {
val sc = new SparkContext("spark://192.168.2.184:7077", "Sim Calcu Total",