(following up a rather old thread:)
Hi Christopher,
I understand how you might use nearest neighbors for item-item
recommendations, but how do you use it for top N items per user?
Thanks!
Apu
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Large-scale
Yes, that's what prediction should be doing: taking a dot product or applying
the sigmoid function for each (user, item) pair. For 1 million users and 10K
items, that is 10 billion pairs.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Large-scale-ranked-recommendation-tp10098p10107.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
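
For concreteness, a minimal PySpark sketch of that brute-force scoring,
assuming model is an mllib MatrixFactorizationModel and sc is the
SparkContext (N and all names are illustrative):

    import numpy as np

    N = 10  # top-N items per user

    # The ~10K item factors are small enough to collect and broadcast.
    item_ids, item_vecs = zip(*model.productFeatures().collect())
    b_items = sc.broadcast((np.array(item_ids), np.array(item_vecs)))

    def top_n(users):
        ids, mat = b_items.value               # (num_items,), (num_items, rank)
        for user, u_vec in users:
            scores = mat.dot(np.array(u_vec))          # one dot per item
            best = np.argpartition(-scores, N)[:N]     # unsorted top N
            best = best[np.argsort(-scores[best])]     # sort only those N
            yield user, list(zip(ids[best], scores[best]))

    recommendations = model.userFeatures().mapPartitions(top_n)

This still does all 10 billion dot products, but only the 1M x N winners
ever leave the executors, instead of a 10-billion-row RDD of predictions.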
It would be great to have built-in GPU support in Spark, to leverage the
GPU capability of the nodes for performing these flops faster.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Large-scale-ranked-recommendation-tp10098p10183.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
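
Short of GPU kernels, the same flops can already be spent more efficiently
by batching each partition's users into one matrix-matrix multiply, which
numpy hands to whatever BLAS it is linked against. A sketch under the same
illustrative names as the earlier snippet (chunk size is a guess to tune):

    import numpy as np

    CHUNK = 1000   # users per multiply; 1000 x 10K scores is ~80 MB

    item_ids, item_vecs = zip(*model.productFeatures().collect())
    b_items = sc.broadcast((np.array(item_ids), np.array(item_vecs)))

    def top_n_blocked(users, n=10):
        ids, mat = b_items.value
        users = list(users)
        for start in range(0, len(users), CHUNK):
            block = users[start:start + CHUNK]
            u_mat = np.array([vec for _, vec in block])  # (CHUNK, rank)
            scores = u_mat.dot(mat.T)          # one GEMM instead of many dots
            for row, (user, _) in zip(scores, block):
                best = np.argpartition(-row, n)[:n]
                best = best[np.argsort(-row[best])]
                yield user, list(zip(ids[best], row[best]))

    recommendations = model.userFeatures().mapPartitions(top_n_blocked)

A GPU BLAS could slot in behind the same GEMM call, which is roughly what
built-in GPU support would buy here.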
if that doesn't work, I will look into Annoy. Thanks.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Large-scale-ranked-recommendation-tp10098p10212.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
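
A minimal sketch of that approximate route with Annoy
(https://github.com/spotify/annoy): index the 10K item factors once, ship
the index file to the executors, and query it with each user factor instead
of scoring every item. The 'dot' metric needs a reasonably recent Annoy;
the rank and all names are illustrative:

    from annoy import AnnoyIndex
    from pyspark import SparkFiles

    rank = 50                                   # factor dimension from ALS
    index = AnnoyIndex(rank, 'dot')             # rank by dot-product similarity
    for item_id, vec in model.productFeatures().collect():
        index.add_item(item_id, list(vec))
    index.build(50)                             # more trees: better recall
    index.save('/tmp/items.ann')
    sc.addFile('/tmp/items.ann')                # ship the index to executors

    def top_n_approx(users, n=10):
        idx = AnnoyIndex(rank, 'dot')
        idx.load(SparkFiles.get('items.ann'))   # mmap, cheap per partition
        for user, u_vec in users:
            yield user, idx.get_nns_by_vector(list(u_vec), n)

    recommendations = model.userFeatures().mapPartitions(top_n_approx)

This trades a little accuracy for sublinear query time, which is the usual
answer once brute force stops fitting in the time budget.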
partitions before doing the above steps, but it was of no help.
I am using about 100 executors, each with 2 cores and 2 GB of RAM.
Are there any suggestions to make these predictions fast?
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Large-scale-ranked
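
If the model in question is an ALS MatrixFactorizationModel and upgrading
is an option: Spark 1.4 added a bulk API that does this blocked top-N
scoring internally, so none of the hand-rolled code above is needed:

    # Spark 1.4+; returns an RDD of (user, list of Rating) pairs
    top10 = model.recommendProductsForUsers(10)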
We are using the RegressionModels that come with the *mllib* package in Spark.
--
View this message in context:
http://apache-spark-user-list.1001560.n3.nabble.com/Large-scale-ranked-recommendation-tp10098p10103.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
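
The usual way to make mllib regression models fast at this scale is to skip
per-record predict() calls entirely: pull out the weights, broadcast them,
and score whole partitions with numpy. A sketch assuming a
LogisticRegressionModel and an RDD pair_features of (user, item,
feature_vector) triples; all names are illustrative:

    import numpy as np

    b_model = sc.broadcast((model.weights.toArray(), model.intercept))

    def score_partition(triples):
        w, b = b_model.value
        for user, item, features in triples:
            margin = w.dot(np.array(features)) + b
            yield user, item, 1.0 / (1.0 + np.exp(-margin))  # sigmoid

    scored = pair_features.mapPartitions(score_partition)

Batching each partition's feature vectors into one matrix, as in the GEMM
sketch above, speeds this up further.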