fitting, while a big lambda like 300 removes overfitting, but the number of
different items in the top 1 and top 5 of the preference lists is very small
(not personalized).
On Thu, Jun 19, 2014 at 3:03 PM, redocpot julien19890...@gmail.com wrote:
We did some sanity checks. For example, each user has his own item list sorted
by preference, and we just pick the top 10 items for each user. As a result,
we found that there were only 169 different items among
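(A minimal sketch of that sanity check, assuming topK already holds each
user's top-10 item ids; the names are illustrative, not our actual code:)

  import org.apache.spark.rdd.RDD

  // topK: userId -> item ids sorted by predicted preference (10 per user).
  // Returns how many distinct items show up across all users' top-10 lists.
  def distinctItemsInTopK(topK: RDD[(Int, Array[Int])]): Long =
    topK.flatMap { case (_, items) => items }.distinct().count()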
On Thu, Jun 19, 2014 at 3:44 PM, redocpot julien19890...@gmail.com wrote:
As the paper says, low ratings get a low confidence weight, so if I understand
correctly, these dominant one-timers will be *less likely* to be recommended
compared to other items whose nbPurchase is bigger.
implementation for more
details.
Hao
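(For reference, the confidence weight in the Hu/Koren/Volinsky paper is
c_ui = 1 + alpha * r_ui, so the weight grows linearly with the purchase count;
a tiny illustration, using the paper's example value alpha = 40:)

  // c_ui = 1 + alpha * r_ui: a one-time purchase carries far less weight
  // in the least-squares fit than an item bought many times.
  val alpha = 40.0
  def confidence(nbPurchase: Double): Double = 1.0 + alpha * nbPurchase
  confidence(1.0)    // 41.0  (dominant one-timers)
  confidence(20.0)   // 801.0 (frequently purchased item)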
On Thu, Jun 5, 2014 at 10:38 PM, redocpot julien19890...@gmail.com wrote:
can be simplified by taking advantage of its algebraic structure, so the
negative observations are not needed explicitly. This is what I thought the
first time I read the paper.
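(For anyone following along, a minimal sketch of the per-user solve that
exploits that structure, assuming Breeze for the linear algebra; the names and
the exact regularization scaling are illustrative, not MLlib's actual code:)

  import breeze.linalg.{DenseMatrix, DenseVector, inv}

  // x_u = (Y^T C_u Y + lambda * I)^-1 * Y^T C_u p_u,
  // with Y^T C_u Y = Y^T Y + Y^T (C_u - I) Y: the precomputed Gram matrix YtY
  // already covers every item, and only the user's observed items contribute
  // to the correction, so the implicit negatives never have to be enumerated.
  def solveUser(Y: DenseMatrix[Double],        // item factors, numItems x rank
                YtY: DenseMatrix[Double],      // precomputed Y.t * Y
                observed: Seq[(Int, Double)],  // (itemIndex, r_ui) for this user
                alpha: Double,
                lambda: Double): DenseVector[Double] = {
    val rank = Y.cols
    val A = YtY + DenseMatrix.eye[Double](rank) * lambda
    val b = DenseVector.zeros[Double](rank)
    for ((i, r) <- observed) {
      val yi = Y(i, ::).t                      // factor vector of item i
      val c = 1.0 + alpha * r                  // confidence weight
      A += (yi * yi.t) * (c - 1.0)
      b += yi * c                              // p_ui = 1 for observed items
    }
    inv(A) * b
  }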
Correct, a big part of the reason it is efficient is