The implicit rankings are the output of TF-IDF, i.e.:

each_ranking = frequency of an item * log(total number of customers / number
of customers buying the item)
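For concreteness, the formula can be sketched in Python like this. The basket data and the function name are hypothetical, just to illustrate the computation; the real pipeline may differ:

```python
import math
from collections import Counter

def implicit_rankings(baskets):
    """Compute TF-IDF-style implicit rankings.

    baskets: dict mapping customer -> list of items purchased (repeats allowed).
    Returns a dict mapping (customer, item) -> score, where
    score = frequency of item for that customer
            * log(total customers / customers who bought the item).
    """
    n = len(baskets)
    # Number of distinct customers who bought each item (document frequency).
    df = Counter()
    for items in baskets.values():
        for item in set(items):
            df[item] += 1
    scores = {}
    for cust, items in baskets.items():
        freq = Counter(items)  # per-customer item frequency (term frequency)
        for item, f in freq.items():
            scores[(cust, item)] = f * math.log(n / df[item])
    return scores
```

Note that items bought by every customer get a score of log(1) = 0, which is what makes the ranking "implicit": it reflects how informative a purchase is, not an explicit rating on a 1-5 scale.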

On Sep 14, 2016 17:14, "Sean Owen" <[email protected]> wrote:

> What are implicit rankings here?
> RMSE would not be an appropriate measure for comparing rankings. There are
> ranking metrics like mean average precision that would be appropriate
> instead.
>
> On Wed, Sep 14, 2016 at 9:11 PM, Pasquinell Urbani <
> [email protected]> wrote:
>
>> It was a typo; both are RMSE.
>>
>> The frequency distribution of the rankings is the following:
>>
>> [image: Inline image 2]
>>
>> As you can see, there is a heavy tail, but the majority of the observations
>> lie near ranking 5.
>>
>> I'm working with implicit rankings (generated by TF-IDF); can this affect
>> the error? (I'm currently using trainImplicit in ALS, Spark 1.6.2.)
>>
>> Thank you.
>>
>>
>>
>> 2016-09-14 16:49 GMT-03:00 Sean Owen <[email protected]>:
>>
>>> There is no way to answer this without knowing what your inputs are
>>> like. If they're on the scale of thousands, that's small (good). If
>>> they're on the scale of 1-5, that's extremely poor.
>>>
>>> What's RMS vs RMSE?
>>>
>>> On Wed, Sep 14, 2016 at 8:33 PM, Pasquinell Urbani
>>> <[email protected]> wrote:
>>> > Hi Community
>>> >
>>> > I'm performing ALS for retail product recommendation. Right now I'm
>>> > reaching rms_test = 2.3 and rmse_test = 32.5. Is this too much in your
>>> > experience? Is the transformation of the ranking values important for
>>> > getting good errors?
>>> >
>>> > Thank you all.
>>> >
>>> > Pasquinell Urbani
>>>
>>
>>
>
