srowen commented on a change in pull request #23589: [SPARK-26351][mllib] Update doc and minor correction in the mllib evaluation metrics
URL: https://github.com/apache/spark/pull/23589#discussion_r249256437
 
 

 ##########
 File path: docs/mllib-evaluation-metrics.md
 ##########
 @@ -439,21 +439,21 @@ $$rel_D(r) = \begin{cases}1 & \text{if $r \in D$}, \\ 0 & \text{otherwise}.\end{
         Precision at k
       </td>
       <td>
-        $p(k)=\frac{1}{M} \sum_{i=0}^{M-1} {\frac{1}{k} \sum_{j=0}^{\text{min}(\left|D\right|, k) - 1} rel_{D_i}(R_i(j))}$
+        $p(k)=\frac{1}{M} \sum_{i=0}^{M-1} {\frac{1}{k} \sum_{j=0}^{\text{min}(\left|R_i\right|, k) - 1} rel_{D_i}(R_i(j))}$
 
 Review comment:
   Maybe; it could be the same for all users, or not. The documentation above this suggests there are equal numbers of recommended and relevant docs for each user (Q and N), but at a minimum, it will almost never be true that |D_i| is the same for all users. Q could well be a constant.
   
   But the implementation doesn't assume that, and it's not necessary to, so I might even just remove the references to Q and N, or label them "Q_i" and "N_i" if you really want to be complete.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
