ahmed-mahran commented on code in PR #38996:
URL: https://github.com/apache/spark/pull/38996#discussion_r1045077710


##########
docs/mllib-isotonic-regression.md:
##########
@@ -43,7 +43,17 @@ best fitting the original data points.
 which uses an approach to
 [parallelizing isotonic regression](https://doi.org/10.1007/978-3-642-99789-1_10).
 The training input is an RDD of tuples of three double values that represent
-label, feature and weight in this order. Additionally, IsotonicRegression algorithm has one
+label, feature and weight in this order. In case there are multiple tuples with
+the same feature then these tuples are aggregated into a single tuple as follows:
+
+* Aggregated label is the weighted average of all labels.
+* Aggregated feature is the weighted average of all equal features. It is possible

Review Comment:
   Equality is not exact; it is defined by
   [org.apache.commons.math3.util.Precision.equals(double x, double y)](https://commons.apache.org/proper/commons-math/javadocs/api-3.6.1/org/apache/commons/math3/util/Precision.html#equals(double,%20double)),
   which returns true whenever x and y are exactly equal or are adjacent doubles
   (i.e. no other representable double lies between them). So features have to be
   pooled too, not just labels. The weighted average here just guarantees fairness;
   scikit-learn instead keeps the first encountered (i.e. minimum) feature.
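   
   For illustration, a minimal Python sketch of that 1-ulp equality check (the
   helper name `ulp_equals` is made up here; commons-math's
   `Precision.equals(x, y)` defaults to treating values at most 1 ulp apart as
   equal):
   
   ```python
   import math

   def ulp_equals(x, y):
       # True when x == y or when y is the adjacent representable double,
       # mirroring commons-math Precision.equals(x, y) with maxUlps = 1
       return x == y or math.nextafter(x, y) == y

   a = 1.0
   b = math.nextafter(1.0, 2.0)  # the next double above 1.0 (1 ulp away)
   c = math.nextafter(b, 2.0)    # two ulps above 1.0

   # a and b are "equal", a and c are not
   ```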
   
   Definition of equality in scikit-learn is different too:
   ```python
   import numpy as np

   eps = np.finfo(np.float64).resolution  # 1e-15
   if x - current_x >= eps:
       pass  # different: x starts a new block
   else:
       pass  # equal: x is pooled into the current block
   ```
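   
   To see how the two definitions diverge, a small sketch (values chosen for
   illustration): near 1000 one ulp is about 1.1e-13, so two adjacent doubles
   are equal under `Precision.equals` but different under the fixed 1e-15
   threshold:
   
   ```python
   import math
   import numpy as np

   eps = np.finfo(np.float64).resolution  # 1e-15, the scikit-learn threshold

   x = 1000.0
   y = math.nextafter(x, math.inf)  # adjacent double, ~1.1e-13 above x

   # commons-math Precision.equals(x, y): adjacent doubles are equal
   one_ulp_equal = (x == y) or (math.nextafter(x, y) == y)

   # scikit-learn: a gap of at least eps starts a new block
   sklearn_different = (y - x) >= eps
   ```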
   
   We can use the minimum for consistency with scikit-learn, to avoid
   accumulating floating-point errors, and for better performance too. Do you
   agree?
   
   What do you suggest regarding the definition of equality?
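   
   A toy sketch of the two pooling choices under discussion (the numbers are
   made up for illustration; the middle feature is one ulp above 2.0, so all
   three count as "equal"):
   
   ```python
   import math
   import numpy as np

   # three (label, feature, weight) tuples whose features are equal within tolerance
   features = np.array([2.0, math.nextafter(2.0, 3.0), 2.0])
   labels = np.array([1.0, 3.0, 5.0])
   weights = np.array([1.0, 1.0, 2.0])

   # pooled label: weighted average, as the doc change describes
   pooled_label = np.average(labels, weights=weights)  # (1 + 3 + 2*5) / 4 = 3.5

   # two candidate definitions for the pooled feature:
   avg_feature = np.average(features, weights=weights)  # weighted average (this PR)
   min_feature = features.min()  # first encountered / minimum (scikit-learn)
   ```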



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
