[ 
https://issues.apache.org/jira/browse/SPARK-11605?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15030494#comment-15030494
 ] 

yuhao yang edited comment on SPARK-11605 at 11/28/15 1:16 PM:
--------------------------------------------------------------

The return type of org.apache.spark.ml.clustering.LDA.getOldDataset becomes 
scala.Tuple2<java.lang.Object, org.apache.spark.mllib.linalg.Vector> in the Java 
API, while it should be (Long, Vector) — although this is a private[clustering] 
method.

Some other private[ml] functions have a similar issue. Let me know if we should 
list them.
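A minimal Java sketch of why this matters for callers (the Tuple2 and method 
below are hypothetical stand-ins, not the real Spark/Scala classes): Scala erases 
the primitive Long in a generic position to Object, so the Java side only sees 
Object and must cast to recover the id.

```java
public class ErasureDemo {
    // Stand-in for scala.Tuple2: after erasure of the primitive Long,
    // the first type parameter surfaces to Java as Object.
    static final class Tuple2<A, B> {
        final A _1; final B _2;
        Tuple2(A a, B b) { _1 = a; _2 = b; }
    }

    // Roughly what Java sees for a Scala method declared as returning
    // (Long, Vector); String stands in here for mllib's Vector.
    static Tuple2<Object, String> getOldDatasetRow() {
        return new Tuple2<>(42L, "[1.0, 2.0]");
    }

    public static void main(String[] args) {
        Tuple2<Object, String> row = getOldDatasetRow();
        // The document id comes back as Object; the caller must cast it.
        long docId = (Long) row._1;
        System.out.println(docId + " -> " + row._2);
    }
}
```

Since the method is private[clustering] this does not leak into the public Java 
API, but the same erasure pattern would if a public method returned such a tuple.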



> ML 1.6 QA: API: Java compatibility, docs
> ----------------------------------------
>
>                 Key: SPARK-11605
>                 URL: https://issues.apache.org/jira/browse/SPARK-11605
>             Project: Spark
>          Issue Type: Sub-task
>          Components: Documentation, Java API, ML, MLlib
>            Reporter: Joseph K. Bradley
>            Assignee: yuhao yang
>
> Check Java compatibility for MLlib for this release.
> Checking compatibility means:
> * comparing with the Scala doc
> * verifying that Java docs are not messed up by Scala type incompatibilities. 
>  Some items to look out for are:
> ** Check for generic "Object" types where Java cannot understand complex 
> Scala types.
> *** *Note*: The Java docs do not always match the bytecode. If you find a 
> problem, please verify it using {{javap}}.
> ** Check Scala objects (especially with nesting!) carefully.
> ** Check for uses of Scala and Java enumerations, which can show up oddly in 
> the other language's doc.
> * If needed for complex issues, create small Java unit tests which execute 
> each method.  (The correctness can be checked in Scala.)
> If you find issues, please comment here, or for larger items, create separate 
> JIRAs and link here.
> Note that we should not break APIs from previous releases.  So if you find a 
> problem, check if it was introduced in this Spark release (in which case we 
> can fix it) or in a previous one (in which case we can create a java-friendly 
> version of the API).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
