Hi Tarek,

Thanks for your interest & for checking the guidelines first! On your 2 points:
Algorithm: PCA is of course a critical algorithm. The main question is how your algorithm/implementation differs from the current PCA in MLlib (a minimal sketch of the existing API follows the quoted message below, for reference). If it's different and potentially better, I'd recommend opening a JIRA to explain and discuss it.

Java/Scala: We really do require that algorithms be in Scala, for the sake of maintainability. The conversion should be doable if you're willing, since Scala is a pretty friendly language. If you create the JIRA, you could also ask there whether someone can collaborate with you on converting the code to Scala.

Thanks!
Joseph

On Mon, May 18, 2015 at 3:13 AM, Tarek Elgamal <tarek.elga...@gmail.com> wrote:
> Hi,
>
> I would like to contribute an algorithm to the MLlib project. I have
> implemented a scalable PCA algorithm on Spark. It is scalable for both tall
> and fat matrices, and the paper describing it has been accepted for publication
> at the SIGMOD 2015 conference. I looked at the guidelines at the following link:
>
> https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark#ContributingtoSpark-MLlib-specificContributionGuidelines
>
> I believe that most of the guidelines apply in my case; however, the
> code is written in Java, and it was not clear from the guidelines whether
> the MLlib project accepts Java code or not.
> My algorithm can be found in this repository:
> https://github.com/Qatar-Computing-Research-Institute/sPCA
>
> Any help on how to make it suitable for the MLlib project would be greatly
> appreciated.
>
> Best Regards,
> Tarek Elgamal
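For reference, a minimal sketch of how PCA is currently exposed in MLlib through the RowMatrix API, which also illustrates the kind of Scala code the project expects. This is only a sketch of the stock API, not the sPCA approach; the object name, local-mode SparkContext, and toy data are illustrative assumptions, not anything defined in MLlib.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.linalg.distributed.RowMatrix

object CurrentPcaSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("pca-sketch").setMaster("local[*]"))

    // Small illustrative dataset: each row is one observation.
    val rows = sc.parallelize(Seq(
      Vectors.dense(1.0, 2.0, 3.0),
      Vectors.dense(4.0, 5.0, 6.0),
      Vectors.dense(7.0, 8.0, 10.0)
    ))

    val mat = new RowMatrix(rows)

    // Existing MLlib entry point: returns the top-k principal components
    // as a local matrix, computed from a d x d Gramian on the driver.
    val pc = mat.computePrincipalComponents(2)

    // Project the rows into the principal-component space.
    val projected = mat.multiply(pc)
    projected.rows.collect().foreach(println)

    sc.stop()
  }
}

Because the current implementation materializes a d x d matrix on the driver, it is geared toward tall matrices with a modest number of columns; the JIRA would be the natural place to spell out how the sPCA approach differs, e.g. for the fat-matrix case.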