Hi,

I've looked through the methods in the benchmark system and in mlpack;
here is a list of some methods that are not yet benchmarked:

1. AdaBoost
2. ann
3. DBSCAN
4. decision tree (some PRs have been made)
5. GMM
6. Hoeffding tree
7. mean shift clustering
8. SVD
9. softmax regression

As mentioned in the GSoC idea list, one choice is to benchmark some of
these methods against other implementations. I don't think this would be
hard for me; the main work is reading the APIs of mlpack and the other
libraries.
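To make the comparison idea concrete, a minimal timing harness could look like the sketch below. This is only an illustration with placeholder implementations: `impl_a` and `impl_b` are hypothetical stand-ins, and in a real benchmark they would invoke mlpack and another library (e.g. scikit-learn) on identical datasets.

```python
# Hedged sketch: timing two implementations of the same task.
# impl_a / impl_b are placeholders, NOT real mlpack or library calls.
import random
import timeit

random.seed(42)
data = [random.random() for _ in range(10_000)]

def impl_a(xs):
    # placeholder "implementation A": naive loop-based mean
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)

def impl_b(xs):
    # placeholder "implementation B": built-in sum
    return sum(xs) / len(xs)

for name, fn in [("impl_a", impl_a), ("impl_b", impl_b)]:
    t = timeit.timeit(lambda: fn(data), number=100)
    print(f"{name}: {t:.4f}s for 100 runs")
```

The important part is that both implementations run on the same input and the measured region covers only the method itself, which is what the benchmark scripts need to guarantee as well.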

Another idea is to speed up some methods in mlpack. This one is much
more difficult and time-consuming. Even though this idea appeals to me,
I don't have much confidence about the target of "the fastest of all the
implementations". I think how much we can improve the speed, and how to
do it, will only become clear after I have done enough research and
experimentation on one method.

I plan to take the benchmarking script as the base of my proposal, and
if some method is slower, try to do some analysis. If I find something,
I may start the speed-up work on that method.
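The analysis step could start with a simple profile of the slow implementation to find hotspots. Below is only a sketch using Python's built-in `cProfile`; `slow_method` is a deliberately quadratic placeholder, not actual mlpack code.

```python
# Hedged sketch: profiling a placeholder "slow method" to find hotspots.
# `slow_method` is hypothetical; the real target would be the slow
# benchmarked implementation.
import cProfile
import io
import pstats

def slow_method(n):
    # deliberately quadratic placeholder workload
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_method(300)
profiler.disable()

stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream)
stats.sort_stats("cumulative").print_stats(5)  # show the top 5 entries
print(stream.getvalue())
```

For compiled mlpack code the equivalent would be a native profiler (e.g. perf or gprof), but the workflow is the same: measure first, then decide where a speed-up is worth attempting.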

There is no executable for ann now, so should I write an executable for
this task and benchmark it? Or is it time to provide an executable for
ann? (It seems ann is still in development; if a wrapper for the whole
ann code, or for a specific type of ann, is needed, I'm glad to do this
work.)

Sincerely,
Thyrix Yang
_______________________________________________
mlpack mailing list
[email protected]
http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
