Hello Chenzhe,

> I guess I need to come up with a complete plan and write it into the proposal?


A good plan forms the basis for a good and comprehensive proposal. There are a
lot of useful tips out there; for example,
https://developers.google.com/open-source/gsoc/resources/manual may be helpful.

Thanks,
Marcus

> On 16 Mar 2017, at 19:37, Chenzhe Diao <williamd...@gmail.com> wrote:
> 
> Thanks, Ryan! I will read the code in more detail and think about it. I guess
> I need to come up with a complete plan and write it into the proposal?
> 
> Great to know you, Shangtong and Bang. Since you both have experience with
> GSoC, I can learn a lot from you.
> 
> Best,
> Chenzhe
> 
> On Thu, Mar 16, 2017 at 9:02 AM, Shangtong Zhang
> <zhangshangtong....@gmail.com> wrote:
> Haha, maybe we don't know each other. We are from three different
> departments: CS, ECE, and Math. But I think it's a good chance to get to
> know each other.
> 
> Shangtong Zhang,
> First year graduate student,
> Department of Computing Science,
> University of Alberta
> Github: https://github.com/ShangtongZhang | Stackoverflow:
> http://stackoverflow.com/users/3650053/slardar-zhang
>> On Mar 16, 2017, at 08:11, Ryan Curtin <r...@ratml.org> wrote:
>> 
>> On Wed, Mar 15, 2017 at 05:33:15AM -0600, Chenzhe Diao wrote:
>>> Hello everyone,
>>> 
>>> My name is Chenzhe. I am a 4th-year Ph.D. student in Applied Mathematics
>>> at the University of Alberta in Canada. Part of my research is about image
>>> recovery using over-complete systems (wavelet frames), which involves some
>>> machine learning techniques, and uses sparse optimization techniques as one
>>> of the key steps. So I am quite interested in the project about "Low
>>> rank/sparse optimization using Frank-Wolfe".
>>> 
>>> I checked the mailing list from last year. It seems that one student from
>>> GSoC16 was interested in a similar project. Is it still unfinished because
>>> of some particular difficulties? I took a brief look at the Martin Jaggi
>>> paper, and the algorithm itself does not seem complicated. So I guess most
>>> of the time on the project would go toward implementing the algorithm in
>>> the desired form and testing it extensively? What kinds of tests are we
>>> expecting?
>>> 
>>> Also, I checked src/mlpack/core/optimizers/ and saw the GradientDescent
>>> class implemented. I guess I need to write a new class with a similar
>>> structure?
>> 
>> Hi Chenzhe,
>> 
>> Do you know Shangtong Zhang?  He is a first-year MSc student who also
>> attends the University of Alberta.  Or Bang Liu?  He also is a PhD
>> student at UofA and was a part of mlpack GSoC last year.  Maybe you guys
>> all know each other?  It seems like it's a big university though (nearly
>> 40k students), so maybe the chances are small. :)
>> 
>> Nobody implemented the Frank-Wolfe optimizer from last year, so the
>> project (and related projects) are still open.  Anything you find in
>> src/mlpack/core/optimizers/ is what we have, although there are a few
>> open PRs related to this issue:
>> 
>> https://github.com/mlpack/mlpack/issues/893
>> 
>> But those are not F-W; they are basically other optimizers related to
>> SGD.
>> 
>> Essentially you are right: the idea of the project would be to provide
>> an implementation of the algorithm in Jaggi's paper.  In your case,
>> given your background and expertise, this will probably be a relatively
>> straightforward task.  Testing the algorithm has some difficulty but
>> honestly I suspect it can be tested mostly like the other optimizers:
>> come up with some easy and hard problems to optimize, and make sure that
>> the implemented F-W algorithm can successfully find the minimum.  You
>> can take a look at the existing tests for other optimizers in
>> src/mlpack/tests/ to get some kind of an idea for how to do that.
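>> 
>> To make that concrete, here is a minimal sketch of what the classic
>> Frank-Wolfe iteration could look like, shaped after the existing
>> optimizer classes.  The class name, the constructor parameters, and the
>> l1-ball constraint set are all illustrative assumptions, not existing
>> mlpack API:
>> 
>>   // Sketch only: mirrors the shape of the GradientDescent class in
>>   // src/mlpack/core/optimizers/gradient_descent/; all names here are
>>   // hypothetical.
>>   #include <armadillo>
>>   #include <cstddef>
>> 
>>   class FrankWolfe
>>   {
>>    public:
>>     FrankWolfe(const double radius = 1.0,
>>                const size_t maxIterations = 100000) :
>>         radius(radius), maxIterations(maxIterations) { }
>> 
>>     // FunctionType needs Evaluate() and Gradient(), the same policy
>>     // the other optimizers use.
>>     template<typename FunctionType>
>>     double Optimize(FunctionType& function, arma::mat& iterate)
>>     {
>>       arma::mat gradient(iterate.n_rows, iterate.n_cols);
>>       for (size_t k = 0; k < maxIterations; ++k)
>>       {
>>         function.Gradient(iterate, gradient);
>> 
>>         // Linear subproblem: s = arg min over the constraint set D of
>>         // <s, gradient>.  Over an l1 ball of the given radius, the
>>         // minimizer is a signed, scaled coordinate vector placed at
>>         // the largest-magnitude gradient entry.
>>         const arma::uword i =
>>             arma::index_max(arma::abs(arma::vectorise(gradient)));
>>         arma::mat s(iterate.n_rows, iterate.n_cols, arma::fill::zeros);
>>         s(i) = (gradient(i) > 0.0) ? -radius : radius;
>> 
>>         // Classic step size gamma = 2 / (k + 2); since gamma is 1 at
>>         // k = 0, the convex combination keeps the iterate feasible.
>>         const double gamma = 2.0 / (k + 2.0);
>>         iterate = (1.0 - gamma) * iterate + gamma * s;
>>       }
>>       return function.Evaluate(iterate);
>>     }
>> 
>>    private:
>>     double radius;
>>     size_t maxIterations;
>>   };
>> 
>> A test would then only need a small function with a known minimizer
>> inside the constraint set: run Optimize() from a few starting points
>> and check that the final iterate lands near the known answer, just like
>> the existing optimizer tests do.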
>> 
>> Building on top of that, there are many further places you could go with
>> the project:
>> 
>> * you could modify the various mlpack programs like
>>   mlpack_logistic_regression and mlpack_softmax_regression and so
>>   forth to expand the list of available optimizers
>> 
>> * you could benchmark the F-W optimizer against other optimizers on
>>   various problems and possibly (depending on the results) assemble
>>   something that could be published (a rough sketch of such a
>>   comparison follows this list)
>> 
>> * you could try implementing some new ideas based on the stock F-W
>>   optimizer and see if they give improvement
>> 
>> * you could implement an additional optimizer
>> 
>> * you could implement an algorithm that is meant to use the F-W
>>   optimizer, like maybe some of the F-W SVM work that Jaggi also did?
>>   That might be too much for a single summer though...
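>> 
>> As a very rough sketch of the benchmarking idea above, using the
>> hypothetical FrankWolfe class from the earlier sketch and a hand-rolled
>> gradient descent loop as the baseline (a real comparison would use the
>> optimizers in src/mlpack/core/optimizers/), the harness could be as
>> simple as timing both on the same function:
>> 
>>   // Rough sketch; everything here is illustrative, not existing
>>   // mlpack code.  "FrankWolfe" is the hypothetical class from the
>>   // earlier sketch.
>>   #include <armadillo>
>>   #include <cstddef>
>>   #include <iostream>
>> 
>>   // A toy objective following the Evaluate()/Gradient() policy:
>>   // f(x) = <x, x>, minimized at the origin.
>>   struct QuadraticFunction
>>   {
>>     double Evaluate(const arma::mat& x) { return arma::dot(x, x); }
>>     void Gradient(const arma::mat& x, arma::mat& g) { g = 2.0 * x; }
>>   };
>> 
>>   int main()
>>   {
>>     QuadraticFunction f;
>>     arma::wall_clock timer;
>> 
>>     arma::mat x1 = arma::randu<arma::mat>(100, 1);
>>     arma::mat x2 = x1;  // both optimizers start from the same point
>> 
>>     // Baseline: plain gradient descent with a fixed step size.
>>     timer.tic();
>>     arma::mat g(x1.n_rows, x1.n_cols);
>>     for (size_t k = 0; k < 10000; ++k)
>>     {
>>       f.Gradient(x1, g);
>>       x1 -= 0.01 * g;
>>     }
>>     std::cout << "GD:  " << f.Evaluate(x1) << " in " << timer.toc()
>>         << "s\n";
>> 
>>     // Frank-Wolfe over the unit l1 ball (contains the minimizer).
>>     timer.tic();
>>     FrankWolfe fw(1.0);
>>     std::cout << "F-W: " << fw.Optimize(f, x2) << " in " << timer.toc()
>>         << "s\n";
>>   }
>> 
>> A real benchmark would of course sweep problem sizes and use harder
>> objectives, but the harness itself does not need to be much more
>> complicated than this.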
>> 
>> In any case, the choice is up to you: the project idea is there as a
>> kind of boilerplate starting point for whatever ideas you would find
>> most interesting.
>> 
>> Thanks,
>> 
>> Ryan
>> 
>> -- 
>> Ryan Curtin    | "Avoid the planet Earth at all costs."
>> r...@ratml.org |   - The President

_______________________________________________
mlpack mailing list
mlpack@lists.mlpack.org
http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
