Hello Adeel,

> I have done some research on C++ lambda functions. Did you mean to use these
> instead of the standard accessors and mutators? From what I have found, lambda
> functions are used for writing an anonymous inline functor right into the spot
> where it is called, like in this example below (source):
> 
> std::for_each(v.begin(), v.end(), [](int) { /* do something here */ });
> 
> Although they can be used to modify the parameters (passed in a capture list)
> by using the mutable keyword, I don't know what advantage this would have
> over the standard accessors and mutators. If you had a different use in mind,
> please let me know.

I was thinking of using C++11 lambda functions to define the constraints
instead of a matrix representation; I had something like this in mind:

auto constraint = [](double x) { return x < 3; };
std::cout << "constraint: " << constraint(6) << std::endl;

I think it might be a good idea to work on a proof of concept before deciding
on the design. What do you think?

> I have read some sections from the Velocity Adaptation in Particle Swarm
> Optimization paper. The PSO variant presented there is somewhat similar to PSO
> with inertia weight in the Looking Inside Particle Swarm Optimization in
> Constrained Search Spaces paper. The algorithm presented in section 4 for PSO
> with Velocity Adaptation uses the velocity length l for scaling the particle
> velocity based on its current behavior. There are various initialization
> methods for setting the initial value of the velocity length, such as l = r or
> l = r / sqrt(n). If I opt to implement this PSO variant in my GSoC
> application, would I leave it to the user to specify the value of l, or set it
> by default following a heuristic, or maybe a combination of both?

I think it's fine to let the user select the value; however, we should note
some good initial values in the documentation and examples (see the rough
sketch below). Does this sound reasonable?
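
For illustration only (the class and parameter names below are made up and not
existing mlpack code), the parameter could simply be exposed through the
constructor, and the documentation could point at the paper's suggestions such
as l = r or l = r / sqrt(n):

// Purely illustrative sketch -- not actual mlpack code.
class VelocityAdaptationPSO
{
 public:
  // velocityLength is the initial velocity length l, chosen by the user; the
  // documentation and examples could suggest l = r or l = r / sqrt(n) (from
  // the paper) as good starting points.
  explicit VelocityAdaptationPSO(const double velocityLength) :
      velocityLength(velocityLength)
  { }

 private:
  // The initial velocity length l.
  double velocityLength;
};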

Thanks,
Marcus


> On 20. Jan 2018, at 11:19, Adeel Ahmad <adeelahma...@hotmail.com> wrote:
> 
> Hello Marcus,
> 
> I have done some research on C++ lambda functions. Did you mean to use these 
> instead of the standard accessors and mutators? From what I have found, lambda 
> functions are used for writing an anonymous inline functor right into the 
> spot where it is called, like in this example below (source 
> <https://stackoverflow.com/a/7627218>):
> 
> std::for_each(v.begin(), v.end(), [](int) { /* do something here */ });
> 
> Although they can be used to modify the parameters (passed in a capture list) 
> by using the mutable keyword, I don't know what advantage this would have 
> over the standard accessors and mutators. If you had a different use in mind, 
> please let me know.
> 
> Yes, a policy based design seems like a much better option for implementing 
> the optimizer. We could create a base class named PSO and use its methods in 
> another class, for instance, LBPSO; using the former class' object. This 
> would be more intuitive if other variants of PSO are to be implemented in the 
> future.
> 
> I have read some sections from the Velocity Adaptation in Particle Swarm 
> Optimization paper. The PSO variant presented there is somewhat similar to 
> PSO with inertia weight in Looking Inside Particle Swarm Optimization in 
> Constrained Search Spaces paper. The algorithm presented in section 4 for PSO 
> with Velocity Adaptation uses Velocity Length l for scaling the particle 
> velocity based on its current behavior. There are various initialization 
> methods for setting the initial value of velocity length, such as l = r, l = 
> r / sqrt(n). If I opt to implement this PSO variant in my GSoC application, 
> would I leave it to the user for specifying the value of l, or set it by 
> default following a heuristic, or maybe a combination of both?
> 
> Thank you,
> Adeel
> 
> From: Marcus Edel <marcus.e...@fu-berlin.de>
> Sent: Thursday, January 18, 2018 6:51 PM
> To: Adeel Ahmad
> Cc: mlpack@lists.mlpack.org
> Subject: Re: [mlpack] Query regarding constrained and unconstrained methods
>  
> Hello Adeel,
> 
>> I have read the research paper you linked. In the paper, two variants of PSO
>> are mentioned -- inertia weight and constriction factor based. It is stated
>> that the local-best particle swarm optimizer (LBPSO) with constriction k
>> produces the best results. I assume all variants must be implemented for
>> GSoC; however, in the paper a modified version of PSO is presented (MPSO),
>> which dynamically updates two hyper-parameters, k and c2 (acceleration
>> constant for social elements in the swarm). Should this be implemented as
>> well? I suppose this won't be time consuming if vanilla PSO is already in
>> place.
> 
> I'm not sure it would be reasonable to implement every variant mentioned in
> the paper over the summer; keep in mind that each method has to be tested
> (writing good tests is time-consuming). So my recommendation is to focus on a
> single variant; in your proposal you can point out that, if there is time
> left, you aim for another variant. But in the end it's up to you, choose the
> methods you think are interesting. Also, there is another paper that might be
> interesting as well: "Particle Swarm Optimization with Velocity Adaptation" by
> S. Helwig et al. (let me know if you can't access the paper).
> 
>> Regarding the design of the optimizer itself, it was pointed out earlier by
>> Ryan that the SDP (semidefinite program) optimizer supports constraints. In
>> there, the constraints are specified as Armadillo matrices, and set using
>> setters. I think the same methodology could be applied for PSO.
> 
> Right, as pointed out on the ideas page, a matrix representation is definitely
> one option; another would be to use C++11 lambda functions
> (https://en.wikipedia.org/wiki/C%2B%2B11#Lambda_functions_and_expressions),
> which I think would be easier to use as someone could naturally define the
> constraints. Let me know what you think; coming up with a good structure is
> part of the project.
> 
>> For specifying whether the PSO is local or global, a boolean could be used.
>> However, the constriction factor k should only be created in case of
>> constriction based PSO; I'm not sure what would be the best design for this.
> 
> 
> Another option would be to use a policy based design: provide a separate class
> for each method and reuse as much code as possible internally. We do something
> similar for Adam, RMSProp, etc.; each optimizer basically uses the SGD class,
> and all we do is provide a wrapper class to set optimizer-specific parameters.
> Let me know what you think.
> 
>> Would it be possible for us to discuss the optimizer architecture in more
>> detail on the mailing list?
> 
> Absolutely, we are here to help.
> 
> Thanks,
> Marcus
> 
> 
>> On 18. Jan 2018, at 08:54, Adeel Ahmad <adeelahma...@hotmail.com> wrote:
>> 
>> Hello Marcus,
>> 
>> I have read the research paper you linked. In the paper, two variants of PSO 
>> are mentioned -- inertia weight and constriction factor based. It is stated 
>> that the local-best particle swarm optimizer (LBPSO) with constriction k 
>> produces the best results. I assume all variants must be implemented for 
>> GSoC; however, in the paper a modified version of PSO is presented (MPSO), 
>> which dynamically updates two hyper-parameters, k and c2 (acceleration 
>> constant for social elements in the swarm). Should this be implemented as 
>> well? I suppose this won't be time consuming if vanilla PSO is already in 
>> place.
>> 
>> Regarding the design of the optimizer itself, it was pointed out earlier by 
>> Ryan that the SDP (semidefinite program) optimizer supports constraints. In 
>> there, the constraints are specified as Armadillo matrices, and set using 
>> setters. I think the same methodology could be applied for PSO. For 
>> specifying whether the PSO is local or global, a boolean could be used. 
>> However, the constriction factor k should only be created in case of 
>> constriction based PSO, I'm not sure what would be the best design for this.
>> 
>> Would it be possible for us to discuss the optimizer architecture in more 
>> detail on the mailing list?
>> 
>> Thank you,
>> Adeel
>> 
>> 
>> 
>> 
>> From: Marcus Edel <marcus.e...@fu-berlin.de>
>> Sent: Wednesday, January 17, 2018 5:39 PM
>> To: Adeel Ahmad
>> Cc: mlpack@lists.mlpack.org
>> Subject: Re: [mlpack] Query regarding constrained and unconstrained methods
>>  
>> Hello Adeel,
>> 
>> Sorry for the slow response on this one. There are various approaches to
>> solve constrained problems; one is the use of a penalty function. The
>> constrained problem is transformed into an unconstrained one by penalizing
>> the constraints, so that it can be solved using an unconstrained optimization
>> method. You might take a look at "Looking Inside Particle Swarm Optimization
>> in Constrained Search Spaces" by Jorge Isacc Flores-Mendoza and Efrén
>> Mezura-Montes; they describe various PSO methods to solve constrained
>> problems.
>> 
>>> I apologize if I misunderstood what constrained problems are, but can't we
>>> apply constraints to the methods already present in "src/mlpack/methods/*"
>>> directory? Or, are these unrelated? In the latter case, are there some
>>> specialized methods for constrained problems that need to be implemented for
>>> this project?
>> 
>> 
>> Currently, mlpack does not implement an optimizer that can handle constrained
>> problems. So for example, if you like to solve the constrained (cube, line)
>> Rosenbrock function:
>> 
>> f(x, y) = (1 - x)^2 + 100(y - x^2)^2
>> 
>> with constraints (x - 1)^3 - y + 1 < 0 and x + y - 2 < 0
>> 
>> Currently, there is no structure to represent the problem and there is no
>> optimizer that can solve the constrained problem. Coming up with a structure
>> is one part of the project; implementing an optimizer (PSO) that can handle
>> constrained problems is the other part. But as pointed out in the project
>> idea, it's recommended to start with a PSO implementation for unconstrained
>> problems and to extend the work later on.
>> 
>>> Regarding the test cases structuring, I've found that in some cases a
>>> test_function.cpp or <method_name>_test_function.cpp file is present in the
>>> main method directory, such as here
>>> (https://github.com/mlpack/mlpack/blob/master/src/mlpack/core/optimizers/gradient_descent/test_function.cpp).
>>> Later, an object of this class is created in the main tests directory
>>> ("src/mlpack/tests/*"), in this case, here
>>> (https://github.com/mlpack/mlpack/blob/master/src/mlpack/tests/gradient_descent_test.cpp).
>>> So, my question is this, what is the preferred structure for writing test
>>> cases? In this case, I think this could have been directly tested without
>>> the need of a separate GDTestFunction class, however, this might not have
>>> been a neat alternative.
>> 
>> There is an open PR which consolidates different problems into one folder
>> (https://github.com/mlpack/mlpack/pull/1151); the benefit of not
>> implementing the test function inside the test itself is that someone could
>> reuse the functionality for other methods/tests. One example is the
>> SGDTestFunction which is used to test Adam, SGD, RMSProp, etc.
>> 
>> I hope this is helpful, let us know if we should clarify anything.
>> 
>> Thanks,
>> Marcus
>> 
>> 
>>> On 16. Jan 2018, at 19:58, Adeel Ahmad <adeelahma...@hotmail.com> wrote:
>>> 
>>> Greetings,
>>> 
>>> I'm following a potential idea for GSoC 2018 titled "Particle swarm 
>>> optimization". I have read a few documents and familiarized myself with the 
>>> algorithm. It's listed in the idea description: "So this project is divided 
>>> into two parts: First implement one or two unconstrained methods and 
>>> afterwards takes a look at one --contained-- (constrained [?]) method". I 
>>> apologize if I misunderstood what constrained problems are, but can't we 
>>> apply constraints to the methods already present in "src/mlpack/methods/*" 
>>> directory? Or, are these unrelated? In the latter case, are there some 
>>> specialized methods for constrained problems that need to be implemented 
>>> for this project?
>>> 
>>> Regarding the test cases structuring, I've found that in some cases a
>>> test_function.cpp or <method_name>_test_function.cpp file is present in the
>>> main method directory, such as here
>>> (https://github.com/mlpack/mlpack/blob/master/src/mlpack/core/optimizers/gradient_descent/test_function.cpp).
>>> Later, an object of this class is created in the main tests directory
>>> ("src/mlpack/tests/*"), in this case, here
>>> (https://github.com/mlpack/mlpack/blob/master/src/mlpack/tests/gradient_descent_test.cpp).
>>> So, my question is this, what is the preferred structure for writing test
>>> cases? In this case, I think this could have been directly tested without
>>> the need of a separate GDTestFunction class, however, this might not have
>>> been a neat alternative.
>>> 
>>> Thank you,
>>> Adeel
>> 
> 
> 

_______________________________________________
mlpack mailing list
mlpack@lists.mlpack.org
http://knife.lugatgt.org/cgi-bin/mailman/listinfo/mlpack
