Hi Marcus,

I've been through the MOPSO-with-GD approach we discussed earlier, and I
certainly like the idea. My thoughts so far are:

(1) Add a basic PSO optimizer to the existing optimization API (might refer
to the gradient descent optimizer code for help/coding style).
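To make sure we're on the same page about (1): the core of the optimizer would be the canonical PSO velocity/position update. Here's a minimal self-contained sketch with my own names (`Particle`, `UpdateParticle`, the usual `w`/`c1`/`c2` coefficients) — not mlpack's actual API, just the rule I'd implement behind it:

```cpp
#include <vector>
#include <random>
#include <limits>
#include <cstddef>

// One particle: current state plus its personal best.
struct Particle
{
  std::vector<double> position, velocity, bestPosition;
  double bestValue = std::numeric_limits<double>::infinity();
};

// Canonical PSO update: inertia term plus cognitive (personal-best) and
// social (global-best) attraction, each scaled by a fresh uniform draw.
void UpdateParticle(Particle& p,
                    const std::vector<double>& globalBest,
                    double w, double c1, double c2,
                    std::mt19937& rng)
{
  std::uniform_real_distribution<double> u(0.0, 1.0);
  for (size_t i = 0; i < p.position.size(); ++i)
  {
    const double r1 = u(rng), r2 = u(rng);
    p.velocity[i] = w * p.velocity[i]
                  + c1 * r1 * (p.bestPosition[i] - p.position[i])
                  + c2 * r2 * (globalBest[i] - p.position[i]);
    p.position[i] += p.velocity[i];
  }
}

// Tiny demo: minimize the 2-D sphere function with 20 particles.
double Sphere(const std::vector<double>& x)
{
  double s = 0.0;
  for (const double v : x) s += v * v;
  return s;
}

double RunPSODemo()
{
  std::mt19937 rng(42);
  std::uniform_real_distribution<double> init(-5.0, 5.0);
  const size_t numParticles = 20, dims = 2, iters = 200;

  std::vector<Particle> swarm(numParticles);
  std::vector<double> globalBest(dims, 0.0);
  double globalBestValue = std::numeric_limits<double>::infinity();

  for (auto& p : swarm)
  {
    p.position.resize(dims);
    p.velocity.assign(dims, 0.0);
    for (auto& xi : p.position) xi = init(rng);
    p.bestPosition = p.position;
    p.bestValue = Sphere(p.position);
    if (p.bestValue < globalBestValue)
    {
      globalBestValue = p.bestValue;
      globalBest = p.position;
    }
  }

  for (size_t t = 0; t < iters; ++t)
  {
    for (auto& p : swarm)
    {
      UpdateParticle(p, globalBest, 0.7, 1.5, 1.5, rng);
      const double v = Sphere(p.position);
      if (v < p.bestValue) { p.bestValue = v; p.bestPosition = p.position; }
      if (v < globalBestValue) { globalBestValue = v; globalBest = p.position; }
    }
  }
  return globalBestValue;
}
```

The actual class would of course follow the Optimize()/FunctionType conventions of the existing gradient descent optimizer; this is just the numerical core.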

(2) Add support for constraint-based optimization (the AugLagrangian class
has some support for equality constraints by computing the associated
penalty and keeping it under a threshold; maybe a similar approach would
work here?).

(3) Extend the functionality to multi-objective optimization. This might
require reworking the Evaluate() methods of the FunctionTypes so they
evaluate a position against multiple objective functions; perhaps a better
approach would be to add another FunctionType to be used by the MOPSO
optimizer.
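To illustrate what I mean by a new FunctionType for (3) — all names here are hypothetical, not an existing mlpack interface — Evaluate() would fill one value per objective instead of returning a single double, and the MOPSO optimizer would compare particles by Pareto dominance. Shown on Schaffer's function N.1, a standard two-objective test problem:

```cpp
#include <vector>
#include <cstddef>

// Hypothetical multi-objective FunctionType: Schaffer's N.1,
// f1(x) = x^2 and f2(x) = (x - 2)^2, minimized simultaneously.
class SchafferN1
{
 public:
  size_t NumObjectives() const { return 2; }

  void Evaluate(const std::vector<double>& x,
                std::vector<double>& objectives) const
  {
    objectives.resize(2);
    objectives[0] = x[0] * x[0];                  // f1(x) = x^2
    objectives[1] = (x[0] - 2.0) * (x[0] - 2.0);  // f2(x) = (x - 2)^2
  }
};

// Pareto dominance (minimization): a dominates b if a is no worse in
// every objective and strictly better in at least one.
bool Dominates(const std::vector<double>& a, const std::vector<double>& b)
{
  bool strictlyBetter = false;
  for (size_t i = 0; i < a.size(); ++i)
  {
    if (a[i] > b[i]) return false;
    if (a[i] < b[i]) strictlyBetter = true;
  }
  return strictlyBetter;
}
```

The nice part of a separate FunctionType is that the single-objective PSO from (1) stays untouched, and MOPSO just swaps the scalar comparison for Dominates() when updating personal and global bests.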

What are your thoughts about this? Especially regarding (2)?

I also had a quick question: is there a minimum amount of RAM or other
resources I need on my system? I am running Fedora 27 on a Core i5 with
4 GB of RAM, and I cannot keep any other application open while building
the code (I switched from the distributed tar file to a clone of the
repo), not even Atom. Should I consider a RAM upgrade?

Thanks and regards,
mlpack mailing list