Thank you very much. I have now fixed question 3 by switching to IPOPT.



I am also wondering how to define a nonlinear cost when the cost function 
needs arguments other than x. Below is some sample code to make this 
clearer:




mpc1 = add_userfcn(mpc, 'formulation', @userfcn, args);




function om = userfcn(om, mpopt, args)
    % formulation-stage callback: register the nonlinear cost on 'Pg'
    om.add_nln_cost('c', 1, fcn, {'Pg'});
end




In the above code, the nonlinear cost and its corresponding gradient and 
Hessian will be computed by a function named fcn. In my case, fcn depends not 
only on the variable set 'Pg' but also on some other data named args, as 
passed in the add_userfcn call above. However, in the documentation, I found 
that fcn does not support arguments other than the variable sets x. The only 
interface I found is [f, df, d2f] = fcn(x), which means I cannot use args 
inside fcn.
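
To make the interface concrete, here is a minimal sketch of such an fcn for a 
simple sum-of-squares cost on Pg (just an illustration, assuming x is the 'Pg' 
vector since 'Pg' is the only variable set listed):

function [f, df, d2f] = fcn(Pg)
% hypothetical cost depending only on the 'Pg' variable set
f = sum(Pg.^2);                        % scalar cost value
if nargout > 1
    df = 2 * Pg;                       % gradient w.r.t. Pg
    if nargout > 2
        d2f = 2 * speye(length(Pg));   % sparse Hessian
    end
end
end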




So if I want to use args in fcn, the only way I can come up with is to define 
args as a global variable (sketched below). Would you please kindly advise me 
how I can use args inside fcn more elegantly?
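
For concreteness, that workaround would amount to something like the following 
(a rough sketch only; my_extra_data and the field p0 are hypothetical names):

% in the calling code, before running the OPF
global args
args = my_extra_data;

% fcn then pulls the data back in alongside Pg
function [f, df, d2f] = fcn(Pg)
global args
d = Pg - args.p0;                      % args used together with Pg
f = sum(d.^2);
if nargout > 1
    df = 2 * d;
    if nargout > 2
        d2f = 2 * speye(length(Pg));
    end
end
end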




Thank you very much.




Best,

Yuanxi Wu




At 2023-04-07 23:22:44, "Ray Daniel Zimmerman" <r...@cornell.edu> wrote:

1) Yes, that is correct.
2) Yes, the individual cost terms are simply summed together. And I believe a 
negative definite Q should be fine.
3) This is a much harder question. I’m afraid I don’t have a definitive answer 
to why the matrix for the Newton step in the MIPS primal-dual interior point 
method can become so ill-conditioned. And I was just going to suggest … You 
might try a different solver, like IPOPT, Knitro or fmincon. They all use 
interior point methods, but they have additional features to help them get 
around this sort of ill-conditioning. My guess is just that fmincon is a bit 
more robust than MIPS.
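
For example, something like this (a minimal sketch, assuming a standard 
MATPOWER setup with mpc already defined and the chosen solver installed):

% select a different AC OPF solver via the MATPOWER options
mpopt = mpoption('opf.ac.solver', 'IPOPT');   % or 'FMINCON', 'KNITRO'
results = runopf(mpc, mpopt);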


    Ray




On Apr 7, 2023, at 2:02 AM, seuyxw <seuw...@163.com> wrote:


As for question 3), I tried to solve the problem with the default MIPS solver 
and it terminated with a "numerically failed" message. However, if I change the 
solver to FMINCON, it converges. So does that mean the issue in question 3) has 
something to do with the choice of solver?
(Sorry to bother you so much.)




At 2023-04-07 11:08:06, "seuyxw" <seuw...@163.com> wrote:

Dear all,


I am trying to use callback functions in the formulation stage to implement a 
user-defined quadratic objective. Specifically, my objective function does not 
include the generation cost; it only maximizes the Euclidean distance between 
the decision variables and several given points (a detailed expression is 
given in the picture below).


To achieve this, I set all the coefficients in mpc.gencost to 0, and I use the 
om.add_quad_cost function in the formulation stage to add the quadratic 
objective. My questions are:


1) Is setting all the gencost coefficients to 0 the right way to ignore the 
generation cost?
2) If I call om.add_quad_cost several times in my userfcn, are the cost terms 
simply summed into the objective? And can the Q matrix be negative definite? 
In my case, I want to maximize the Euclidean distance, so Q will be negative 
definite (see the sketch after these questions).
3) Sometimes I encounter this warning: "Matrix is close to singular or badly 
scaled. Results may be inaccurate. RCOND = 2.691450e-17." However, this 
optimal power flow should have at least one solution, because I get one if I 
simply set the objective function to 0. Do you think this warning is caused by 
my objective function, and if so, does it arise from the nonconvexity of my 
quadratic objective?
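
To illustrate what I mean in question 2), here is a rough sketch with a single 
hypothetical target point p0 rather than my actual data (assuming the cost 
form 1/2*x'*Q*x + c'*x + k used by add_quad_cost):

% inside the formulation-stage userfcn
mpc = om.get_mpc();                    % retrieve the case from the OPF model
ng  = size(mpc.gen, 1);                % number of generators
p0  = ones(ng, 1);                     % hypothetical given point
Q   = -2 * speye(ng);                  % negative definite
c   = 2 * p0;
k   = -(p0' * p0);
om.add_quad_cost('dist1', Q, c, k, {'Pg'});   % adds -||Pg - p0||^2 (maximizes distance)
om.add_quad_cost('dist2', Q, c, k, {'Pg'});   % a second term, if summed as I expect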


Any help will be appreciated. Thank you in advance.


Best,
Yuanxi Wu


<image.png>

