I really cannot understand what you are saying.

> On Sep 1, 2015, at 5:14 PM, Justin Chang <[email protected]> wrote:
> 
> So in VI, the i-th component/cell/point of x is "corrected" if it is below 
> lb_i or above ub_i. All other x_i that satisfy their respective bounds remain 
> untouched. Hence only the corrected components will violate the original 
> equation.

   I think what you say above is backwards. First of all, x_i would never be < 
lb_i or > ub_i since it is constrained to be between them. It is just that 
with a VI, IF x_i is at lb_i or ub_i, then F_i(x) is allowed to be nonzero 
(that is, that equation is allowed to be violated).

> 
> In optimization, every single x_i will be "shifted" such that every single 
> x_i meets lb_i and ub_i. Hence it's possible all components will violate the 
> original equation.

  No. With constraints in optimization, they must all be satisfied before one 
can say that one has a solution.

  With VIs the F(x) is NOT a constraint and should not be thought of as a 
constraint; only the bounds are constraints. Or think of F(x) as a special kind 
of constraint (that doesn't exist with optimization) that is allowed to be 
violated in a very special way (when its x_i variable is on a bound).
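
  Written out componentwise (with the usual complementarity sign convention), x 
solves the bound-constrained VI when, for each i,

    lb_i < x_i < ub_i   ==>   F_i(x)  = 0
    x_i  = lb_i         ==>   F_i(x) >= 0
    x_i  = ub_i         ==>   F_i(x) <= 0

so F_i(x) may be nonzero, and then only with a definite sign, precisely when x_i 
sits on one of its bounds.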

  Barry


> 
> Is that the general idea here?
> 
> On Tue, Sep 1, 2015 at 12:48 PM, Barry Smith <[email protected]> wrote:
> 
>    I think we are talking past each other.
> 
>    With bound-constrained VIs there is no optimization; there is an equation 
> F(x) = 0 (where F may be linear or nonlinear) and constraints a <= x <= b. 
> With VIs the equation F_i(x) is simply allowed to not be satisfied if x_i is 
> on a bound (that is, x_i = a_i or x_i = b_i).
> 
>    With optimization, if you have an equality constraint and inequality 
> constraints, and satisfying an inequality constraint FORCES an equality 
> constraint to not be satisfied, then the constraints are not compatible and 
> the problem isn't properly posed.
> 
>    Barry
> 
> 
> > On Sep 1, 2015, at 4:15 AM, Justin Chang <[email protected]> wrote:
> >
> > But if I add those linear equality constraint equations to my original 
> > problem, would they not be satisfied anyway? Say I add this to my weak form:
> >
> > Ax = b
> >
> > But once I subject x to some bounded constraints, Ax != b. Unless I add 
> > some sort of penalty where extra weighting is added to this property...
> >
> > On Tue, Sep 1, 2015 at 3:02 AM, Matthew Knepley <[email protected]> wrote:
> > On Tue, Sep 1, 2015 at 3:46 AM, Justin Chang <[email protected]> wrote:
> > I would like to simultaneously enforce both the discrete maximum principle 
> > and local mass/species balance, because even if a locally conservative scheme 
> > like RT0 is used, as soon as these bounded constraints are applied, I lose 
> > the mass balance.
> >
> > What I am saying is, can't you just add "linear equality constraints" as 
> > more equations?
> >
> >   Thanks,
> >
> >     Matt
> >
> > On Tue, Sep 1, 2015 at 2:33 AM, Matthew Knepley <[email protected]> wrote:
> > On Tue, Sep 1, 2015 at 3:11 AM, Justin Chang <[email protected]> wrote:
> > Barry,
> >
> > That's good to know thanks.
> >
> > On a related note, is it possible for VI to one day include linear equality 
> > constraints?
> >
> > How are these different from just using more equations?
> >
> >   Thanks,
> >
> >     Matt
> >
> > Thanks,
> > Justin
> >
> > On Mon, Aug 31, 2015 at 7:13 PM, Barry Smith <[email protected]> wrote:
> >
> > > On Aug 31, 2015, at 7:36 PM, Justin Chang <[email protected]> wrote:
> > >
> > > Coming back to this,
> > >
> > > Say I now want to ensure the DMP for advection-diffusion equations. The 
> > > linear operator is now asymmetric and non-self-adjoint (assuming I do 
> > > something like SUPG or finite volume), meaning I cannot simply solve this 
> > > problem with TAO's optimization solvers without some manipulation (e.g. 
> > > normalizing the equations). Does this statement also hold true for SNESVI?
> >
> >   SNESVI doesn't care about symmetry, etc.
> >
> > >
> > > Thanks,
> > > Justin
> > >
> > > On Fri, Apr 3, 2015 at 7:38 PM, Barry Smith <[email protected]> wrote:
> > >
> > > > On Apr 3, 2015, at 7:35 PM, Justin Chang <[email protected]> wrote:
> > > >
> > > > I guess I will have to write my own code then :)
> > > >
> > > > I am not all that familiar with Variational Inequalities at the moment, 
> > > > but if my Jacobian is symmetric and positive definite and I only have 
> > > > lower and upper bounds, doesn't the problem simply reduce to a convex 
> > > > optimization problem? That is, would SNES act as if it were Tao?
> > >
> > >   Yes, I think that is essentially correct.
> > >
> > >   Barry
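
  For reference, that equivalence is just the first-order (KKT) optimality 
conditions: if A is symmetric positive definite and F(x) = A*x + b, then

    min  1/2*x^T*A*x + x^T*b    subject to  lb <= x <= ub

has, componentwise with gradient g = A*x + b, the optimality conditions

    lb_i < x_i < ub_i   ==>   g_i  = 0
    x_i  = lb_i         ==>   g_i >= 0
    x_i  = ub_i         ==>   g_i <= 0

which are exactly the VI conditions for F. So in the symmetric positive definite 
case the two formulations share the same solution.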
> > >
> > > >
> > > > On Fri, Apr 3, 2015 at 6:35 PM, Barry Smith <[email protected]> wrote:
> > > >
> > > >   Justin,
> > > >
> > > >    We haven't done anything with TS to handle variational inequalities. 
> > > > So you can write your own backward Euler (outside of TS) that 
> > > > solves each time-step problem either as 1) an optimization problem 
> > > > using Tao or 2) as a variational inequality using SNES.
> > > >
> > > >    More adventurously you could look at the TSTHETA code in TS (which 
> > > > is a general form that includes Euler, Backward Euler, and 
> > > > Crank-Nicolson) and see if you can add the constraints to the SNES 
> > > > problem that is solved; in theory this is straightforward but it would 
> > > > require understanding the current code (which Jed, of course, overwrote 
> > > > :-). I think you should do this.
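
  A minimal sketch of option 2), assuming FormFunction()/FormJacobian() compute 
the backward Euler residual and Jacobian for a single step, and that u, uold, r, 
xl, xu, J, ctx, and nsteps are created elsewhere (error checking omitted):

    #include <petscsnes.h>

    SNES     snes;
    PetscInt step;

    SNESCreate(PETSC_COMM_WORLD, &snes);
    SNESSetFunction(snes, r, FormFunction, &ctx);  /* F(u) = (u - uold)/dt - rhs(u) */
    SNESSetJacobian(snes, J, J, FormJacobian, &ctx);
    SNESSetType(snes, SNESVINEWTONRSLS);           /* reduced-space active-set VI solver */
    SNESVISetVariableBounds(snes, xl, xu);         /* e.g. VecSet(xl, 0.0); VecSet(xu, PETSC_INFINITY); */
    SNESSetFromOptions(snes);

    for (step = 0; step < nsteps; step++) {
      VecCopy(u, uold);           /* save u^n; FormFunction reads it through ctx */
      SNESSolve(snes, NULL, u);   /* u becomes u^{n+1}, honoring xl <= u <= xu */
    }
    SNESDestroy(&snes);

  The same loop would serve for option 1) with a Tao object and TaoSolve() in 
place of the SNES calls, and the TSTHETA route would presumably amount to 
attaching the same bounds to the SNES obtained from TSGetSNES().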
> > > >
> > > >   Barry
> > > >
> > > >
> > > > > On Apr 3, 2015, at 12:31 PM, Justin Chang <[email protected]> wrote:
> > > > >
> > > > > I am solving the following anisotropic transient diffusion equation 
> > > > > subject to 0 bounds:
> > > > >
> > > > > du/dt = div[D*grad[u]] + f
> > > > >
> > > > > Where the dispersion tensor D(x) is symmetric and positive definite. 
> > > > > This formulation violates the discrete maximum principles, so one of 
> > > > > the ways to ensure nonnegative concentrations is to employ convex 
> > > > > optimization. I am following the procedures in Nakshatrala and 
> > > > > Valocchi (2009) JCP and Nagarajan and Nakshatrala (2011) IJNMF.
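
  Concretely (assuming a standard Galerkin discretization with mass matrix M, 
stiffness matrix K arising from -div[D*grad[.]], load vector f^{n+1}, and 
backward Euler in time), each time step of that approach amounts to a 
bound-constrained quadratic program:

    A = M/dt + K,    b = (M/dt)*u^n + f^{n+1}

    u^{n+1} = argmin  1/2*u^T*A*u - u^T*b    subject to  u >= 0

  Since M is symmetric positive definite and K is symmetric positive 
semidefinite (because D is symmetric positive definite), A is symmetric positive 
definite, so the program is convex.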
> > > > >
> > > > > The Variational Inequality method gives what I want for my 
> > > > > transient case, but what if I want to implement the Tao methodology 
> > > > > in TS? That is, what TS functions do I need to set up steps a) 
> > > > > through e) for each time step? (Also, the Jacobian remains the same 
> > > > > for all time steps, so I would only need to call this once.) Normally I would just 
> > > > > call TSSolve() and let the libraries and functions do everything, but 
> > > > > I would like to incorporate TaoSolve into every time step.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > --
> > > > > Justin Chang
> > > > > PhD Candidate, Civil Engineering - Computational Sciences
> > > > > University of Houston, Department of Civil and Environmental 
> > > > > Engineering
> > > > > Houston, TX 77004
> > > > > (512) 963-3262
> > > > >
> > > > > On Thu, Apr 2, 2015 at 6:53 PM, Barry Smith <[email protected]> 
> > > > > wrote:
> > > > >
> > > > >   An alternative approach is for you to solve it as a (non)linear 
> > > > > variational inequality. See src/snes/examples/tutorials/ex9.c
> > > > >
> > > > >   How you should proceed depends on your long term goal. What problem 
> > > > > do you really want to solve? Is it really a linear time dependent 
> > > > > problem with 0 bounds on U? Can the problem always be represented as 
> > > > > an optimization problem easily? What are, and what will be, the 
> > > > > properties of K? For example, if K is positive definite then likely 
> > > > > the bounds will remain true without explicitly providing the 
> > > > > constraints.
> > > > >
> > > > >   Barry
> > > > >
> > > > > > On Apr 2, 2015, at 6:39 PM, Justin Chang <[email protected]> wrote:
> > > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > I have a two-part question regarding the integration of the 
> > > > > > following optimization problem
> > > > > >
> > > > > > min 1/2 u^T*K*u + u^T*f
> > > > > > S.T. u >= 0
> > > > > >
> > > > > > into SNES and TS.
> > > > > >
> > > > > > 1) For SNES, assuming I am working with a linear FE equation, I 
> > > > > > have the following algorithm/steps for solving my problem:
> > > > > >
> > > > > > a) Set an initial guess x
> > > > > > b) Obtain residual r and Jacobian A through the functions 
> > > > > > SNESComputeFunction() and SNESComputeJacobian(), respectively
> > > > > > c) Form vector b = r - A*x
> > > > > > d) Set Hessian equal to A, gradient to A*x, objective function 
> > > > > > value to 1/2*x^T*A*x + x^T*b, and variable (lower) bounds to a zero 
> > > > > > vector
> > > > > > e) Call TaoSolve
> > > > > >
> > > > > > This works well at the moment, but my question is: is there a more 
> > > > > > "efficient" way of doing this? Because with my current setup, I am 
> > > > > > making a rather bold assumption that my problem would converge in 
> > > > > > one SNES iteration without the bounded constraints and does not 
> > > > > > have any unexpected nonlinearities.
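
  For what it's worth, a minimal sketch of steps a)-e) with the (2015-era) Tao 
interface could look like the following; QPCtx and FormObjGrad are just 
illustrative names, A and b are the matrix and vector from steps b) and c), and 
the gradient of 1/2*x^T*A*x + x^T*b is A*x + b (error checking omitted):

    #include <petsctao.h>

    typedef struct { Mat A; Vec b; } QPCtx;

    /* objective f(x) = 1/2 x^T*A*x + x^T*b and gradient g = A*x + b */
    PetscErrorCode FormObjGrad(Tao tao, Vec x, PetscReal *f, Vec g, void *ptr)
    {
      QPCtx       *qp = (QPCtx*)ptr;
      PetscScalar  xg, xb;

      MatMult(qp->A, x, g);             /* g  = A*x            */
      VecAXPY(g, 1.0, qp->b);           /* g  = A*x + b        */
      VecDot(x, g, &xg);                /* xg = x^T*(A*x + b)  */
      VecDot(x, qp->b, &xb);            /* xb = x^T*b          */
      *f = 0.5*PetscRealPart(xg + xb);  /* 1/2 x^T*A*x + x^T*b */
      return 0;
    }

    /* driver: x is the initial guess from a); xl = 0 and xu = +infinity */
    Tao   tao;
    QPCtx qp;   /* qp.A and qp.b filled from steps b) and c) */

    TaoCreate(PETSC_COMM_WORLD, &tao);
    TaoSetType(tao, TAOBLMVM);                 /* bound-constrained quasi-Newton */
    TaoSetInitialVector(tao, x);
    TaoSetVariableBounds(tao, xl, xu);         /* VecSet(xl, 0.0); VecSet(xu, PETSC_INFINITY); */
    TaoSetObjectiveAndGradientRoutine(tao, FormObjGrad, &qp);
    TaoSolve(tao);                             /* step e) */
    TaoDestroy(&tao);

  A Newton-type bound-constrained solver such as TAOTRON could presumably be 
used instead by also providing the (constant) Hessian A via 
TaoSetHessianRoutine().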
> > > > > >
> > > > > > 2) How would I go about doing the above for time-stepping problems? 
> > > > > > At each time step, I want to solve a convex optimization subject to 
> > > > > > the lower bounds constraint. I plan on using backward Euler, and my 
> > > > > > resulting Jacobian should still be compatible with the above 
> > > > > > optimization problem.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > --
> > > > > > Justin Chang
> > > > > > PhD Candidate, Civil Engineering - Computational Sciences
> > > > > > University of Houston, Department of Civil and Environmental 
> > > > > > Engineering
> > > > > > Houston, TX 77004
> > > > > > (512) 963-3262
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Justin Chang
> > > > > PhD Candidate, Civil Engineering - Computational Sciences
> > > > > University of Houston, Department of Civil and Environmental 
> > > > > Engineering
> > > > > Houston, TX 77004
> > > > > (512) 963-3262
> > >
> > >
> >
> >
> >
> >
> >
> > --
> > What most experimenters take for granted before they begin their 
> > experiments is infinitely more interesting than any results to which their 
> > experiments lead.
> > -- Norbert Wiener
> >
> 
> 
