I need to optimize a multivariate function f(w, x, y, z, ...) under an absolute
value constraint. For instance:
min { (2x+y) (w-z) }
under the constraint:
|w| + |x| + |y| + |z| = 1.0 .
Is there any R function that does this? Thank you for your help!
Phil Xiang
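One possible approach (a minimal sketch, not advice from the thread; the starting point is an arbitrary assumption): absorb the absolute-value constraint by rescaling an unconstrained vector so its absolute values sum to 1, then hand the wrapped objective to plain optim().

```r
# Minimal sketch: enforce |w|+|x|+|y|+|z| = 1 by rescaling an
# unconstrained vector u before evaluating the objective.
f <- function(v) (2*v[2] + v[3]) * (v[1] - v[4])   # v = (w, x, y, z)

g <- function(u) {
  v <- u / sum(abs(u))        # now sum(abs(v)) == 1 by construction
  f(v)
}

res <- optim(c(1, 1, 1, 1), g)             # Nelder-Mead by default
v   <- res$par / sum(abs(res$par))         # recover the feasible point
sum(abs(v))                                # 1, up to rounding
```

The rescaling is not differentiable at u = 0, so a derivative-free method is the safer choice here.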
this should be possible in the lasso2 package.
url:   www.econ.uiuc.edu/~roger    Roger Koenker
email: [EMAIL PROTECTED]           Department of Economics
vox:   217-333-4558                University of Illinois
fax:   217-244-6678                Champaign, IL 61820
On 9/7/07, Phil Xiang [EMAIL PROTECTED] wrote:
I need to optimize a multivariate function f(w, x, y, z, ...) under an
absolute value constraint. [...] Is there any R function that does this?
Try this.
1. Following Ben, remove the Randalstown point and reset the levels of the
Location factor.
2. Then replace solve with ginv so it uses the generalized inverse to calculate
the Hessian:
alan2 <- subset(alan, subset = Location != "Randalstown")
alan2$Location <- factor(alan2$Location)   # reset the levels
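A minimal illustration of the solve-to-ginv swap (the toy matrix below is an assumption, not the poster's data):

```r
library(MASS)                          # ginv() lives in MASS

H <- matrix(c(2, 4, 1, 2), 2, 2)       # singular 2x2: det(H) == 0
# solve(H) would fail here; ginv(H) returns the Moore-Penrose
# generalized inverse instead, which always exists.
Hi <- ginv(H)
max(abs(H %*% Hi %*% H - H))           # effectively zero
```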
Hello Folks,
Very new to R so bear with me; running 5.2 on XP. I am trying to fit a
zero-inflated negative binomial regression with placental scar count as the
dependent variable. Lactation, location, number of tick larvae present, and
mass of mouse are the independent variables. Dataframe and attributes below:
Location
Lac and Lacfac are the same.
On 8/21/07, Alan Harrison [EMAIL PROTECTED] wrote:
Hello Folks,
Very new to R so bear with me, running 5.2 on XP. Trying to do a
zero-inflated negative binomial regression [...]
(Hope this gets threaded properly. Sorry if it doesn't.)
Gabor: Lac and Lacfac being the same is irrelevant, wouldn't
produce NAs (but would produce something like a singular Hessian
and maybe other problems) -- but they're not even specified in this
model.
The bottom line is that you
Dear R users,
Please imagine an optimization problem:
minimize the sum S1 + S2
Subject to : y - x = a + S1
x - y = a + S2
and we want to add two more constraints:
y - x = b - S3
x - y = b - S4
where a is a small
On 7/16/07, massimiliano.talarico [EMAIL PROTECTED] wrote:
I need a suggestion for obtaining the max of this function:
Max x1*0.021986+x2*0.000964+x3*0.02913
with these conditions:
x1+x2+x3=1;
radq((x1*0.114434)^2+(x2*0.043966)^2+(x3*0.100031)^2)=0.04;
x1 >= 0;
x1 <= 1;
x2 >= 0;
x2 <= 1;
x3 >= 0;
x3 <= 1;
I'm sorry, the function is
sqrt((x1*0.114434)^2+(x2*0.043966)^2+(x3*0.100031)^2)=0.04;
Do you have any suggestions?
Thanks,
Massimiliano
What is radq?
G'day Massimiliano,
On Mon, 16 Jul 2007 22:49:32 +0200
massimiliano.talarico [EMAIL PROTECTED] wrote:
Dear all,
I need a suggestion for obtaining the max of this function:
Max x1*0.021986+x2*0.000964+x3*0.02913
with these conditions:
x1+x2+x3=1;
My apologies, didn't see the boundary constraints. Try this one...
f <- function(x)
  (sqrt((x[1]*0.114434)^2 + (x[2]*0.043966)^2 + (x[3]*0.100031)^2) - 0.04)^2
optim(par = rep(0,3), f, lower = rep(0,3), upper = rep(1,3), method = "L-BFGS-B")
and check ?optim
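Both equality constraints can also be folded into one penalized objective and maximized directly (a sketch building on the idea above, not Berwin's code; the penalty weight 1e4 is an arbitrary assumption):

```r
obj <- function(x) {
  lin  <- x[1]*0.021986 + x[2]*0.000964 + x[3]*0.02913
  pen1 <- (sum(x) - 1)^2                                   # x1+x2+x3 = 1
  pen2 <- (sqrt((x[1]*0.114434)^2 + (x[2]*0.043966)^2 +
                (x[3]*0.100031)^2) - 0.04)^2               # sqrt constraint
  -lin + 1e4 * (pen1 + pen2)                               # negate: optim minimizes
}
res <- optim(rep(1/3, 3), obj, lower = rep(0, 3), upper = rep(1, 3),
             method = "L-BFGS-B")
res$par          # approximately feasible maximizer
```

A larger penalty weight tightens constraint satisfaction at the cost of a harder optimization surface.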
--- massimiliano.talarico [EMAIL PROTECTED] wrote:
f <- function(x)
  (sqrt((x[1]*0.114434)^2 + (x[2]*0.043966)^2 + (x[3]*0.100031)^2) - 0.04)^2
optim(c(0,0,0), f)
see ?optim for details on arguments, options, etc.
Thanks for your suggestions, but I need to obtain the MAX of
this function:
Max x1*0.021986+x2*0.000964+x3*0.02913
with these conditions:
x1+x2+x3=1;
sqrt((x1*0.114434)^2+(x2*0.043966)^2+(x3*0.100031)^2)=0.04;
x1 >= 0;
x2 >= 0;
x3 >= 0;
Thanks again,
Massimiliano
My apologies, I read the post over too quickly (even the second time).
It's been a while since I've played around with anything other than box
constraints, but this one is conducive to a brute-force approach (employing
Berwin's suggestions). The pseudo-code would look something like this:
delta <-
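The pseudo-code was truncated above, so the grid step, tolerance, and loop structure below are assumptions about what it contained:

```r
# Brute force: walk a grid over the simplex x1 + x2 + x3 = 1, keep points
# that satisfy the sqrt constraint within a tolerance, and track the best
# value of the linear objective.
delta <- 0.005
best  <- -Inf; best_x <- NULL
for (x1 in seq(0, 1, by = delta)) {
  for (x2 in seq(0, 1 - x1, by = delta)) {
    x3 <- 1 - x1 - x2
    s  <- sqrt((x1*0.114434)^2 + (x2*0.043966)^2 + (x3*0.100031)^2)
    if (abs(s - 0.04) < 0.001) {               # constraint, within tolerance
      val <- x1*0.021986 + x2*0.000964 + x3*0.02913
      if (val > best) { best <- val; best_x <- c(x1, x2, x3) }
    }
  }
}
best_x
```

Coarse, but it gives a feasible starting point that a local optimizer can refine.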
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of massimiliano.talarico
Sent: Monday, July 16, 2007 4:50 PM
To: r-help
Subject: [R] Optimization
Dear all,
I need a suggestion
G'day Moshe,
On Tue, 17 Jul 2007 17:32:52 -0700 (PDT)
Moshe Olshansky [EMAIL PROTECTED] wrote:
This is partially true since both the function to be
maximized and the constraint are non-linear.
I am not sure what your definition of non-linear is, but in my book,
and I believe by most
You are right!!!
For some strange reason I substituted ^
(exponentiation) for *, so the problem became
Max x1^0.021986+x2^0.000964+x3^0.02913
with these conditions:
x1+x2+x3=1;
sqrt((x1^0.114434)^2+(x2^0.043966)^2+(x3^0.100031)^2)=0.04;
which is clearly non-linear.
Your advice has been a great help. Thanks a lot to you all.
Hi, I would like to minimize the value of x1-x2. x2 is a fixed value of 0.01;
x1 is the 0.7 quantile of a normal distribution with mean 0.0032 and standard
deviation x, and the changing value should be x. The initial value for x is
0.0207. I am using the following code, but it does not work.
fr <- function(x) {
You don't need optimization for the solution to your problem. You
just need an understanding of the meaning of qnorm() and some simple algebra.
Try: x <- (0.01-0.0032)/qnorm(0.7,0,1)
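The algebra: qnorm(0.7, m, s) = m + s * qnorm(0.7), so setting the 0.7 quantile equal to 0.01 and solving for s gives the closed form directly, which is easy to verify:

```r
x <- (0.01 - 0.0032) / qnorm(0.7)        # qnorm(0.7, 0, 1) is just qnorm(0.7)
qnorm(0.7, mean = 0.0032, sd = x)        # recovers 0.01
```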
On 18-Jun-07 16:01:03, livia wrote:
Hi, I would like to minimize the value of x1-x2, x2 is a fixed
value of 0.01 [...]
I'm a bit puzzled by the
From the help page:
Note:
'optim' will work with one-dimensional 'par's, but the default
method does not work well (and will warn). Use 'optimize'
instead.
Next, there is a constraint of x >= 0 that you are not imposing.
Finally, it is easy to see that qnorm(0.7, 0.0032, x) is
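Following that Note, the one-dimensional problem goes straight into optimize() (the objective below is an assumption about what livia's fr computed):

```r
fr  <- function(x) (qnorm(0.7, mean = 0.0032, sd = x) - 0.01)^2
opt <- optimize(fr, interval = c(1e-6, 1))   # search only over x > 0
opt$minimum                                  # agrees with the closed form
```

The positive lower end of the interval also takes care of the x >= 0 constraint.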
Hi,
my first guess is that the algorithm returns a negative value at some
step - recall that you start from 0.0207! This negative value is then
passed as the standard deviation to qnorm, and that cannot work...
My guess is based on a small experiment where I tried a different
starting point (.02 is so
Dear all,
I would need to maximize a self-defined 'target' function (see below) with
respect to theta, where v follows a log-normal distribution with mean 'mu(x)'
and a constant variance. For each v drawn from its distribution, one maximized
value and optimal theta are produced. I'd like
Good day,
Here I was trying to write code for GARCH(1,1).
As the GARCH problem is more or less an optimization
problem, I also tried to get the algorithm for the nlminb
function. What I saw is that if I use the function
'nlminb' I can easily get the estimates of the parameters.
But any other function is not
Optimizing GARCH likelihoods is notoriously difficult.
I suspect that you will find 'nlminb' to be less than perfect,
though it is relatively good. In particular you are likely
to see different behavior depending on whether or not the
data are in percent.
A reference is Winker and Maringer (2006)
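For concreteness, a minimal Gaussian GARCH(1,1) negative log-likelihood fitted with nlminb; the simulated data, starting values, and bounds are all assumptions, and a real fit needs more care (variance initialization, scaling, stationarity constraints):

```r
# Negative log-likelihood of a Gaussian GARCH(1,1): par = (omega, alpha, beta)
garch_nll <- function(par, r) {
  omega <- par[1]; alpha <- par[2]; beta <- par[3]
  n <- length(r)
  h <- numeric(n)
  h[1] <- var(r)                      # crude initialization of the variance
  for (t in 2:n) h[t] <- omega + alpha * r[t - 1]^2 + beta * h[t - 1]
  0.5 * sum(log(h) + r^2 / h)         # up to an additive constant
}

set.seed(1)
r   <- rnorm(500, sd = 0.01)          # placeholder returns, not real data
fit <- nlminb(c(1e-5, 0.1, 0.8), garch_nll, r = r,
              lower = c(1e-8, 0, 0), upper = c(Inf, 1, 1))
fit$par
```

Rescaling the data (e.g. to percent) changes the likelihood surface, which is one reason behavior differs between raw and percent returns.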
Have you considered taking logarithms of the expression you
mentioned:
log(Yield) = a1*log(A) + b1*log(B) + c1*log(C) + ...
where a1 = a/(a+b+...), etc. This model has two constraints not present
in ordinary least squares: First, the intercept is assumed to be zero.
Second, the
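The sum-to-one constraint can be imposed inside lm() by differencing against one of the logged variables (simulated data below; the true weights 0.5/0.25/0.25 are an assumption for illustration):

```r
# If log(Yield) = a1*log(A) + b1*log(B) + c1*log(C) with a1 + b1 + c1 = 1,
# then log(Yield) - log(C) = a1*(log(A) - log(C)) + b1*(log(B) - log(C)),
# an ordinary zero-intercept regression in two terms.
set.seed(42)
A <- runif(30, 1, 10); B <- runif(30, 1, 10); C <- runif(30, 1, 10)
Yield <- (A^2 * B * C)^(1/4) * exp(rnorm(30, sd = 0.05))  # a1 = .5, b1 = c1 = .25

fit <- lm(I(log(Yield) - log(C)) ~ 0 + I(log(A) - log(C)) + I(log(B) - log(C)))
a1 <- unname(coef(fit)[1])
b1 <- unname(coef(fit)[2])
c1 <- 1 - a1 - b1                     # recovered from the constraint
c(a1, b1, c1)
```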
Dear R-list,
I'm trying to estimate the relative importance of 6 environmental variables
in determining clam yield. To estimate clam yield, a previous work used the
function Yield = (A^a * B^b * C^c ...)^(1/(a+b+c+...)), where A, B, C, ... are
the values of the environmental variables and the weights a, b, c...
Does R have packages for such multi-objective optimization problems?
The rgenoud (R-GENetic Optimization Using Derivatives) package
allows for multiple-objective optimization problems. See the 'lexical'
option, which searches for the Pareto front. The package is written
for NP-hard problems (but
Regarding multi-objective optimization, I just got 0 hits from
RSiteSearch("multi-objective optimization") and
RSiteSearch("multiobjective optimization"). However, it shouldn't be
too difficult to write a wrapper function to blend other functions
however you would like, then use optim or
On Wed, 01 Mar 2006 13:07:07 -0800, Berton Gunter wrote:
2) That the mean and sd can be simultaneously optimized as you describe --
what if the subset with maximum mean also has bigger than minimal sd?
Then you have two choices:
1) balance the two objectives with weights, according to the
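Choice 1 amounts to a weighted-sum scalarization; a one-line sketch (the weight 0.7 is an arbitrary assumption):

```r
# Larger w favors a high mean; (1 - w) penalizes the spread.
score <- function(vals, w = 0.7) w * mean(vals) - (1 - w) * sd(vals)
score(c(1, 2, 3))    # 0.7 * 2 - 0.3 * 1 = 1.1
```

Sweeping w from 0 to 1 traces out (part of) the Pareto front between the two objectives.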
Dear R community,
I have a dataframe with 500,000 rows and 102 columns. The rows
represent spatial polygons, some of which overlap others (i.e., not
all rows are independent of each other).
Given a particular row, the first column contains a unique RowID.
The second column contains the Variable
From: Mark
Sent: Wednesday, March 01, 2006 12:40 PM
To: r-help@stat.math.ethz.ch
Subject: [R] Optimization problem: selecting independent rows to maximize the mean
Package lpSolve might help.
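A toy version of the row-selection problem in lpSolve terms (the values and overlap structure below are assumptions; the real problem would have one binary variable per row and one constraint per overlapping pair):

```r
library(lpSolve)

value <- c(5, 4, 3, 2)                 # per-row variable of interest
const <- rbind(c(1, 1, 0, 0),          # rows 1 and 2 overlap: pick at most one
               c(0, 0, 1, 1))          # rows 3 and 4 overlap: pick at most one

sol <- lp("max", value, const, rep("<=", 2), rep(1, 2), all.bin = TRUE)
sol$solution                           # 1 0 1 0: rows 1 and 3 selected
```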
I have to estimate the following model for several
groups of observations:
y(1-y) = p[1]*(x^2-y) + p[2]*y*(x-1) + p[3]*(x-y)
with constraints :
p[1]+p[3] = 1
p[1]+p[2]+p[3]+1 = 0
p[3] = 0
I use the following code:
func <- sum((y*(1-y) - (p[1]*(x^2-y) + p[2]*y*(x-1) + p[3]*(x-y)))^2)
estim <-
If I understand this correctly the variables over which
you are optimizing are p[1], p[2] and p[3] whereas x and y
are fixed and known during the optimization. In that case it's
a linear programming problem and you could use the lpSolve
library which would allow the explicit specification of the
The precision is not a problem, only the display, as Uwe indicated.
Consider the following:
(seq(25.5,25.6,length=20)-25.5)[c(1, 2, 19, 20)]
[1] 0.00e+00 5.25e-07 9.50e-02 1.00e-01
?options
options(digits=20)
seq(25.5,25.6,length=20)[c(1, 2,
Part of the R culture is a statement by Simon Blomberg, immortalized
in library(fortunes) as: "This is R. There is no if. Only how."
I can't see now how I would automate a complete solution to your
problem in general. However, given a specific g(x, n), I would start by
Spencer: Thank you for the helpful suggestions.
I have another question following some code I wrote. The function below
gives a crude approximation for the x of interest (that value of x such that
g(x,n) is less than 0 for all n).
# // btilda optimize g(n,x) for some fixed x, and then
I'm trying to ascertain whether or not the facilities of R are sufficient for
solving an optimization problem I've come across. Because of my limited
experience with R, I would greatly appreciate some feedback from more frequent
users.
The problem can be delineated as such:
A utility
Thanx Dimitris, Patrick and Berwin!
For other people interested in this problem, here are two valid
solutions that work.
1) Re-parameterize e.g.,
EM <- c(100,0,0,0,100,0,0,0,100)
W <- array(EM, c(3,3))
d <- c(10, 20, 70)
fn <- function(x){
  x <- exp(x) / sum(exp(x))
  r <-
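The function body was cut off above; a hedged completion (the residual-sum-of-squares objective is an assumption based on the problem statement):

```r
C <- array(c(100,0,0, 0,100,0, 0,0,100), c(3,3))
d <- c(10, 20, 70)

fn <- function(u) {
  x <- exp(u) / sum(exp(u))        # softmax: x > 0 and sum(x) == 1
  sum((C %*% x - d)^2)             # least-squares misfit
}

res  <- optim(c(0, 0, 0), fn)
xhat <- exp(res$par) / sum(exp(res$par))
xhat                                # fractions summing to 1
```

The softmax re-parameterization turns the constrained problem into an unconstrained one, at the cost of losing exact zeros in the solution.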
Dear R-ians,
I want to perform a linear unmixing of image pixels into fractions of
pure endmembers. Therefore I need to solve a constrained linear
least-squares problem that looks like:
min ||Cx - d||^2 where sum(x) = 1.
I have a 3x3 matrix C, containing the values for the endmembers, and I have
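This is a quadratic program, so one direct route (a sketch, assuming the diagonal C and d that appear elsewhere in the thread) is quadprog::solve.QP, writing ||Cx - d||^2 as (1/2) x'(2C'C)x - (2C'd)'x:

```r
library(quadprog)

C <- diag(100, 3)                      # endmember matrix (toy values)
d <- c(10, 20, 70)

Dmat <- 2 * crossprod(C)               # 2 C'C
dvec <- as.vector(2 * crossprod(C, d)) # 2 C'd
Amat <- matrix(1, 3, 1)                # one constraint column: sum(x)
sol  <- solve.QP(Dmat, dvec, Amat, bvec = 1, meq = 1)  # meq = 1: equality
sol$solution                           # fractions summing to 1
```

Adding x >= 0 constraints just means appending identity columns to Amat and zeros to bvec.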
To new R-Programmers:
This long-winded note gives an example of optimizing in R that should
only be of interest to newcomers to the language. Others should ignore.
My hope is that it might help illuminate some basic notions of code
improvement, looping, and vectorization in R. However, I welcome
Dear All;
I tried to use fitdistr() in the MASS library to fit a mixture
distribution of the 3-parameter Weibull, but the optimization failed.
Looking at the source code, it seems to indicate the error occurs at
if (res$convergence > 0)
    stop("optimization failed").
I don't think that is the right density: haven't you forgotten I(x > a)?
So you need a constraint on a in the optimization, or at least to return
density 0 if a >= min(x_i) (but I suspect the MLE may well occur at the
boundary).
Without that constraint you don't have a valid optimization problem.
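The missing indicator can be built into the density so that the optimizer never sees a positive density below the threshold (a sketch; the function and argument names are assumptions):

```r
# 3-parameter Weibull density with threshold 'thres'; zero below it,
# which is the I(x > a) term the original density omitted.
dweibull3 <- function(x, shape, scale, thres) {
  ifelse(x > thres, dweibull(x - thres, shape = shape, scale = scale), 0)
}

dweibull3(0.5, shape = 2, scale = 1, thres = 1)   # 0: below the threshold
dweibull3(2.0, shape = 2, scale = 1, thres = 1)   # same as dweibull(1, 2, 1)
```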