Michael, it should be possible to get nls and integrate to work together.
But there are several problems you need to consider first. The most
important is the definition of, e.g., your f-function and how integrate() works.
The easiest way to show you the problem is with the following example.
The
Hi Daniela,
Please read the error message from nls. The problem is with the start
values for the parameters a and b. You haven't specified any, so it uses
default values of a=1 and b=1, which may not be very good. So, you should
specify good start values, if you have a reasonable idea of what
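A minimal self-contained sketch of supplying explicit start values (made-up data and parameter names, not Daniela's actual model):

```r
# Toy data generated as y = a * exp(b * x) with a = 2.5, b = 0.3 (hypothetical)
set.seed(1)
x <- 1:20
y <- 2.5 * exp(0.3 * x) + rnorm(20, sd = 1)
# Explicit start values near plausible guesses, instead of the default a = b = 1
fit <- nls(y ~ a * exp(b * x), start = list(a = 2, b = 0.28))
coef(fit)
```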
Xiaodong Jin [EMAIL PROTECTED] writes:
y
[1] 1 11 42 64 108 173 214
t
[1] 1 2 3 4 5 6 7
nls(1/y ~ c*exp(-a*b*t)+1/b, start=list(a=0.001,b=250,c=5), trace=TRUE)
29.93322 :0.001 250.000 5.000
Error in numericDeriv(form[[3]], names(ind), env) :
Missing
I used debug to walk through your example line by line, and I found
that the error message was misleading. By making
as.vector(semivariance) and as.vector(h) columns of a data.frame, I got
it to work. My revised code appears below.
Thanks for providing a self-contained and
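Spencer's actual revised code is truncated in the archive; a hedged sketch of the general fix (entirely made-up semivariogram values standing in for the poster's semivariance and h) is to collect the vectors into a data frame first:

```r
set.seed(42)
h <- matrix(seq(0.1, 2, length.out = 20), ncol = 2)   # made-up lag distances
semivariance <- 1 - exp(-h) + rnorm(20, sd = 0.01)    # made-up semivariances
# Make as.vector(semivariance) and as.vector(h) columns of a data.frame:
d <- data.frame(sv = as.vector(semivariance), h = as.vector(h))
fit <- nls(sv ~ s * (1 - exp(-h / r)), data = d, start = list(s = 0.8, r = 0.8))
coef(fit)
```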
SpG == Spencer Graves [EMAIL PROTECTED]
on Sat, 23 Sep 2006 00:52:30 -0700 writes:
SpG I used debug to walk through your example line by line, I found
SpG that the error message was misleading. By making
SpG as.vector(semivariance) and as.vector(h) columns of a data.frame,
RSiteSearch("grogger") produced nothing, which suggests that the
paper you cite is NOT cited in a help page in any package contributed to
CRAN. However, RSiteSearch("instrumental variables") just produced 31
hits for me, among which #11 was for systemfit {systemfit}, which
mentions
nls not converging for zero-noise cases
Setzer.Woodrow at epamail.epa.gov writes:
No doubt Doug Bates would gladly accept patches ... .
The zero-noise case is irrelevant in practice, but quite often I have uttered
/(!! (vituperation filter on) when nls did not converge with real data. The
Yours truly dieter.menne at menne-biomed.de writes:
...
Recently, a colleague fitted gastric emptying
curves using GraphPad, with 100% success, and
nls failed for one third of these. When we
checked GraphPad's output more closely, some of
the coefficients looked like 2.1 with a confidence
Earl F. Glynn efg at stowers-institute.org writes:
It's not clear to me why this problem cannot be fixed somehow. You
You might try optim instead of nls, which always (well, as far as I have used it)
converges. However, the resulting coefficients may be totally off, and you should
use profiling to check
Earl F. Glynn wrote:
Berton Gunter [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Or, maybe there's something I don't understand about the
algorithm being used.
Indeed! So before making such comments, why don't you try to learn about
it?
Doug Bates is a pretty smart guy, and
Joerg van den Hoff [EMAIL PROTECTED] wrote on 08/16/2006
08:22:03 AM:
Earl F. Glynn wrote:
[deleted]
efg
[deleted]
(I think this is recognized by d. bates, but simply way down his 'to
do'
list :-().
joerg
No doubt Doug Bates would gladly accept patches ... .
Your problem is x^c for x = 0. If you intended only c > 1, try a starting
value meeting that condition (but it seems that the optimal c is about
0.27 if you increase x slightly).
Why have you used ~~ ? (Maybe because despite being asked not to, you
sent HTML mail?)
On Tue, 15 Aug 2006,
Prof Brian Ripley [EMAIL PROTECTED] writes:
Your problem is x^c for x = 0. If you intended only c > 1, try a starting
value meeting that condition (but it seems that the optimal c is about
0.27 if you increase x slightly).
Surely you mean c > 0.
nls(1/y ~ a+b*x^exp(c),
Hi
Why do you want to change your variable values? It smells of a rat to
me.
If you just change your a, b, c values, nls arrives at some finite
result (e.g. c=1.5 or c=0.3). BTW, by what magic did you obtain such
precise and wrong estimates for a, b, c?
HTH
Petr
On 15 Aug 2006 at 5:54, Xiaodong
On Tue, 15 Aug 2006, Peter Dalgaard wrote:
Prof Brian Ripley [EMAIL PROTECTED] writes:
Your problem is x^c for x = 0. If you intended only c > 1, try a starting
value meeting that condition (but it seems that the optimal c is about
0.27 if you increase x slightly).
Surely you mean c
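The x^exp(c) reparametrization in this thread keeps the fitted exponent strictly positive, so x = 0 is harmless. A toy sketch (made-up data):

```r
set.seed(1)
x <- 0:10
y <- 2 + 3 * x^0.5 + rnorm(11, sd = 0.1)   # true exponent 0.5
# c enters as exp(c), so the effective exponent exp(c) > 0 for any real c
fit <- nls(y ~ a + b * x^exp(c), start = list(a = 1, b = 2, c = log(0.4)))
exp(coef(fit)[["c"]])   # back-transform to the exponent scale
```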
Earl F. Glynn efg at stowers-institute.org writes:
Here's my toy problem:
?nls.control
?nls
# Method 2
X <- 0:15
Y <- 9.452 * exp(-0.109*X) + 5.111 # Toy problem
nls.out <- nls(Y ~ a*exp(b*X)+c,
    start=list(a=6,b=-0.5,c=1),
+
Dieter Menne [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Earl F. Glynn efg at stowers-institute.org writes:
This toy problem is exactly what the warning is for:
Warning
Do not use nls on artificial zero-residual data.
Add some noise and try again.
Thank you!
I had adapted
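Following the warning's advice on the toy problem above, adding a little noise (and starting nearer the generating values) lets nls converge; a sketch:

```r
set.seed(123)
X <- 0:15
Y <- 9.452 * exp(-0.109 * X) + 5.111 + rnorm(16, sd = 0.01)  # noise added
fit <- nls(Y ~ a * exp(b * X) + c, start = list(a = 9, b = -0.1, c = 5))
coef(fit)
```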
Or, maybe there's something I don't understand about the
algorithm being used.
Indeed! So before making such comments, why don't you try to learn about it?
Doug Bates is a pretty smart guy, and I think you do him a disservice when
you assume that he somehow overlooked something that he
Berton Gunter [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]
Or, maybe there's something I don't understand about the
algorithm being used.
Indeed! So before making such comments, why don't you try to learn about
it?
Doug Bates is a pretty smart guy, and I think you do him a
Larsen, Thomas [EMAIL PROTECTED] writes:
I collected eggs laid by Springtails every day over 28 days after switching to
an isotopically enriched diet. The eggs were pooled at day 7, 14, and 28 (+ day
0 = initial value) and analyzed for isotopes. After the diet switch the
isotopic values of the
Your model is over-parametrized: d1*exp(-gt) gives two parameters for one
constant. As a result, the least-square surface is flat in one direction,
and the gradient matrix is singular.
If this is the model you intended, you can simplify it by dropping d1.
It is also partially linear (d) so it
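A minimal illustration of the same kind of over-parametrization (made-up data, not Mihai's model): when two parameters enter only through their product, the gradient matrix is singular, and dropping one of them fixes it.

```r
set.seed(2)
tt <- 1:10
y <- 5 * exp(-0.3 * tt) + rnorm(10, sd = 0.02)
## nls(y ~ a * b * exp(-g * tt), start = list(a = 2, b = 2, g = 0.2))
## fails with "singular gradient": a and b appear only as the product a*b
fit <- nls(y ~ d * exp(-g * tt), start = list(d = 4, g = 0.2))  # one constant
coef(fit)
```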
Brian Ripley [EMAIL PROTECTED]
To: Mihai Nica [EMAIL PROTECTED]
CC: r-help@stat.math.ethz.ch
Subject: Re: [R] nls model singular gradient matrix parametrization
Date: Fri, 2 Jun 2006 07:03:02 +0100 (BST)
Your model is over-parametrized: d1*exp(-gt) gives two parameters for one
constant
From ?nls
data: an optional data frame in which to evaluate the variables in
'formula'.
From the printout of 'Temp' it appears 'Temp' is a matrix. Assuming
'temp' is the same as 'Temp', it is not a data frame as required, and the
message is consistent with feeding eval() a
Lorenzo Isella wrote:
Dear All,
I may look ridiculous, but I am puzzled at the behavior of the nls with
a fitting I am currently dealing with.
My data are:
x N
1 346.4102 145.428256
2 447.2136 169.530634
3 570.0877 144.081627
4 721.1103 106.363316
5 894.4272
The data= argument cannot be a matrix. See ?nls
On 5/22/06, H. Paul Benton [EMAIL PROTECTED] wrote:
So thanks for the help,
I have a matrix (AB) which in the first column has my bin numbers, from -4 to +4
in 0.1 bin units. Then I have in the second column the frequency from some
data. I have
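A hedged sketch of the fix (made-up bin/frequency values standing in for AB): convert the matrix to a data frame before passing it as data=.

```r
set.seed(3)
bin <- seq(-4, 4, by = 0.1)
AB  <- cbind(bin = bin, freq = dnorm(bin) + rnorm(length(bin), sd = 0.005))
d   <- as.data.frame(AB)        # data= must be a data frame, not a matrix
fit <- nls(freq ~ k * exp(-(bin - m)^2 / (2 * s^2)), data = d,
           start = list(k = 0.3, m = 0.1, s = 1.2))
coef(fit)
```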
You have nearly as many parameters as data points, which may
cause fundamental singularity problems, but one thing to
try, just in case, is to transform it to be unconstrained. For example,
let
A1 = 0.1 + A1x^2
and then substitute A1 with the right hand side so that it becomes a function
Hi Manuel,
an alternative to the approach pointed out by Prof. Ripley is to use
the package 'drc' which allows one or more parameters in a non-linear
regression model to depend on a factor.
You will need the latest version available at www.bioassay.dk (an older
version is available on CRAN).
Thanks, it was actually p.249, at least in my MASS3.
but that solved my doubt.
I have another doubt: can this factor interact with
one of the parameters in the model?
My problem is basically a Michaelis Menten term, where
this factor determines a different Km. The rest of the
parameters in
Manuel,
I don't think that it works very easily. Instead, try gnls() in the
nlme package.
Cheers
Andrew
On Thu, Apr 20, 2006 at 11:18:02AM +0200, Manuel Gutierrez wrote:
Is it possible to include a factor in an nls formula?
I've searched the help pages without any luck so I
guess it is not
Thanks Andrew. I am now trying but without much
success. I don't know how to give start values for the
factor.
Could you give me an example solution with my toy
example?
a <- as.factor(c(rep(1,50), rep(0,50)))
independ <- 1:100
respo <- rep(NA, 100)
respo[a==1] <- (independ[a==1]^2.3) + 2
On Thu, 20 Apr 2006, Manuel Gutierrez wrote:
Is it possible to include a factor in an nls formula?
Yes. What do you intend by it? If you mean what it would mean for a lm
formula, you need A[a] and starting values for A.
There's an example on p.219 of MASS4.
I've searched the help pages
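The A[a] technique can be sketched on made-up data (hypothetical two-level factor and exponent, not Manuel's actual model):

```r
set.seed(4)
a <- factor(rep(1:2, each = 50))
x <- rep(seq(1, 10, length.out = 50), 2)
y <- c(2, 5)[a] + x^0.7 + rnorm(100, sd = 0.1)   # group-specific intercepts
# A[a] gives one A per factor level; start= supplies a value for each level
fit <- nls(y ~ A[a] + x^p, start = list(A = c(1, 4), p = 0.5))
coef(fit)   # A1, A2 and the shared exponent p
```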
Hi Harsh
As indicated in the answer to your first post, it is not so easy
to debug code without the code itself.
1) Have you checked the names of your variables, so that you are
sure they are correct? For example, undesired white space
due to the automatic creation of variable names
Wouldn't this be easier?
vals <- 1:100
names(vals) <- sprintf("beta%d", 1:100)
## or
## names(vals) <- paste("beta", 1:100, sep = "")
--sundar
Cal Stats wrote:
Hi..
here is an example
ss <- NULL
vals <- 1:100
for (i in 1:100) {
  ss <- c(ss, paste("beta", i, "=", vals[i], sep = ""))
}
Please give us an EXAMPLE of the loop you have in mind. (It's likely that
you can use simpler methods than a loop, but without an example we'd be
guessing.)
Charles Annis, P.E.
[EMAIL PROTECTED]
phone: 561-352-9699
eFax: 614-455-3265
http://www.StatisticalEngineering.com
-Original
Hi..
here is an example
ss <- NULL
vals <- 1:100
for (i in 1:100) {
  ss <- c(ss, paste("beta", i, "=", vals[i], sep = ""))
}
sss <- paste(ss, collapse = ",")
Now, is there a way I can convert sss so that I can give the command
nls(formula, start = sss)?
Thanks
Harsh
Charles Annis, P.E.
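For what Harsh seems to want, no string is needed at all: start= takes a named list, which can be built directly (a sketch; the formula itself is hypothetical and left out):

```r
vals  <- 1:100
start <- setNames(as.list(vals), paste("beta", 1:100, sep = ""))
## nls(formula, start = start)   # start must be a named list, not a string
str(start[1:2])
```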
On Tue, 20 Dec 2005, Weijie Cai wrote:
Hi list,
I tried to use nls to do some nonlinear least square fitting on my data with
340 observations and 10 variables, but as I called nls() function, I got
this error message:
Error in qr.qty(QR, resid) : 'qr' and 'y' must have the same number of
Mark,
The parameter of your model (gamma) should not be a part of the dataframe.
In addition, the start argument should be a named list.
Something like this works
nls.dataframe <- data.frame(p.kum, felt.prob.kum)
nls.kurve <- nls( formula = felt.prob.kum ~
Note that a simple logistic with a saturation level of 1 seems
to do quite well. Below we have removed the last point in order
to avoid the singularity:
x <- p.kum[-10]
y <- felt.prob.kum[-10]
plot(log(y/(1-y)) ~ x)
abline(lm(log(y/(1-y)) ~ x), col = "red")
On 10/31/05, Mark Hempelmann [EMAIL
Use a grid search to get the starting values in which case you
will likely be close enough that you won't run into problems
even without derivatives:
attach(fldgd)
grid <- expand.grid(Vr = seq(0, .3, .1), Vm = seq(.45, 1, .05),
                    alpha = seq(1, 2, .25), lamda = seq(1, 2, .25))
ss <- function(p)
This works if you omit the deriv() step.
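A self-contained sketch of the grid-search idea (made-up exponential-decay data, not the fldgd dataset): evaluate the sum of squares on every grid point, then hand the best one to nls as the start.

```r
set.seed(5)
x <- seq(0, 10, length.out = 50)
y <- 1 + 2 * exp(-0.5 * x) + rnorm(50, sd = 0.05)
grid <- expand.grid(a = seq(0, 2, 0.5), b = seq(1, 3, 0.5), k = seq(0.1, 1, 0.1))
ss   <- function(p) sum((y - (p[1] + p[2] * exp(-p[3] * x)))^2)
best <- grid[which.min(apply(grid, 1, ss)), ]   # best grid point
fit  <- nls(y ~ a + b * exp(-k * x), start = as.list(best))
coef(fit)
```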
Use R's options(error=dump.frames) and debugger(). This gives
Browse[1]> rhs
[1] 0.433 0.4272571 0.3994105 0.3594037 0.3270730 0.3104752 0.3000927
[8] 0.2928445 0.2874249 0.2831787
attr(,"gradient")
        Vr        Vm     alpha     lamda
I have passed the identities and values of fixed parameters via the
... arguments in optim; I don't know about nls. Then internal to the
function that optim is to minimize, I combine the x argument with the
fixed parameters to obtain the full set of parameters. I've used that
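A minimal sketch of that optim pattern (made-up one-parameter model): the fixed parameter travels through optim's ... argument into the objective function, while optim varies only par.

```r
x <- 1:10
y <- 3 * exp(-0.2 * x)                # made-up data, no noise
f <- function(a, k) sum((y - a * exp(-k * x))^2)   # k is held fixed
opt <- optim(par = c(a = 1), fn = f, k = 0.2, method = "BFGS")
opt$par
```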
On 7/19/05, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
Dear R-helpers,
I am trying to estimate a model that I am proposing, which consists of putting
an extra hidden layer in the Markov switching models. In the simplest case the
S(t) - Markov states - and w(t) - the extra hidden variables -
G'day Chris,
CK == Christfried Kunath [EMAIL PROTECTED] writes:
CK With the nls()-function i want to fit following formula
CK where a, b, and c are the variables: y ~ 1/(a*x^2+b*x+c)
CK [...]
CK The algorithm plinear gives me the following error:
The algorithm plinear is inappropriate
On 6/21/05, Christfried Kunath [EMAIL PROTECTED] wrote:
Hello,
I have a problem with the function nls().
These are my data in k:
      V1    V2
[1,]   0 0.367
[2,]  85 0.296
[3,] 122 0.260
[4,] 192 0.244
[5,] 275 0.175
[6,] 421 0.140
[7,] 603 0.093
[8,] 831 0.068
On 6/21/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
On 6/21/05, Christfried Kunath [EMAIL PROTECTED] wrote:
Hello,
I have a problem with the function nls().
These are my data in k:
      V1    V2
[1,]   0 0.367
[2,]  85 0.296
[3,] 122 0.260
[4,] 192 0.244
[5,]
On 6/21/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
On 6/21/05, Gabor Grothendieck [EMAIL PROTECTED] wrote:
On 6/21/05, Christfried Kunath [EMAIL PROTECTED] wrote:
Hello,
I have a problem with the function nls().
These are my data in k:
      V1    V2
[1,]   0 0.367
On Tue, 2005-06-21 at 06:57 -0400, Gabor Grothendieck wrote:
On 6/21/05, Christfried Kunath [EMAIL PROTECTED] wrote:
Hello,
I have a problem with the function nls().
These are my data in k:
      V1    V2
[1,]   0 0.367
[2,]  85 0.296
[3,] 122 0.260
[4,] 192 0.244
On 6/21/05, Manuel Morales [EMAIL PROTECTED] wrote:
On Tue, 2005-06-21 at 06:57 -0400, Gabor Grothendieck wrote:
On 6/21/05, Christfried Kunath [EMAIL PROTECTED] wrote:
Hello,
I have a problem with the function nls().
These are my data in k:
      V1    V2
[1,]   0 0.367
joerg van den hoff wrote:
hi everybody,
is there a canonical way to get hold of the trace=TRUE output from
nls, i.e. to copy it to an R variable (or at least to an external log file)?
I have only found the possibility to fix(nlsModel) (and then the
correct copy of that: namespace function
Anaid Diaz wrote:
Hi,
I'm a new R user, with a lot of questions. At the
moment I'm stopped on an error trying to fit a model:
x <- sandeel ## numeric data (2500-6)
y <- Noss ## numeric data (0-1.2)
A <- 0.8
B <- 0.6
C <- 1/4
nls( y ~ A - B*exp(-C*x))
Error in match.call(definition, call,
thank you, it worked.
Sylvia
--- Douglas Bates [EMAIL PROTECTED] wrote:
Anaid Diaz wrote:
Hi,
I'm a new R user, with a lot of questions. At the
moment I'm stoped on an error traying to fit a
model:
x - sandeel ## numeric data (2500-6)
y - Noss ## numeric data (0-1.2)
A -
This is how I'd write the formula for use with nls/nlme:
y ~ b41*(x - 1) + b42*(x^2 - 1) +
    ifelse((a41 - x) >= 0, b43*(a41 - x)^2, 0) +
    ifelse((a42 - x) >= 0, b44*(a42 - x)^2, 0)
This is a direct translation from your funny foreign-looking code below
that probably makes it clear what's going
Thank you very much Dr. Venables, I'll give this a try.
Regards-
andy
On Sun, 2005-04-17 at 13:36 +1000, [EMAIL PROTECTED] wrote:
This is how I'd write the formula for use with nls/nlme:
y ~ b41*(x - 1) + b42*(x^2 - 1) +
    ifelse((a41 - x) >= 0, b43*(a41 - x)^2, 0) +
    ifelse((a42 - x) >= 0,
Many people could help you, but the question is too general. In
brief, it means that the algorithm has found a place where the
(estimated?) matrix of first or second partial derivatives is of reduced
rank, and it refuses to do more. For such problems, I often use optim.
If you
[EMAIL PROTECTED] wrote:
Dear list,
I do have a problem with nls. I use the following data:
test
time  conc dose
0.50  5.40    1
0.75 11.10    1
1.00  8.40    1
1.25 13.80    1
1.50 15.50    1
1.75 18.00    1
2.00 17.00    1
2.50 13.90    1
3.00 11.20    1
3.50
Hi, Doug:
How would you diagnose something like this? For example, might
the following (from ?nlsModel) help:
DNase1 <- DNase[DNase$Run == 1, ]
mod <-
  nlsModel(density ~ SSlogis(log(conc), Asym, xmid, scal),
           DNase1, start = list(Asym = 3, xmid = 0, scal = 1))
Mike
nlsList from the nlme library can fit nonlinear models for a dataset grouped by
some specification, e.g. by species in your case
Regards
Christian
-- Mensaje Original --
From: Mike Saunders [EMAIL PROTECTED]
To: R Help [EMAIL PROTECTED]
Date: Thu, 16 Dec 2004 10:40:00 -0500
Subject: [R] nls
Yang, Richard wrote:
Dear R-helpers;
Using nls() to fit a function, Rdum, defined below, I stumbled on an
error: Error in eval(expr, envir, enclos): Object "s0" not found.
The function Rdum is defined as
Rdum <- deriv(~ h1 * (s0 + sl0*sl + sm0*sm + sp01*sp1 + sp02*sp2 +
Have a look at optim (which supports a number of different algorithms via
the method= argument) and segmented() in package segmented, which does
segmented regression.
For example,
ss <- function(par) {
  b <- par[1]; c1 <- par[2]; c2 <- par[3]; d <- par[4]
  x <- df1$x; y <- df1$y
  sum((y -
Often when nls doesn't converge there is a good reason for it.
I'm on a very slow internet connection these days and will not be able
to look at the data myself but I ask you to bear in mind that, when
dealing with nonlinear models, there are model/data set combinations for
which there are no
Douglas Bates [EMAIL PROTECTED] writes:
Often when nls doesn't converge there is a good reason for it.
I'm on a very slow internet connection these days and will not be able
to look at the data myself but I ask you to bear in mind that, when
dealing with nonlinear models, there are
On Thu, 10 Jun 2004, joerg van den hoff wrote:
I apologize for posting this in essence the second time (no light at the
end of the tunnel yet..):
is there a way to enforce that nls takes both the data *and* the
model definition from the parent environment? The following fragment
shows
On Thu, 10 Jun 2004, Prof Brian Ripley wrote:
Around R 1.2.x the notion was introduced that variables should be looked
for in the environment of a formula. Functions using model.frame got
converted to do that, but nls did not. I guess that the best way forward
is to ensure that nls (and
Prof Brian Ripley wrote:
On Thu, 10 Jun 2004, Prof Brian Ripley wrote:
Around R 1.2.x the notion was introduced that variables should be looked
for in the environment of a formula. Functions using model.frame got
converted to do that, but nls did not. I guess that the best way forward
is
On Thu, 10 Jun 2004, joerg van den hoff wrote:
Prof Brian Ripley wrote:
On Thu, 10 Jun 2004, Prof Brian Ripley wrote:
Around R 1.2.x the notion was introduced that variables should be looked
for in the environment of a formula. Functions using model.frame got
converted to do
On Thu, 10 Jun 2004, Prof Brian Ripley wrote:
On Thu, 10 Jun 2004, Prof Brian Ripley wrote:
Around R 1.2.x the notion was introduced that variables should be looked
for in the environment of a formula. Functions using model.frame got
converted to do that, but nls did not. I guess
Hi Bill,
I've just spent a few months trying to fit a model to a dataset, and it's not
easy. However, in my case, what appears to be recalcitrance on R's part
actually turns out to be well-founded warnings that the structure of the
model and the data are not permitting a clean, unambiguous
Bernardo Rangel Tura [EMAIL PROTECTED] writes:
I have a problem with nls() and my research data. Look this example:
X2000 <- c(1.205268, 2.850695, 5.100860, 8.571610, 15.324513, 25.468599, 39.623418, 61.798856, 91.470006, 175.152509)
age <- c(37, 42, 47, 52, 57, 62, 67, 72, 77, 82)
fit <-
You suggest the solution yourself: transform the equation to have all
parameters on the right, thus:
y ~ ((b0 + b1 * x) * t + 1) ^ (1/t)
But this is still not correct, since the transformation changes
the scale of the variance, and least squares will not be correct.
What is needed is a factor
Dear r-help members
I posted this message already yesterday, but don't know whether it
reached you since I joined the group only yesterday.
I would like to estimate the boxcox transformed model
(y^t - 1)/t ~ b0 + b1 * x.
Unfortunately, R returns with an error message when I try to
perform this
On Thu, 20 Nov 2003, Philippe Grosjean wrote:
Dear r-help members
I posted this message already yesterday, but don't know whether it
reached you since I joined the group only yesterday.
I would like to estimate the boxcox transformed model
(y^t - 1)/t ~ b0 + b1 * x.
Unfortunately, R
On Thu, 20 Nov 2003, Prof Brian Ripley wrote:
Now nlrq uses a different criterion and Philippe's suggestion may work
there. I can't tell quickly: the help page does not say what the
criterion is. But if those are the same, then I suspect the criterion is
uninteresting as a way to
On 20 Nov 2003 at 15:24, Philippe Grosjean wrote:
Dear r-help members
I posted this message already yesterday, but don't know whether it
reached you since I joined the group only yesterday. I would like to
estimate the boxcox transformed model
(y^t - 1)/t ~ b0 + b1 * x.
Unfortunately, R
optim() needs starting values too, but *may* be more robust than nls to
their specification
On Fri, 24 Oct 2003, Giovanni Caggiano wrote:
I am trying to fit to the data a nonlinear function, say z(tau;b) where tau
is the independent variable and b is 3x1 vector of parameters. Following
Please use an informative subject line. The r-help archives at
www.r-project.org -> search -> 'R site search' index that, and I find
answers to today's problems in the r-help discussions of yesterday's
questions.
Only yesterday, I got essentially that error message. I solved
it by
giovanni caggiano [EMAIL PROTECTED] writes:
A couple of questions about the nls package.
1. I'm trying to run a nonlinear least squares
regression but the routine gives me the following
error message:
step factor 0.000488281 reduced below `minFactor' of
0.000976563
even though I
I agree with what you said about using trace = TRUE when you are
having trouble getting nls to converge. This allows you to see what
is happening to the parameters during the iterations, and it is
often quite instructive; as is plotting your data and thinking about
whether you should expect
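A sketch of that advice (made-up decay data): with trace = TRUE, nls prints the residual sum-of-squares and the current parameter values at each iteration.

```r
set.seed(6)
x <- 1:20
y <- 5 * exp(-0.3 * x) + rnorm(20, sd = 0.05)
# Each trace line shows the RSS followed by the current a and b
fit <- nls(y ~ a * exp(-b * x), start = list(a = 4, b = 0.2), trace = TRUE)
coef(fit)
```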
In my experience, transformations of the type Doug just described have
often made sums of squares (or log(likelihood)) contours more parabolic,
thereby increasing the accuracy of the simple normal approximations to
the distributions of parameter estimates. It is wise to check these
things, as
On Thu, 14 Aug 2003 09:08:26 -0700, Spencer Graves
[EMAIL PROTECTED] wrote :
This seems to identify a possible bug in R 1.7.1 under Windows 2000:
tstDf <- data.frame(y = 1:11, x = 1:11)
fit <- nls(y ~ a/x, data = tstDf, start = list(a = 1))
predict(fit, se.fit = TRUE)
[1] 7.0601879 3.5300939 2.3533960
On Thu, 14 Aug 2003, Spencer Graves wrote:
This seems to identify a possible bug in R 1.7.1 under Windows 2000:
tstDf <- data.frame(y = 1:11, x = 1:11)
fit <- nls(y ~ a/x, data = tstDf, start = list(a = 1))
predict(fit, se.fit = TRUE)
[1] 7.0601879 3.5300939 2.3533960 1.7650470 1.4120376
You can use the well-known Taylor series approximation to the
variance of an arbitrary function:
Var( f(X) ) ~= Sum( s[i]^2*D[i]^2 ) + 2*Sum( Sum( s[i,j]*D[i]*D[j] ) )
where D[i]^2 is the squared first partial derivative of f(x) with respect
to the ith parameter and D[j] is the first partial derivative
This seems to identify a possible bug in R 1.7.1 under Windows 2000:
tstDf <- data.frame(y = 1:11, x = 1:11)
fit <- nls(y ~ a/x, data = tstDf, start = list(a = 1))
predict(fit, se.fit = TRUE)
[1] 7.0601879 3.5300939 2.3533960 1.7650470 1.4120376 1.1766980 1.0085983
[8] 0.8825235 0.7844653 0.7060188
Regarding the accuracy of the Taylor series approximation, my favorite
reference is Bates & Watts (1988) Nonlinear Regression Analysis and Its
Applications (Wiley, esp. pp. 255-260). Recently, Brian Ripley also
Chambers & Hastie (1992) Statistical Models in S (Wadsworth, ch. 10) and
Venables
On Thu, 14 Aug 2003 12:43:25 -0400, Duncan Murdoch [EMAIL PROTECTED]
wrote :
Perhaps the description below of what se.fit is supposed to do should
be modified.
I've done that now in the development version (to become 1.8.0).
Err, I mean in the patch version (but it should still end up in
On Thu, 14 Aug 2003 12:43:25 -0400, Duncan Murdoch [EMAIL PROTECTED]
wrote :
Perhaps the description below of what se.fit is supposed to do should
be modified.
I've done that now in the development version (to become 1.8.0).
Duncan Murdoch
Komanduru Sai C [EMAIL PROTECTED] writes:
Hi,
I am using the nls library:
df <- read.table("data.txt", header = TRUE)
library(nls)
fm <- nls(y ~ a*(x+d)^(-b), df, start = list(a = max(df$y, na.rm = TRUE)/2, b = 1, d = 0))
coef(fm)
q()
1) Are you sure you meant
On 16 Jan 2003 at 22:15, Yan Yu wrote:
Hi,
I have some problems when I try to use nls().
My data is a 1D vector; I tried to use a polynomial function (order 3) to
fit it.
The data series is stored in x.
The a0, a1, a2, a3 below are the coefficients, which I hope I can get from
calling nls
z -
kjetil brinchmann halvorsen [EMAIL PROTECTED] writes:
On 16 Jan 2003 at 22:15, Yan Yu wrote:
Hi,
I have some problems when I try to use nls().
My data is a 1D vector; I tried to use a polynomial function (order 3) to
fit it.
The data series is stored in x.
The a0, a1, a2, a3 below are
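Worth noting (not stated in the truncated reply): a cubic polynomial is linear in its coefficients, so lm() fits it directly and nls() is unnecessary. A sketch with made-up data:

```r
set.seed(7)
x <- seq(0, 1, length.out = 30)
z <- 1 + 2 * x - 3 * x^2 + 0.5 * x^3 + rnorm(30, sd = 0.01)
fit <- lm(z ~ poly(x, 3, raw = TRUE))   # a0..a3 recovered as coefficients
coef(fit)
```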