[R] MLE Function

2007-09-10 Thread Terence Broderick
I am just trying to teach myself how to use the mle function in R because it is 
much better than what is provided in MATLAB. I am following tutorial material 
from the internet; however, it gives the following errors. Does anybody know 
what is causing them, or can anybody recommend better tutorial material on 
this particular subject?
  
 x.gam <- rgamma(200, rate=0.5, shape=3.5)
 x <- x.gam
 library(stats4)
 ll <- function(lambda, alfa){n <- 200; x <- x.gam 
 -n*alfa*log(lambda)+n*log(gamma(alfa))-9alfa-1)*sum(log(x))+lambda*sum(x)}
Error: syntax error, unexpected SYMBOL, expecting '\n' or ';' or '}' in 
ll <- function(lambda, alfa){n <- 200; x <- x.gam 
-n*alfa*log(lambda)+n*log(gamma(alfa))-9alfa
 ll <- function(lambda, alfa){n <- 200; x <- x.gam 
 -n*alfa*log(lambda)+n*log(gamma(alfa))-(alfa-1)*sum(log(x))+lambda*sum(x)}
 est <- mle(minuslog=ll, start=list(lambda=2, alfa=1))
Error in optim(start, f, method = method, hessian = TRUE, ...) : 
objective function in optim evaluates to length 200 not 1

   
   


audaces fortuna iuvat
   

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] MLE Function

2007-09-10 Thread Peter Dalgaard
Terence Broderick wrote:
 I am just trying to teach myself how to use the mle function in R because it 
 is much better than what is provided in MATLAB. I am following tutorial 
 material from the internet; however, it gives the following errors. Does 
 anybody know what is causing them, or can anybody recommend better tutorial 
 material on this particular subject?
   
   
 x.gam <- rgamma(200, rate=0.5, shape=3.5)
 x <- x.gam
 library(stats4)
 ll <- function(lambda, alfa){n <- 200; x <- x.gam 
 -n*alfa*log(lambda)+n*log(gamma(alfa))-9alfa-1)*sum(log(x))+lambda*sum(x)}
 
 Error: syntax error, unexpected SYMBOL, expecting '\n' or ';' or '}' in 
 ll <- function(lambda, alfa){n <- 200; x <- x.gam 
 -n*alfa*log(lambda)+n*log(gamma(alfa))-9alfa
   
 ll <- function(lambda, alfa){n <- 200; x <- x.gam 
 -n*alfa*log(lambda)+n*log(gamma(alfa))-(alfa-1)*sum(log(x))+lambda*sum(x)}
 est <- mle(minuslog=ll, start=list(lambda=2, alfa=1))
 
 Error in optim(start, f, method = method, hessian = TRUE, ...) : 
 objective function in optim evaluates to length 200 not 1


   
Er, not what I get. Did your version have that linefeed after x <- x.gam?
If not, then you'll get your negative log-likelihood added to x.gam 
and the resulting likelihood becomes a vector of length 200 instead of 
a scalar.

In general, the first piece of advice for mle() is to check that the 
likelihood function really is what it should be. Otherwise there is no 
telling what the result might mean...

Secondly, watch out for parameter constraints. With your function, it 
very easily happens that alfa tries to go negative in which case the 
gamma function in the likelihood will do crazy things.
A common trick in such cases is to reparametrize by log-parameters, i.e.

ll <- function(lambda, alfa){n <- 200; x <- x.gam
-n*alfa*log(lambda)+n*lgamma(alfa)-(alfa-1)*sum(log(x))+lambda*sum(x)}

ll2 <- function(llam, lalf) ll(exp(llam), exp(lalf))
est <- mle(minuslog=ll2, start=list(llam=log(2), lalf=log(1)))

par(mfrow=c(2,1))
plot(profile(est))

Notice, incidentally, the use of lgamma rather than log(gamma(.)); the latter 
is prone to overflow.

In fact, you could also write this likelihood directly as

-sum(dgamma(x, rate=lambda, shape=alfa, log=TRUE))
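Putting the pieces of this thread together, a minimal end-to-end sketch (the simulated data and starting values are illustrative) is:

```r
library(stats4)

set.seed(1)
x.gam <- rgamma(200, rate = 0.5, shape = 3.5)

# Negative log-likelihood via dgamma(log = TRUE), with log-parameters
# so that rate and shape stay positive during the optimization.
ll2 <- function(llam, lalf)
  -sum(dgamma(x.gam, rate = exp(llam), shape = exp(lalf), log = TRUE))

est <- mle(minuslogl = ll2, start = list(llam = log(2), lalf = log(1)))
exp(coef(est))  # back-transformed estimates, near the true (0.5, 3.5)
```

Because the objective sums over all 200 observations and returns a single number, the "length 200 not 1" error from the original post cannot occur here.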





 audaces fortuna iuvat



-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907



[R] MLE for Student's t-distribution

2006-11-15 Thread Benjamin Dickgiesser
Hi
is there an easy way / an R function to calculate numerical maximum
likelihood estimates for a Student's t-distribution?
I searched the mailing list archive for the last 30 minutes but didn't find an answer.

Regards
Ben



Re: [R] MLE for Student's t-distribution

2006-11-15 Thread Prof Brian Ripley
On Wed, 15 Nov 2006, Benjamin Dickgiesser wrote:

 Hi
 is there an easy way / an R function to calculate numerical maximum
 likelihood estimates for a Student's t-distribution?
 I searched the mailing list archive for the last 30 minutes but didn't find an answer.

See fitdistr() in MASS.

MLE of what, BTW?  If you want the MLE of 'df', fitdistr will give it to 
you but beware that its statistical properties are surprising.
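For example, a minimal sketch on simulated data (`fitdistr`'s `"t"` family fits a location `m`, a scale `s`, and `df`):

```r
library(MASS)

set.seed(42)
x <- rt(500, df = 5) * 2 + 1   # t sample with location 1, scale 2

fit <- fitdistr(x, "t")        # ML estimates of m, s, and df
fit$estimate
```

As the reply warns, the `df` estimate in particular can behave surprisingly, so treat it with care.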

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK    Fax:  +44 1865 272595



Re: [R] MLE for Student's t-distribution

2006-11-15 Thread Benjamin Dickgiesser
I need to estimate all parameters (except maybe df). Thank you for
pointing me in the right direction; I will have a look.
The aim is to use a fat-tailed distribution instead of the Normal
distribution to calculate the Value at Risk.

Ben

On 11/15/06, Prof Brian Ripley [EMAIL PROTECTED] wrote:
 On Wed, 15 Nov 2006, Benjamin Dickgiesser wrote:

  Hi
  is there an easy way / an R function to calculate numerical maximum
  likelihood estimates for a Student's t-distribution?
  I searched the mailing list archive for the last 30 minutes but didn't find an 
  answer.

 See fitdistr() in MASS.

 MLE of what, BTW?  If you want the MLE of 'df', fitdistr will give it to
 you but beware that its statistical properties are surprising.

 --
 Brian D. Ripley,  [EMAIL PROTECTED]
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UK    Fax:  +44 1865 272595




[R] MLE Methods

2006-10-16 Thread nand kumar
  Greetings Forum,
   
  I am new to R and am writing in hopes of getting some help.
  Our MLE results from home-grown software do not match those of R. We 
are using a censored sample and would really appreciate any pointers as to 
which MLE method is used in R... to my knowledge there are 
different flavors of MLE in use. 
   
  Thanks in Advance...





Re: [R] MLE Methods

2006-10-16 Thread Ben Bolker
nand kumar nksathyar at yahoo.com writes:

 
   Greetings Forum,
 
   I am new to R and am writing in hopes of getting some help.
   Our MLE results from home-grown software do not match those of R. We
 are using a censored sample and would really appreciate any pointers as
 to which MLE method is used in R... to my knowledge there are different
 flavors of MLE in use. 
 

  We need more information.  There are a huge variety of functions
in R that, in one way or another, calculate maximum likelihood estimates
(e.g. lm, glm, survReg, lmer, ...) using many different numerical
algorithms appropriate for the specific problem.  As the posting guide
says, please construct the simplest possible reproducible example,
showing how you estimate the MLE in R,
and explain something about how your home grown software works;
otherwise there is no way to answer your question.

  Ben Bolker



Re: [R] MLE and QR classes

2006-07-16 Thread Spencer Graves
  I don't understand your question.  First, I'm not familiar with the 
'wls' package;  I found no such package by that name via 
www.r-project.org -> CRAN -> (select a local mirror) -> Packages.

  The 'rq' function in the 'quantreg' package looks straightforward to 
me.  Have you worked through the examples on the 'rq' help page?  They 
look to me like they follow the standard syntax of 'lm'.  If you don't 
understand the 'lm' syntax, I encourage you to spend some quality time 
with appropriate sections of Venables and Ripley (2003) Modern Applied 
Statistics with S, 4th ed. (Springer).

  If you'd like more help from this listserve, please submit another 
question.  However, please include a simple, self-contained example of 
something you tried to help illustrate your question (as suggested in 
the posting guide! www.R-project.org/posting-guide.html).

  Hope this helps.
  Spencer Graves

[EMAIL PROTECTED] wrote:
 Hi,
 
 I load my data set and separate it as follows:
 
 presu <- read.table("C:/_Ricardo/Paty/qtdata_f.txt", header=TRUE, sep="\t",
 na.strings="NA", dec=".", strip.white=TRUE)
 dep <- presu[,3];
 exo <- presu[,4:92];
 
 Now I want to analyze it with the wls and quantreg packages. How do I change 
 the data classes for mle and rq objects?
 
 Thanks a lot,
 
 Ricardo Gonçalves Silva, M. Sc.
 Apoio aos Processos de Modelagem Matemática
 Econometria & Inadimplência
 Serasa S.A.
 (11) - 6847-8889
 [EMAIL PROTECTED]
 
 **
 
 The information contained in this message and its attachment(s) is addressed
 exclusively to the person(s) and/or institution(s) indicated above and may
 contain confidential data, which may not, in any form or under any pretext,
 be used, disclosed, altered, printed, or copied, in whole or in part, by
 unauthorized persons. If you are not the intended recipient, please delete
 it and notify the sender immediately. Improper use will be handled according
 to company policy and applicable law.
 This message expresses the personal views of the undersigned and does not
 necessarily reflect the opinion of Serasa.
 



[R] MLE and QR classes

2006-07-14 Thread ricardosilva
Hi,

I load my data set and separate it as follows:

presu <- read.table("C:/_Ricardo/Paty/qtdata_f.txt", header=TRUE, sep="\t",
na.strings="NA", dec=".", strip.white=TRUE)
dep <- presu[,3];
exo <- presu[,4:92];

Now I want to analyze it with the wls and quantreg packages. How do I change
the data classes for mle and rq objects?

Thanks a lot,

Ricardo Gonçalves Silva

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] MLE and QR classes

2006-07-13 Thread ricardosilva
Hi,

I load my data set and separate it as follows:

presu <- read.table("C:/_Ricardo/Paty/qtdata_f.txt", header=TRUE, sep="\t",
na.strings="NA", dec=".", strip.white=TRUE)
dep <- presu[,3];
exo <- presu[,4:92];

Now I want to analyze it with the wls and quantreg packages. How do I change
the data classes for mle and rq objects?

Thanks a lot,

Ricardo Gonçalves Silva, M. Sc.
Apoio aos Processos de Modelagem Matemática
Econometria & Inadimplência
Serasa S.A.
(11) - 6847-8889
[EMAIL PROTECTED]




[R] MLE maximum number of parameters

2006-06-19 Thread Federico Calboli
Hi All,

I would like to know, is there a *ballpark* figure for how many  
parameters the minimisation routines can cope with?

I'm asking because I was asked if I knew.

Cheers,

Federico

--
Federico C. F. Calboli
Department of Epidemiology and Public Health
Imperial College, St. Mary's Campus
Norfolk Place, London W2 1PG

Tel +44 (0)20 75941602   Fax +44 (0)20 75943193

f.calboli [.a.t] imperial.ac.uk
f.calboli [.a.t] gmail.com



Re: [R] MLE maximum number of parameters

2006-06-19 Thread Ben Bolker
Federico Calboli f.calboli at imperial.ac.uk writes:

 
 Hi All,
 
 I would like to know, is there a *ballpark* figure for how many  
 parameters the minimisation routines can cope with?
 

  I think I would make a distinction between theoretical
and practical limits.  A lot depends on how fast your
objective function is, but I would say in general that
if your objective function has more than a few tens
of parameters you should probably be looking for some
more specialized optimization code (e.g. AD Model Builder,
finding a way to shoehorn it into existing routines
such as nlme/lmer, etc.).
   I would be happy to be corrected though.

  Ben



Re: [R] MLE maximum number of parameters

2006-06-19 Thread Roger D. Peng
It really depends on how well-behaved your objective function is, but I've been 
able to fit a few models with 10--15 parameters.  But I felt like I was 
stretching the limit there.

-roger

Federico Calboli wrote:
 Hi All,
 
 I would like to know, is there a *ballpark* figure for how many  
 parameters the minimisation routines can cope with?
 
 I'm asking because I was asked if I knew.
 
 Cheers,
 
 Federico
 
 --
 Federico C. F. Calboli
 Department of Epidemiology and Public Health
 Imperial College, St. Mary's Campus
 Norfolk Place, London W2 1PG
 
 Tel +44 (0)20 75941602   Fax +44 (0)20 75943193
 
 f.calboli [.a.t] imperial.ac.uk
 f.calboli [.a.t] gmail.com
 
 

-- 
Roger D. Peng  |  http://www.biostat.jhsph.edu/~rpeng/



Re: [R] MLE maximum number of parameters

2006-06-19 Thread Spencer Graves
  Applications with lots of parameters also tend to have parameters in 
a relatively small number of families, and each of these few families 
could be considered to have a distribution.  Splines, for example, have 
lots of parameters -- sometimes more parameters than observations (as do 
neural nets and other data mining techniques).  Spline estimation 
virtually always incorporates some kind of smoothness penalty.  The 
smoothness penalty is in essence an attempt to eat the Bayesian omelet 
without breaking the Bayesian egg.(1)

  In such situations, I believe it's wiser to embrace Bayes at least to 
the extent of using something like lme, nlme or lmer.  This 
redefines the problem in terms of estimating a small number of 
hyperparameters (using Frequentist methods with the 'nlme' and 'lme4' 
packages) and getting individual estimates of the larger number of 
random parameters conditional on the estimates of the hyperparameters.

  Hope this helps.
  Spencer Graves

(1) I don't know where I heard the phrase "eating the Bayesian omelet 
without breaking the Bayesian egg", but Google found it in the 
following: 
http://philosophy.elte.hu/colloquium/2004/May-June/classicalstats.pdf

Roger D. Peng wrote:
 It really depends on how well-behaved your objective function is, but I've 
 been 
 able to fit a few models with 10--15 parameters.  But I felt like I was 
 stretching the limit there.
 
 -roger
 
 Federico Calboli wrote:
 Hi All,

 I would like to know, is there a *ballpark* figure for how many  
 parameters the minimisation routines can cope with?

 I'm asking because I was asked if I knew.

 Cheers,

 Federico

 --
 Federico C. F. Calboli
 Department of Epidemiology and Public Health
 Imperial College, St. Mary's Campus
 Norfolk Place, London W2 1PG

 Tel +44 (0)20 75941602   Fax +44 (0)20 75943193

 f.calboli [.a.t] imperial.ac.uk
 f.calboli [.a.t] gmail.com






Re: [R] MLE maximum number of parameters

2006-06-19 Thread Patrick Burns
Seagulls have a very different perspective to ballparks
than ants.  Nonetheless, there is something that can be
said.

There are several variables in addition to the number of
parameters that are important.  These include:

* The complexity of the likelihood

* The number of observations in the dataset

* How close to the optimum is close enough

* Your patience

The latter is undoubtedly the most important of all.  It
matters a lot whether you think a minute is a long time
or only periods measured in weeks.

The optimization strategy can also have a big effect.  If
you are using a derivative-based optimizer, then the number
of parameters can have a big impact.  Typically one iteration
in such algorithms requires p+1 function calls, where p is the
number of parameters.  Since more iterations are generally
required with more parameters, the speed can decrease
rapidly as the number of parameters increases.

One strategy to deal with a large number of parameters is to
start with something like a genetic algorithm.  Once the genetic
algorithm has a pretty good solution, then switch to a derivative-
based algorithm to finish.  The amount to run the initial
algorithm before switching depends on the problem, the quality
of the two optimizers, and probably other things.

With this switching strategy and at least a modicum of patience,
problems with thousands of parameters may be feasible to solve.
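In base R, the switching strategy can be sketched with optim itself: its stochastic "SANN" method can stand in for the coarse first stage (a dedicated genetic-algorithm package would be a stronger choice), followed by a derivative-based "BFGS" polish. The objective below is a toy stand-in:

```r
set.seed(1)
p <- 50
# Toy objective with minimum at 1:p; replace with a real (negative) log-likelihood.
f <- function(theta) sum((theta - seq_len(p))^2)

# Stage 1: stochastic search to get into the right region.
s1 <- optim(rep(0, p), f, method = "SANN", control = list(maxit = 5000))
# Stage 2: derivative-based polish from the stage-1 solution.
s2 <- optim(s1$par, f, method = "BFGS")

s2$value   # close to zero at the minimum
```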

Patrick Burns
[EMAIL PROTECTED]
+44 (0)20 8525 0696
http://www.burns-stat.com
(home of S Poetry and A Guide for the Unwilling S User)

Federico Calboli wrote:

Hi All,

I would like to know, is there a *ballpark* figure for how many  
parameters the minimisation routines can cope with?

I'm asking because I was asked if I knew.

Cheers,

Federico

--
Federico C. F. Calboli
Department of Epidemiology and Public Health
Imperial College, St. Mary's Campus
Norfolk Place, London W2 1PG

Tel +44 (0)20 75941602   Fax +44 (0)20 75943193

f.calboli [.a.t] imperial.ac.uk
f.calboli [.a.t] gmail.com



  




Re: [R] MLE maximum number of parameters

2006-06-19 Thread Albyn Jones
I regularly optimize functions of over 1000 parameters for posterior
mode computations using a variant of Newton-Raphson.  I have some
favorable conditions: the prior is pretty good, the posterior is
smooth, and I can compute the gradient and Hessian.

albyn


On Mon, Jun 19, 2006 at 06:53:00PM +0100, Patrick Burns wrote:
 Seagulls have a very different perspective to ballparks
 than ants.  Nonetheless, there is something that can be
 said.
 
 There are several variables in addition to the number of
 parameters that are important.  These include:
 
 * The complexity of the likelihood
 
 * The number of observations in the dataset
 
 * How close to the optimum is close enough
 
 * Your patience
 
 The latter is undoubtedly the most important of all.  It
 matters a lot whether you think a minute is a long time
 or only periods measured in weeks.
 
 The optimization strategy can also have a big effect.  If
 you are using a derivative-based optimizer, then the number
 of parameters can have a big impact.  Typically one iteration
 in such algorithms requires p+1 function calls, where p is the
 number of parameters.  Since more iterations are generally
 required with more parameters, the speed can decrease
 rapidly as the number of parameters increases.
 
 One strategy to deal with a large number of parameters is to
 start with something like a genetic algorithm.  Once the genetic
 algorithm has a pretty good solution, then switch to a derivative-
 based algorithm to finish.  The amount to run the initial
 algorithm before switching depends on the problem, the quality
 of the two optimizers, and probably other things.
 
 With this switching strategy and at least a modicum of patience,
 problems with thousands of parameters may be feasible to solve.
 
 Patrick Burns
 [EMAIL PROTECTED]
 +44 (0)20 8525 0696
 http://www.burns-stat.com
 (home of S Poetry and A Guide for the Unwilling S User)
 
 Federico Calboli wrote:
 
 Hi All,
 
 I would like to know, is there a *ballpark* figure for how many  
 parameters the minimisation routines can cope with?
 
 I'm asking because I was asked if I knew.
 
 Cheers,
 
 Federico
 
 --
 Federico C. F. Calboli
 Department of Epidemiology and Public Health
 Imperial College, St. Mary's Campus
 Norfolk Place, London W2 1PG
 
 Tel +44 (0)20 75941602   Fax +44 (0)20 75943193
 
 f.calboli [.a.t] imperial.ac.uk
 f.calboli [.a.t] gmail.com
 
 
 
   
 
 



[R] mle package

2006-05-02 Thread Alexander Nervedi
Hi !

There used to be a package called mle for maximum likelihood estimation. I 
couldn't find it when I tried to get the package.  Is this still available? 
Perhaps under another package?

I'd appreciate any suggestion on this.

Alex



Re: [R] mle package

2006-05-02 Thread Dimitris Rizopoulos
Probably you're looking for the function mle() in package stats4.

I hope it helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://www.med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm


- Original Message - 
From: Alexander Nervedi [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Sent: Tuesday, May 02, 2006 9:20 AM
Subject: [R] mle package


 Hi !

 There used to be a package called mle for maximum likelihood 
 estimation. I
 couldn't find it when I tried to get the package.  Is this still 
 available?
 Perhaps under another package?

 I'd appreciate any suggestion on this.

 Alex



Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm



Re: [R] MLE

2006-04-18 Thread Ben Bolker
 vkatoma at cs.uct.ac.za writes:

 
 
 Hi ,
 
 I want to compute the MLE for a simple sample of data, say
 45,26,98,65,25,36,42,62,28,36,15,48,45, of which I obviously have the mean
 and the sd. Is there a way of calling the log-normal and its already
 differentiated formula, other than entering the whole formula myself?
 
 Victor
 

  You haven't given us enough information / described your problem
clearly enough.  The maximum likelihood
estimate of _what_?  fitdistr() in the MASS package might help.
If you want (log-)likelihoods, dlnorm (for the log-normal) and dnorm
might help.
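For the sample quoted above, fitting a log-normal with fitdistr is a one-liner (for the "lognormal" family, fitdistr uses the closed-form ML estimates of meanlog and sdlog):

```r
library(MASS)

x <- c(45, 26, 98, 65, 25, 36, 42, 62, 28, 36, 15, 48, 45)
fit <- fitdistr(x, "lognormal")   # ML estimates of meanlog and sdlog
fit$estimate
```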
   
  Ben Bolker



[R] MLE

2006-04-17 Thread vkatoma

Hi ,

I want to compute the MLE for a simple sample of data, say
45,26,98,65,25,36,42,62,28,36,15,48,45, of which I obviously have the mean
and the sd. Is there a way of calling the log-normal and its already
differentiated formula, other than entering the whole formula myself?

Victor



[R] mle.

2006-03-01 Thread Arun Kumar Saha
hi all,

suppose I have:

r[i] ~ N(0, h[i])
h[i] = a + b*r[i-1] + c*h[i-1] for all i=2..n

I want to get estimates of a, b, c by mle.

Can you tell me how to do that?


thanks in advance,
Arun




Re: [R] mle.

2006-03-01 Thread Ben Bolker
Arun Kumar Saha arun.kumar.saha at gmail.com writes:

 
 hi all,
 
 suppose I have:
 
 r[i] ~ N(0, h[i])
 h[i] = a + b*r[i-1] + c*h[i-1] for all i=2..n
 
 I want to get estimates of a, b, c by mle.
 
 Can you tell me how to do that?
 

  (1) can you convince us this isn't a homework problem?
(a little bit more context would be nice in any case)
  (2) are both r and h observed? if so, then
your (negative log-)likelihood function can use
something along the lines of 
-sum(dnorm(r,mean=...,sd=...,log=TRUE))-sum(dnorm(h,mean=...,sd=...,log=TRUE))
where you fill in the predicted mean and std. dev. of each.
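If instead only r is observed and h is generated by the recursion, one way to sketch the negative log-likelihood is below. Treating h[i] as the variance of r[i] and fixing h[1] at the sample variance are both assumptions, not part of the original question; adapt them to the actual model.

```r
# Hypothetical sketch: negative log-likelihood for r[i] ~ N(0, h[i]) with
# h[i] = a + b*r[i-1] + c*h[i-1], h interpreted as a variance.
negll <- function(par, r) {
  a <- par[1]; b <- par[2]; c <- par[3]
  n <- length(r)
  h <- numeric(n)
  h[1] <- var(r)                      # assumed initialization
  for (i in 2:n) h[i] <- a + b * r[i - 1] + c * h[i - 1]
  if (any(h[-1] <= 0)) return(Inf)    # variances must stay positive
  -sum(dnorm(r[-1], mean = 0, sd = sqrt(h[-1]), log = TRUE))
}

set.seed(1)
r <- rnorm(200)                       # placeholder data
optim(c(0.1, 0, 0.5), negll, r = r)$par
```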

  Ben Bolker



[R] MLE

2006-01-15 Thread gynmeerut
Dear All,
Can somebody tell me how to do maximum likelihood estimation in R 
for a non-linear function?
My function is non-linear; it has four parameters and only one explanatory 
variable.
If possible, please point me to a source so that I can write my own code for it.


Thanks,

GS



[R] MLE with optim

2005-06-29 Thread Carsten Steinhoff
Hello,
 
I tried to fit a lognormal distribution by using optim. But sadly the output
seems to be incorrect.
Who can tell me where the bug is?
 
test = rlnorm(100,5,3)
logL= function(parm, x,...) -sum(log(dlnorm(x,parm,...)))
start= list(meanlog=5, sdlog=3)
optim(start,logL,x=test)$par
 
Carsten.




Re: [R] MLE with optim

2005-06-29 Thread Sundar Dorai-Raj


Carsten Steinhoff wrote:
 Hello,
  
 I tried to fit a lognormal distribution by using optim. But sadly the output
 seems to be incorrect.
 Who can tell me where the bug is?
  
 test = rlnorm(100,5,3)
 logL= function(parm, x,...) -sum(log(dlnorm(x,parm,...)))
 start= list(meanlog=5, sdlog=3)
 optim(start,logL,x=test)$par
  
 Carsten.
 

You are only supplying the meanlog argument to dlnorm. Also you can set 
log = TRUE in dlnorm to avoid the log computation after calling dlnorm.

set.seed(1)
x <- rlnorm(100, 5, 3)
logL <- function(par, x) -sum(dlnorm(x, par[1], par[2], TRUE))
start <- list(meanlog = 5, sdlog = 2)
optim(start, logL, x = x)

Why not just use MASS::fitdistr instead?

library(MASS)
fitdistr(x, dlnorm, start)

HTH,

--sundar



Re: [R] MLE with optim

2005-06-29 Thread Dimitris Rizopoulos
the following work for me:


x <- rlnorm(1000, 5, 3)

fn <- function(parms, dat) -sum(dlnorm(dat, parms[1], parms[2], log = TRUE))
optim(c(5, 3), fn, dat = x)

library(MASS)
fitdistr(x, "log-normal", list(meanlog = 5, sdlog = 3))


I hope it helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/16/336899
Fax: +32/16/337015
Web: http://www.med.kuleuven.be/biostat/
 http://www.student.kuleuven.ac.be/~m0390867/dimitris.htm


- Original Message - 
From: Carsten Steinhoff [EMAIL PROTECTED]
To: r-help@stat.math.ethz.ch
Sent: Wednesday, June 29, 2005 5:19 PM
Subject: [R] MLE with optim


 Hello,

 I tried to fit a lognormal distribution by using optim. But sadly 
 the output
 seems to be incorrect.
 Who can tell me where the bug is?

 test = rlnorm(100,5,3)
 logL= function(parm, x,...) -sum(log(dlnorm(x,parm,...)))
 start= list(meanlog=5, sdlog=3)
 optim(start,logL,x=test)$par

 Carsten.






[R] MLE for two random variables

2005-03-12 Thread Carsten Steinhoff
Hello,

I've the following setting:

(1) Data from a source without truncation   (x)

(2) Data from an other source with left-truncation at threshold u   (xu)

I have to fit a model to these two sources; thereby I assume that both
are drawn from the same distribution (e.g. lognormal). In an MLE I would sum
the densities and maximize. The R function could be:

function(x,xu,u,mu,sigma)
dlnorm(x,mu,sigma)+(dlnorm(xu,mu,sigma)/(plnorm(u,mu,sigma)))

So I tried to use the function FITDISTR for the MLE. But unfortunately it
only accepts ONE random variable.
Then I tried to combine x and xu in one vector, but that doesn't work either,
because the lengths of x and xu differ.

Does anybody have a solution for my problem?

Thanks in advance.

Carsten

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] MLE for two random variables

2005-03-12 Thread Spencer Graves
   Just to make sure: do you have any information on events where xu < 
u, i.e. do you know how many such events there were, and do you know u for those 
events?  If yes, then that's called censoring, not truncation.  For 
that, the survival package seems pretty good.  I found the information 
in Venables and Ripley, Modern Applied Statistics with S, helpful. 

 If you don't have data when xu < u, do you know u or must that be 
estimated also?  In either case, I would likely use optim, though 
nlm might work also.  If I knew u and didn't have to estimate it, then 
I might combine the data as follows: 

XU <- data.frame(x = c(x, xu),
                 u = c(rep(0, length(x)), rep(u, length(xu))))
 If I needed to estimate u, I'd modify this as follows: 

XU. <- data.frame(x = c(x, xu),
                  u = c(rep(0, length(x)), rep(1, length(xu))))
 Then I'd write a function to compute, e.g., dev = 
(-2)*log(likelihood) and use optim or nlm to minimize this.  If I had to 
estimate u, I might actually try to estimate ln.up = log(min(xu)-u);  I 
may or may not get better numerical convergence using this than trying 
to estimate u directly. 
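A hedged sketch of that recipe for known u (my code, not Spencer's; note it divides the truncated density by P(X > u), the usual left-truncation correction, rather than by plnorm(u) as in the question):

```r
# Combined deviance for one untruncated source (u = 0 rows) and one
# source left-truncated at a known threshold u (illustrative data).
set.seed(7)
u  <- 2
x  <- rlnorm(300, meanlog = 1, sdlog = 0.5)    # untruncated source
xu <- rlnorm(2000, meanlog = 1, sdlog = 0.5)
xu <- xu[xu > u][1:300]                        # keep 300 truncated obs
XU <- data.frame(x = c(x, xu),
                 u = c(rep(0, length(x)), rep(u, length(xu))))
dev <- function(p, d) {
  mu <- p[1]; sigma <- exp(p[2])
  ll <- dlnorm(d$x, mu, sigma, log = TRUE) -
        log(1 - plnorm(d$u, mu, sigma))  # log P(X > u); zero when u == 0
  -2 * sum(ll)
}
est <- optim(c(0, 0), dev, d = XU)$par
c(meanlog = est[1], sdlog = exp(est[2]))       # near 1 and 0.5
```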

 hope this helps. 
 spencer graves 

Carsten Steinhoff wrote:
Hello,
I've the following setting:
(1) Data from a source without truncation   (x)
(2) Data from another source with left-truncation at threshold u   (xu)
I have to fit a model to these two sources, assuming that both
are drawn from the same distribution (eg lognormal). In a MLE I would sum
the densities and maximize. The R-Function could be:
function(x,xu,u,mu,sigma)
dlnorm(x,mu,sigma)+(dlnorm(xu,mu,sigma)/(plnorm(u,mu,sigma)))
So I tried to use the function FITDISTR for the MLE. But unfortunately it
only accepts ONE random variable.
Then I tried to combine x and xu in one vector, but that doesn't work either,
because the lengths of x and xu differ.
Does anybody have a solution for my problem?
Thanks in advance.
Carsten
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] MLE: Question

2005-02-07 Thread h . brunschwig

Hi R users! 

I have a likelihood ratio statistic that depends on a parameter delta and I am
trying to get confidence intervals for this delta using the fact that the
likelihood ratio statistic is approx. chi-squared distributed.

For this I need to maximize the two likelihoods (for the ratio statistic) one of
which depends on delta too and I am trying to use the function mle. But delta
is neither a parameter I would like to estimate nor a fixed value that I know
the value of (to give it to the mle function with fixed=list(delta=?)). 

Can I still use mle to find the likelihood of the data (just denoting delta as
a constant)?

Thank you.

Hadassa

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] MLE: Question

2005-02-07 Thread Peter Dalgaard
[EMAIL PROTECTED] writes:

 Hi R users! 
 
 I have a likelihood ratio statistic that depends on a parameter delta and I am
 trying to get confidence intervals for this delta using the fact that the
 likelihood ratio statistic is approx. chi-squared distributed.
 
 For this I need to maximize the two likelihoods (for the ratio statistic) one 
 of
 which depends on delta too and I am trying to use the function mle. But 
 delta
 is neither a parameter I would like to estimate nor a fixed value that I know
 the value of (to give it to the mle function with fixed=list(delta=?)). 
 
 Can I still use mle to find the likelihood of the data (just denoting delta 
 as
 a constant)?

Er, ... Why wouldn't delta be a parameter?? If you consider the
likelihood a function of delta and all the other parameters in the
model can you not maximize it and profile over delta?
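An illustration of that point (my sketch, not part of Peter's reply): once delta is treated as an ordinary parameter of mle(), confint() profiles the likelihood over it and inverts the chi-squared likelihood-ratio test automatically.

```r
# Treat delta (here: a normal mean) as a parameter and profile over it.
library(stats4)
set.seed(5)
y <- rnorm(200, mean = 1.5, sd = 2)
nll <- function(delta, logsigma)
  -sum(dnorm(y, mean = delta, sd = exp(logsigma), log = TRUE))
fit <- mle(nll, start = list(delta = 0, logsigma = 0))
confint(fit)   # profile-likelihood intervals for delta and logsigma
```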

-- 
   O__   Peter Dalgaard Blegdamsvej 3  
  c/ /'_ --- Dept. of Biostatistics 2200 Cph. N   
 (*) \(*) -- University of Copenhagen   Denmark  Ph: (+45) 35327918
~~ - ([EMAIL PROTECTED]) FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] mle() and with()

2005-01-10 Thread Ben Bolker
  I'm trying to figure out the best way of fitting the same negative 
log-likelihood function to more than one set of data, using mle() from the 
stats4 package.

Here's what I would have thought would work:
--
library(stats4)
## simulate values
r = rnorm(1000,mean=2)
## very basic neg. log likelihood function
mll <- function(mu,logsigma) {
  -sum(dnorm(r,mean=mu,sd=exp(logsigma),log=TRUE))
}
mle(minuslogl=mll,start=list(mu=1,logsigma=0))
r2 = rnorm(1000,mean=3) ## second data set
with(list(r=r2),
 mle(minuslogl=mll,start=list(mu=1,logsigma=0))
   )
-
but this doesn't work -- it fits to the original data set, not the new one 
--- presumably because mll() picks up its definition of r when it is 
*defined* -- so using with() at this point doesn't help.

  If I rm(r) then I get an 'Object r not found' error.
I can do something like the following, defining the negative 
log-likelihood function within the mle() call ...

lf = function(data) {
  mle(minuslogl=function(mu,logsigma) {
-sum(dnorm(data,mean=mu,sd=exp(logsigma),log=TRUE))
  },start=list(mu=1,logsigma=0))
}
lf(r)
lf(r2)
---
 ... and in this case there's no point using with().
 can someone help me understand this behavior and to find a clean way to 
use mle() on a predefined likelihood function that allows substitution of 
an arbitrary data set?
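One hedged answer (mine, not from the thread): mll() finds r by lexical scoping in the environment where it was defined, so with() cannot intercept it, but you can give a copy of the function a new enclosing environment that shadows r:

```r
# Re-fit the same predefined negative log-likelihood on new data by
# swapping the function's enclosing environment (illustrative helper).
library(stats4)
set.seed(1)
r  <- rnorm(1000, mean = 2)
r2 <- rnorm(1000, mean = 3)
mll <- function(mu, logsigma) {
  -sum(dnorm(r, mean = mu, sd = exp(logsigma), log = TRUE))
}
refit <- function(f, dat) {
  environment(f) <- new.env(parent = environment(f))
  assign("r", dat, environment(f))     # shadow r for this copy of f only
  mle(minuslogl = f, start = list(mu = 1, logsigma = 0))
}
coef(refit(mll, r))[["mu"]]    # near 2
coef(refit(mll, r2))[["mu"]]   # near 3
```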

  R 2.0.0 on Gentoo (trying to stick with the package management system so 
haven't installed 2.0.1 yet)

 thanks,
   Ben Bolker
--
620B Bartram Hall[EMAIL PROTECTED]
Zoology Department, University of Floridahttp://www.zoo.ufl.edu/bolker
Box 118525   (ph)  352-392-5697
Gainesville, FL 32611-8525   (fax) 352-392-3704
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] MLE, precision

2004-07-29 Thread boshao zhang
Dear Spencer:

My problem got solved by using Matlab. It runs pretty
quickly (less than 5 seconds) and the result is stable
with respect to the initial values. I was amazed.
Here my t and d are as long as 2390; summing the functions
over t and d, the function becomes daunting. But I'd
still like to try nlminb (I gave up on the ms function in S and
nlm in R). 

But how can I sum the functions over t? To simplify
the problem, I just need to know the following:
t <- 1:2
I would like to get f = sum(t*x + 1) over t. I tried
the following:
 f <- 0
 for (i in 1:2){ 
   g <- function(x){~t[i]*x+1}; f = f + g
   }
Problem in f + g: needed atomic data, got an object of
class "function"
Use traceback() to see the call stack

As you see, it refuses to work.

Please give me advice. 
thank you.

Boshao




--- Spencer Graves [EMAIL PROTECTED] wrote:

   Have you considered estimating ln.m1, ln.m2,
 and ln.b, which makes 
 the negative log likelihood something like the
 following: 
 
 l.ln <- function(ln.m1, ln.m2, ln.b){
     m1 <- exp(ln.m1); m2 <- exp(ln.m2); b <- exp(ln.b)
     lglk <- d*( ln.m1 + ln.m2 + log1p(-exp(-(b+m2)*t)) ) +
         (m1/b - d)*log(m2 + b*exp(-(b+m2)*t)) +
         m1*t - m1/b*log(b+m2)
 
 (-sum(lglk))
 }
 # NOT TESTED
 
 I don't know if I have this correct, but you
 should get the idea.  Parameterizing in terms of
 logarithms automatically eliminates the constraints
 that m1, m2, and b must be positive.  
 
 I also prefer to play with the function until I'm
 reasonably confident it will never produce NAs, and
 I use a few tricks to preserve numerical precision
 where I can.  For example, log(b+m2) = log(b) +
 log1p(m2/b) = log(m2) + log1p(b/m2).  If you use the
 first form when b is larger and the second when m2
 is larger, you should get more accurate answers. 
 Using, e.g.:  
 
 log.apb <- function(log.a, log.b){
     min.ab <- pmin(log.a, log.b)
     max.ab <- pmax(log.a, log.b)
     max.ab + log1p(exp(min.ab - max.ab))
 }
 # NOT TESTED
 
 If log.a and log.b are both really large, a and b
 could be Inf when log.a and log.b are finite. 
 Computing log(a+b) like this eliminates that
 problem.  The same problem occurs when log.a and
 log.b are so far negative that a and b are both
 numerically 0, even though log.a and log.b are very
 finite.  This function eliminates that problem.  
 
 Also, have you tried plotting your l vs. m1
 with m2 and b constant, and then vs. m2 with m2 and
 b constant and vs. b with m1 and m2 constant?  Or
 (better) make contour plots of l vs. any 2 of
 these parameters with the other held constant.  When
 I've done this in crudely similar situations, I've
 typically found that the log(likelihood) was more
 nearly parabolic in terms of ln.m1, ln.m2, and ln.b
 than in terms of the untransformed variables.  This
 means that the traditional Wald confidence region
 procedures are more accurate, as they assume that
 the log(likelihood) is parabolic in the parameters
 estimated.  
 
 hope this  helps.  spencer graves
 p.s.  I suggest you avoid using t as a variable
 name:  That's the name of the function for the
 transpose of a matrix.  R can usually, though not
 always, tell from the context what you want. 
 However, it's best to avoid that ambiguity.  I often
 test at a command prompt variable names I want to
 use.  If the response is "object not found", then I
 feel like I can use it.  
 
 boshao zhang wrote:
 
 Hi, everyone
 
 I am trying to estimate 3 parameters for my
 survival
 function. It's very complicated. The negative
 loglikelihood function is:
 
 l <- function(m1,m2,b)  -sum(d*( log(m1) +
 log(m2)
 + log(1- exp(-(b + m2)*t)) ) + (m1/b - d)*log(m2 +
 b*exp(-(b + m2)*t) ) + m1*t - m1/b*log(b+m2)  )
 
 here d and t are given; "sum" means sum over these
 two variables. 
 The parameters are assumed small: m1, m2 in
 thousandths, b in millionths.
 
 I used the function nlm to estimate m1,m2,b. But
 the
 result is very bad: you can get more than 50
 warnings,
 most of them about negative infinity in log.
 And
 the results are initial value dependent, or you
 will
 get nothing when you choose some values.
 
 So I tried brute force, i.e. evaluating the values
 at
 grid points. It works well. Also, you can get the
 correct answer of log(1e-12).
 
 My questions are:
  What is the precision of a variable in R?
  How to specify the constraint interval of
 parameters
 in nlm? I tried lower and upper; they don't work.
 any advice on MLE is appreciated.
 
 Thank you.
 
 Boshao
 
 __
 [EMAIL PROTECTED] mailing list

https://www.stat.math.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html
   
 
 


__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 

Re: [R] MLE, precision

2004-07-29 Thread Spencer Graves
 1.  Don't use t as a variable name.  It is the name of the 
matrix transpose function.  In most but not all contexts, R is smart 
enough to tell whether you want the system function or the local object. 

 2.  I can't tell from your question what you want.  PLEASE do 
read the posting guide! http://www.R-project.org/posting-guide.html.  
You may be able to get answers to many of your questions by following 
that process.  If you follow that guide and still have a question for 
this listserve, the exercise will likely help you formulate a question 
that will be easier for people to understand, increasing thereby the 
likelihood that you will get an appropriate answer. 

 3.  I wonder if the following will help: 

 (td <- outer(1:3, 4:5))
     [,1] [,2]
[1,]    4    5
[2,]    8   10
[3,]   12   15
 rowSums(td)
[1]  9 18 27
 colSums(td)
[1] 24 30
 hope this helps.  spencer graves
boshao zhang wrote:
Dear Spencer:
My problem got solved by using Matlab. It runs pretty
quickly (less than 5 seconds) and the result is stable
with respect to the initial values. I was amazed.
Here my t and d are as long as 2390; summing the functions
over t and d, the function becomes daunting. But I'd
still like to try nlminb (I gave up on the ms function in S and
nlm in R). 

But how can I sum the functions over t? To simplify
the problem, I just need to know the following:
t <- 1:2
I would like to get f = sum(t*x + 1) over t. I tried
the following:
f <- 0
for (i in 1:2){ 
  g <- function(x){~t[i]*x+1}; f = f + g
  }
Problem in f + g: needed atomic data, got an object of
class "function"
Use traceback() to see the call stack

As you see, it refuses to work.
Please give me advice. 
thank you.

Boshao

--- Spencer Graves [EMAIL PROTECTED] wrote:
 

 Have you considered estimating ln.m1, ln.m2,
and ln.b, which makes 
the negative log likelihood something like the
following: 

l.ln <- function(ln.m1, ln.m2, ln.b){
   m1 <- exp(ln.m1); m2 <- exp(ln.m2); b <- exp(ln.b)
   lglk <- d*( ln.m1 + ln.m2 + log1p(-exp(-(b+m2)*t)) ) +
       (m1/b - d)*log(m2 + b*exp(-(b+m2)*t)) +
       m1*t - m1/b*log(b+m2)

  (-sum(lglk))
}
# NOT TESTED
	  I don't know if I have this correct, but you
should get the idea.  Parameterizing in terms of
logarithms automatically eliminates the constraints
that m1, m2, and b must be positive.  

	  I also prefer to play with the function until I'm
reasonably confident it will never produce NAs, and
I use a few tricks to preserve numerical precision
where I can.  For example, log(b+m2) = log(b) +
log1p(m2/b) = log(m2) + log1p(b/m2).  If you use the
first form when b is larger and the second when m2
is larger, you should get more accurate answers. 
Using, e.g.:  

  log.apb <- function(log.a, log.b){
      min.ab <- pmin(log.a, log.b)
      max.ab <- pmax(log.a, log.b)
      max.ab + log1p(exp(min.ab - max.ab))
  }
  # NOT TESTED
If log.a and log.b are both really large, a and b
could be Inf when log.a and log.b are finite. 
Computing log(a+b) like this eliminates that
problem.  The same problem occurs when log.a and
log.b are so far negative that a and b are both
numerically 0, even though log.a and log.b are very
finite.  This function eliminates that problem.  

	  Also, have you tried plotting your l vs. m1
with m2 and b constant, and then vs. m2 with m2 and
b constant and vs. b with m1 and m2 constant?  Or
(better) make contour plots of l vs. any 2 of
these parameters with the other held constant.  When
I've done this in crudely similar situations, I've
typically found that the log(likelihood) was more
nearly parabolic in terms of ln.m1, ln.m2, and ln.b
than in terms of the untransformed variables.  This
means that the traditional Wald confidence region
procedures are more accurate, as they assume that
the log(likelihood) is parabolic in the parameters
estimated.  

	  hope this  helps.  spencer graves
p.s.  I suggest you avoid using t as a variable
name:  That's the name of the function for the
transpose of a matrix.  R can usually, though not
always, tell from the context what you want. 
However, it's best to avoid that ambiguity.  I often
test at a command prompt variable names I want to
use.  If the response is "object not found", then I
feel like I can use it.  

boshao zhang wrote:

Hi, everyone
I am trying to estimate 3 parameters for my survival
function. It's very complicated. The negative
loglikelihood function is:
l <- function(m1,m2,b)  -sum(d*( log(m1) + log(m2)
+ log(1- exp(-(b + m2)*t)) ) + (m1/b - d)*log(m2 +
b*exp(-(b + m2)*t) ) + m1*t - m1/b*log(b+m2)  )
here d and t are given; "sum" means sum over these
two variables.
The parameters are assumed small: m1, m2 in
thousandths, b in millionths.

I used the function nlm to estimate m1,m2,b. But the
result is very bad: you can get more than 50
warnings,
most of them about negative infinity in log. And
the 

[R] MLE, precision

2004-07-13 Thread boshao zhang
Hi, everyone

I am trying to estimate 3 parameters for my survival
function. It's very complicated. The negative
loglikelihood function is:

l <- function(m1,m2,b)  -sum(d*( log(m1) + log(m2)
+ log(1- exp(-(b + m2)*t)) ) + (m1/b - d)*log(m2 +
b*exp(-(b + m2)*t) ) + m1*t - m1/b*log(b+m2)  )

here d and t are given; "sum" means sum over these
two variables.
The parameters are assumed small: m1, m2 in
thousandths, b in millionths.

I used the function nlm to estimate m1,m2,b. But the
result is very bad: you can get more than 50 warnings,
most of them about negative infinity in log. And
the results are initial value dependent, or you will
get nothing when you choose some values.

So I tried brute force, i.e. evaluating the values at
grid points. It works well. Also, you can get the
correct answer of log(1e-12).

My questions are:
 What is the precision of a variable in R?
 How to specify the constraint interval of parameters
in nlm? I tried lower and upper; they don't work.
any advice on MLE is appreciated.

Thank you.

Boshao
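A side note on the constraint question (my sketch, not from the thread): nlm() has no box constraints, but optim() with method = "L-BFGS-B" accepts lower and upper; Spencer's reply below takes the other route of removing the constraints by reparameterization.

```r
# Box constraints via optim(method = "L-BFGS-B"): keep a rate positive
# while maximizing an exponential likelihood (illustrative data).
set.seed(3)
x <- rexp(500, rate = 2)
nll <- function(p, x) -sum(dexp(x, rate = p, log = TRUE))
rate.hat <- optim(1, nll, x = x, method = "L-BFGS-B", lower = 1e-8)$par
rate.hat   # near 2 (the MLE is 1/mean(x))
```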

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] MLE, precision

2004-07-13 Thread Spencer Graves
 Have you considered estimating ln.m1, ln.m2, and ln.b, which makes 
the negative log likelihood something like the following: 

l.ln <- function(ln.m1, ln.m2, ln.b){
   m1 <- exp(ln.m1); m2 <- exp(ln.m2); b <- exp(ln.b)
   lglk <- d*( ln.m1 + ln.m2 + log1p(-exp(-(b+m2)*t)) ) +
       (m1/b - d)*log(m2 + b*exp(-(b+m2)*t)) +
       m1*t - m1/b*log(b+m2)

  (-sum(lglk))
}
# NOT TESTED
	  I don't know if I have this correct, but you should get the idea.  Parameterizing in terms of logarithms automatically eliminates the constraints that m1, m2, and b must be positive.  

	  I also prefer to play with the function until I'm reasonably confident it will never produce NAs, and I use a few tricks to preserve numerical precision where I can.  For example, log(b+m2) = log(b) + log1p(m2/b) = log(m2) + log1p(b/m2).  If you use the first form when b is larger and the second when m2 is larger, you should get more accurate answers.  Using, e.g.:  

  log.apb <- function(log.a, log.b){
      min.ab <- pmin(log.a, log.b)
      max.ab <- pmax(log.a, log.b)
      max.ab + log1p(exp(min.ab - max.ab))
  }
  # NOT TESTED
If log.a and log.b are both really large, a and b could be Inf when log.a and log.b are finite.  Computing log(a+b) like this eliminates that problem.  The same problem occurs when log.a and log.b are so far negative that a and b are both numerically 0, even though log.a and log.b are very finite.  This function eliminates that problem.  
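A quick numeric check of log.apb (mine, not Spencer's): it matches the naive log(exp(log.a) + exp(log.b)) where that is computable, and stays finite where the naive form overflows to Inf.

```r
# log(a + b) from log(a) and log(b), overflow/underflow safe.
log.apb <- function(log.a, log.b){
  min.ab <- pmin(log.a, log.b)
  max.ab <- pmax(log.a, log.b)
  max.ab + log1p(exp(min.ab - max.ab))
}
log.apb(log(2), log(3))    # log(5)
log(exp(800) + exp(801))   # Inf: the naive form overflows
log.apb(800, 801)          # finite: 801 + log1p(exp(-1))
```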

	  Also, have you tried plotting your l vs. m1 with m2 and b constant, and then vs. m2 with m2 and b constant and vs. b with m1 and m2 constant?  Or (better) make contour plots of l vs. any 2 of these parameters with the other held constant.  When I've done this in crudely similar situations, I've typically found that the log(likelihood) was more nearly parabolic in terms of ln.m1, ln.m2, and ln.b than in terms of the untransformed variables.  This means that the traditional Wald confidence region procedures are more accurate, as they assume that the log(likelihood) is parabolic in the parameters estimated.  

	  hope this  helps.  spencer graves
p.s.  I suggest you avoid using t as a variable name:  That's the name of the function for the transpose of a matrix.  R can usually, though not always, tell from the context what you want.  However, it's best to avoid that ambiguity.  I often test at a command prompt variable names I want to use.  If the response is "object not found", then I feel like I can use it.  

boshao zhang wrote:
Hi, everyone
I am trying to estimate 3 parameters for my survival
function. It's very complicated. The negative
loglikelihood function is:
l <- function(m1,m2,b)  -sum(d*( log(m1) + log(m2)
+ log(1- exp(-(b + m2)*t)) ) + (m1/b - d)*log(m2 +
b*exp(-(b + m2)*t) ) + m1*t - m1/b*log(b+m2)  )
here d and t are given; "sum" means sum over these
two variables.
The parameters are assumed small: m1, m2 in
thousandths, b in millionths.

I used the function nlm to estimate m1,m2,b. But the
result is very bad: you can get more than 50 warnings,
most of them about negative infinity in log. And
the results are initial value dependent, or you will
get nothing when you choose some values.
So I tried brute force, i.e. evaluating the values at
grid points. It works well. Also, you can get the
correct answer of log(1e-12).
My questions are:
What is the precision of a variable in R?
How to specify the constraint interval of parameters
in nlm? I tried lower and upper; they don't work.
any advice on MLE is appreciated.
Thank you.
Boshao
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] mle in the gamma model

2003-11-24 Thread Dominique Couturier
Dear [R]-list,

I'm looking for a classical equivalent of the wle.gamma function (library 
wle), which robustly estimates the shape and scale parameters of gamma 
data.
I have a vector of iid gamma rv :
data=rgamma(100,shape=10,scale=3)
and a vector of their weights:
weights=c(rep(.5/70,70),rep(.25/20,20),rep(.25/10,10))
and want to estimate the scale and shape of the gamma distribution.
Does such a function exist?

thanks,
DLC
__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


Re: [R] mle in the gamma model

2003-11-24 Thread Prof Brian Ripley
library(MASS)
?fitdistr

You can also use survreg (survival) and glm (with some help from functions 
in MASS for the shape).
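fitdistr has no weights argument, though, so for the weighted estimate the questioner asked about, one hedged possibility (my sketch, not Prof. Ripley's) is to maximize the weighted log-likelihood directly:

```r
# Weighted gamma ML fit via optim; parameters kept positive by
# optimizing on the log scale (weights taken from the question).
set.seed(9)
dat <- rgamma(100, shape = 10, scale = 3)
w   <- c(rep(.5/70, 70), rep(.25/20, 20), rep(.25/10, 10))
nllw <- function(p, x, w)
  -sum(w * dgamma(x, shape = exp(p[1]), scale = exp(p[2]), log = TRUE))
est <- optim(c(0, 0), nllw, x = dat, w = w)$par
c(shape = exp(est[1]), scale = exp(est[2]))   # near 10 and 3
```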

On Mon, 24 Nov 2003, Dominique Couturier wrote:

 I'm looking for a classical equivalent of the wle.gamma function (library 
 wle), which robustly estimates the shape and scale parameters of gamma 
 data.
 I have a vector of iid gamma rv :
  data=rgamma(100,shape=10,scale=3)
 and a vector of their weights:
  weights=c(rep(.5/70,70),rep(.25/20,20),rep(.25/10,10))
 and want to estimate the scale and shape of the gamma distribution.
 Does such a function exist?

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help