Re: [R] TeX distribution on Windows

2005-09-06 Thread Göran Broström
On Mon, Sep 05, 2005 at 06:55:26PM -0400, Duncan Murdoch wrote:
 Göran Broström wrote:
 I'm looking for a Windows distribution of TeX that works with  R, after a 
 few years' absence from Windows. On Duncan Murdoch's Rtools page fptex is 
 still recommended, but it turns out that fptex is defunct as of May 2005,
 see 
 
 http://www.metz.supelec.fr/~popineau/xemtex-7.html
 
 So, what is suggested? TUG (tug.org) recommends something called proTeXt,
 which is said to be based on MiKTeX, for Windows users. Since MiKTeX 
 could be used with R, that sounds like a good alternative.
 
 I use MiKTeX, with one or another of the workarounds listed on my page. 
 I've never tried proTeXt; I did a little googling, but I still don't 
 see the point of it exactly.

It's just MiKTeX with a few extras, like WinEdt, ghostscript, etc. As far 
as I understand, MiKTeX itself is untouched.

 
 fptex is still available in various repositories, and is likely to keep 
 working for quite a long time:  R doesn't demand the latest and greatest 
 innovations from TeX/eTeX.

Right. But maybe you should change the broken link to www.fptex.org.

Göran



Re: [R] The Perils of PowerPoint

2005-09-06 Thread Mulholland, Tom
For some reason (probably that our organisation has blocked the site) I could 
not see the original articles that prompted the post. However, I immediately 
assumed that this was precipitated by Tufte and his comments about PowerPoint 
(I recall seeing a good example of PowerPoint on his site) 
http://www.edwardtufte.com/tufte/powerpoint

When this first came up I recall some dispute about the comments 
www.sociablemedia.com/articles_dispute.htm and that John Fox did something 
http://ils.unc.edu/~jfox/powerpoint/introduction.html that I enjoyed reading.

Other links that are lying on my computer are
In defense of PowerPoint http://www.jnd.org/dn.mss/in_defense_of_powerp.html
and Does PowerPoint make you stupid? at 
http://www.presentations.com/presentations/delivery/article_display.jsp?vnu_content_id=1000482464
 
Tom

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Tim Churches
 Sent: Saturday, 3 September 2005 10:08 AM
 To: [EMAIL PROTECTED]
 Cc: Achim Zeileis; r-help@stat.math.ethz.ch
 Subject: Re: [R] The Perils of PowerPoint
 
 
 (Ted Harding) wrote:
 
 By the way, the Washington Post/Minneapolis Star Tribune article is
 somewhat reminiscent of a short (15 min) broadcast on BBC Radio 4
 back on October 18 2004 15:45-16:00 called
 
   Microsoft Powerpoint and the Decline of Civilisation
 
 which explores similar themes and also frequently quotes Tufte.
 Unfortunately it lapsed for ever from Listen Again after the
 statutory week, so I can't point you to a replay. (However, I
 have carefully preserved the cassette recording I made).
   
 
 Try http://sooper.org/misc/powerpoint.mp3 (copyright law 
 notwithstanding...)
 
 Tim C
 




Re: [R] model selection vs. H0 based testing

2005-09-06 Thread Thomas Petzoldt
Hello,

I wish to thank Douglas Bates very much for clarification and pointing 
me to the MCMC simulation method to get p values even for cases where 
Wald tests are inappropriate.

One question however remains when publishing statistical results: does 
it help readers if we combine both,

- AIC based model selection
*and*
- null hypothesis based tests statistics

or should we focus on model selection only and try to reduce the amount 
of tables provided?

Apologies if this question is too much off-topic; you may decide 
to answer off-list. I will give a short summary at the end.

Thomas P.

An article explaining the background:

Johnson, J. & Omland, K.S., Model Selection in Ecology and Evolution. 
Trends in Ecology and Evolution, 2004, 19, 101-108



Re: [R] SpatStat Kest - Error Message help

2005-09-06 Thread Martin Maechler
 AB == Adrian Baddeley [EMAIL PROTECTED]
 on Mon, 5 Sep 2005 10:10:31 +0800 writes:

AB On Thu, 1 Sep 2005, DrakeGis wrote:
 Hi, I'm working with the function Kest in the package SpatStat (under LINUX
 with R 2.1.0). In order to evaluate the statistical significance of my
 point pattern I'm doing 999 Montecarlo replications. The script that use
 the Kest function runs OK for most of the different point patterns that I
 have but for a particular point pattern, which has only 17 points, it
 runs until the 34th iteration and then I receive this message:
 
 Error in "[<-"(`*tmp*`, index, value = NULL) :
 incompatible types (1000) in subassignment type fix
 Execution halted
 
 Do you have any idea about what could be the cause of this ? Thanks in
 advance.

AB This is not an error message from 'spatstat' itself.

AB The message has been generated by the function "[<-", 
AB which is called when you assign values to a subset of a dataset 
AB (in a command like x[z] <- v). The message appears to say that the
AB replacement value v is not of the same type as the original vector x. 
  
yes.
And please get into the habit of saying

traceback()

after such an error.
This would have quickly revealed whether the error came from an R function
called from a function in 'spatstat' or not.

Also, maybe more people should learn about `basic debugging', 
by using something like

   options(error = recover)
or
   options(error = dump.frames) ## needs a later call to debugger()

before running the script that produces the error.
The end of the examples in  ?options  shows an example to use when
running an R script.
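
A minimal sketch of that workflow (a toy error, not the poster's script;
the exact message text varies by R version):

    f <- function(x) {
        x[1] <- NULL      # subassignment with an incompatible replacement
        x
    }
    f(1:5)                # fails with a subassignment error
    traceback()           # shows the call stack, e.g. '1: f(1:5)'

    ## for a non-interactive script:
    options(error = dump.frames)   # on error, frames are saved in 'last.dump'
    ## ... run the failing code, then later:
    ## debugger(last.dump)         # browse the saved frames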
  
AB You say that you are running a script that uses the Kest function.
AB The error is probably inside that script. If you send the script to us
AB we can probably spot the problem for you.

AB As Rolf mentioned in his email, spatstat provides a
AB command 'envelope' to compute simulation envelopes. This
AB might be sufficient for your needs.
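
A minimal sketch of that approach (argument names as in current spatstat;
check ?envelope in your installed version):

    library(spatstat)
    X <- runifpoint(17)                  # toy pattern with 17 points
    E <- envelope(X, Kest, nsim = 999)   # simulation envelopes for K-hat
    plot(E)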

AB regards
AB Adrian Baddeley

Regards,
Martin Maechler



[R] r: chinese installation of r

2005-09-06 Thread Clark Allan
Can anyone help?

A friend's query:
My PC is using the Chinese version of Windows XP, so when I installed R,
Chinese was automatically selected as the default language. How can I
change it? It brings a lot of trouble since some of the output is in
Chinese too.

Re: [R] r: chinese installation of r

2005-09-06 Thread 0034058
In the 'Select Components' step of the installation, uncheck
'Message translations'. Then it will be OK!

- Original Message -
From: Clark Allan [EMAIL PROTECTED]
Date: Tuesday, September 6, 2005, 5:55 PM
Subject: [R] r: chinese installation of r
 Can anyone help?
 
 A friend's query:
 My PC is using the Chinese version of Windows XP, so when I installed R,
 Chinese was automatically selected as the default language. How can I
 change it? It brings a lot of trouble since some of the output is in
 Chinese too.

Re: [R] r: chinese installation of r

2005-09-06 Thread Thomas Petzoldt
Clark Allan schrieb:
 Can anyone help?
 
 A friend's query:
 My PC is using the Chinese version of Windows XP, so when I installed R,
 Chinese was automatically selected as the default language. How can I
 change it? It brings a lot of trouble since some of the output is in
 Chinese too.


The R admin manual

http://cran.r-project.org/doc/manuals/R-admin.html

says:

The preferred language for messages is by default taken from the 
locale. This can be overridden first by the setting of the environment 
variable LANGUAGE and then by the environment variables LC_ALL, 
LC_MESSAGES and LANG. (The last three are normally used to set the 
locale and so should not be needed, but the first is only used to select 
the language for messages.) The code tries hard to map locale names to 
languages, even on Windows.

Note that you should not expect to be able to change the language once R 
is running. 

If your system runs on Windows, define a variable LANGUAGE in the 
system settings (environment) and set it to 'en'.
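
A minimal sketch of the Windows side of this (the variable must be set
before R starts; from a running session you can only verify it):

    ## Control Panel -> System -> Advanced -> Environment Variables:
    ##   LANGUAGE=en
    ## or put the line  LANGUAGE=en  into your .Renviron file.

    ## then, in R, check what is in effect:
    Sys.getenv(c("LANGUAGE", "LC_ALL", "LC_MESSAGES", "LANG"))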

Hope it helps

Thomas Petzoldt



[R] R: optim

2005-09-06 Thread Clark Allan
Hi all,

I don't understand the message that is produced by the optim
function. Can anybody help?

i.e.:
[[1]]$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"



###

SK.FIT(XDATA=a, XDATAname="a", PHI1=1, v=5, vlo=2, vhi=300, phi2lo=.01)
[[1]]
[[1]]$par
[1]  -0.01377906   0.83859445   0.34675230 300.

[[1]]$value
[1] 90.59185

[[1]]$counts
function gradient 
  53   53 

[[1]]$convergence
[1] 0

[[1]]$message
[1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"

#



I have included the function used in the optim call:

SKEWMLE=function(l,DATA=XDATA,...)
{
#alpha = l[1]
#beta = l[2]
#phi2 = l[3]
#v= l[4]
phi1=PHI1

DATA<-as.matrix(DATA)

fnew<-function(x,y,l,...)
{
#when we do not estimate phi1
t1=(1+((y-l[1]-l[2]*x)^2)/(l[4]*l[3]^2))^(-0.5*(1+l[4]))
t2=(1+(x^2)/l[4])^(-0.5*(1+l[4]))

t3=2*((gamma(0.5*(1+l[4]))/(gamma(0.5*l[4])*sqrt(l[4]*pi)))^2)/l[3]

t1*t2*t3
}

a<-double(length(DATA))
y=DATA
a=apply(y,1,function(q)
log(integrate(fnew,lower=0,upper=Inf,y=q,l=l)$value))
-sum(a)
}

Re: [R] Doubt about nested aov output

2005-09-06 Thread Ronaldo Reis-Jr.
Hi Spencer,

On Sun, 04 Sep 2005 20:31, Spencer Graves wrote:
 Others may know the answer to your question, but I don't.  However,
 since I have not seen a reply, I will offer a few comments:

 1.  What version of R are you using?  I just tried superficially
 similar things with the examples in ?aov in R 2.1.1 patched and
 consistently got F and p values.

I'm using the R version 2.1.1 on Linux Debian
Version 2.1.1  (2005-06-20), ISBN 3-900051-07-0

 2.  My preference for this kind of thing is to use lme in
 library(nlme) or lmer in library(lme4).  Also, I highly recommend
 Pinheiro and Bates (2000) Mixed-Effects Models in S and S-Plus (Springer).

Yes, this is my preference too, but I need aov for classes.

 3.  If you still want to use aov and are getting this problem in R 2.1.1,
 could you please provide this list with a small, self-contained example
 that displays the symptoms that concern you?  And PLEASE do read the
 posting guide! http://www.R-project.org/posting-guide.html.  It might
 increase the speed and utility of replies.

 spencer graves

I send the complete example. This is an example from Crawley's book 
(Statistical Computing: An Introduction to Data Analysis Using S-Plus).

This is a classical experiment to show pseudoreplication, from Sokal and Rohlf 
(1995).

In this experiment, 3 treatments are applied to 6 rats (2 rats per 
treatment); from each rat 3 liver preparations are taken, and on each liver 
preparation 2 glycogen readings are made. This gives 6 pseudoreplicates per 
rat. I'm interested in the effect of treatment on the glycogen readings.

Look the R analyses:


> Glycogen <- c(131,130,131,125,136,142,150,148,140,143,160,150,157,145,154,142,147,153,151,155,147,147,162,152,134,125,138,138,135,136,138,140,139,138,134,127)
> Glycogen
 [1] 131 130 131 125 136 142 150 148 140 143 160 150 157 145 154 142 147 153 151
[20] 155 147 147 162 152 134 125 138 138 135 136 138 140 139 138 134 127
> Treatment <- factor(rep(c(1,2,3),c(12,12,12)))
> Treatment
 [1] 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3
Levels: 1 2 3
> Rat <- factor(rep(rep(c(1,2),c(6,6)),3))
> Rat
 [1] 1 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 1 1 2 2 2 2 2 2
Levels: 1 2
> Liver <- factor(rep(rep(c(1,2,3),c(2,2,2)),6))
> Liver
 [1] 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3
Levels: 1 2 3
>
> ### Model made identical to the book
>
> model <- aov(Glycogen~Treatment/Rat/Liver+Error(Treatment/Rat/Liver))
>
> summary(model)

Error: Treatment
          Df  Sum Sq Mean Sq
Treatment  2 1557.56  778.78

Error: Treatment:Rat
              Df Sum Sq Mean Sq
Treatment:Rat  3 797.67  265.89

Error: Treatment:Rat:Liver
                    Df Sum Sq Mean Sq
Treatment:Rat:Liver 12  594.0    49.5

Error: Within
          Df Sum Sq Mean Sq F value Pr(>F)
Residuals 18 381.00   21.17
 
> ### Model made by myself, I'm interested only in Treatment effects
>
> model <- aov(Glycogen~Treatment+Error(Treatment/Rat/Liver))
>
> summary(model)

Error: Treatment
          Df  Sum Sq Mean Sq
Treatment  2 1557.56  778.78

Error: Treatment:Rat
          Df Sum Sq Mean Sq F value Pr(>F)
Residuals  3 797.67  265.89

Error: Treatment:Rat:Liver
          Df Sum Sq Mean Sq F value Pr(>F)
Residuals 12  594.0    49.5

Error: Within
          Df Sum Sq Mean Sq F value Pr(>F)
Residuals 18 381.00   21.17


Why does it not calculate the F and P for Treatment?

Thanks
Ronaldo

-- 
Sadness doesn't pay debts. Nor does bravado,
for that matter.

--Millôr Fernandes
Taken from http://www.uol.com.br/millor
--
|   // | \\   [***]
|   ( õ   õ )  [Ronaldo Reis Júnior]
|  V  [UFV/DBA-Entomologia]
|/ \   [36570-000 Viçosa - MG  ]
|  /(.''`.)\  [Fone: 31-3899-4007 ]
|  /(: :'  :)\ [EMAIL PROTECTED]]
|/ (`. `'` ) \[ICQ#: 5692561 | LinuxUser#: 205366 ]
|( `-  )   [***]
|  _/   \_Powered by GNU/Debian Woody/Sarge



Re: [R] The Perils of PowerPoint

2005-09-06 Thread Duncan Murdoch
Mulholland, Tom wrote:
 For some reason (probably that our organisation has blocked the site) I could 
 not see the original articles that prompted the post. I however immediately 
 assumed that this was precipitated by Tufte and his comments about PowerPoint 
 (I recall seeing a good example of PowerPoint on his site) 
 http://www.edwardtufte.com/tufte/powerpoint
 
 When this first came up I recall some dispute about the comments 
 www.sociablemedia.com/articles_dispute.htm and that John Fox did something 
 http://ils.unc.edu/~jfox/powerpoint/introduction.html that I enjoyed 
reading.

I think that's by a different Fox named Jackson, not John.  It's an 
interesting read, though.

Duncan Murdoch
 
 Other links that are lying on my computer are
 In defense of PowerPoint http://www.jnd.org/dn.mss/in_defense_of_powerp.html
 and Does PowerPoint make you stupid? at 
 http://www.presentations.com/presentations/delivery/article_display.jsp?vnu_content_id=1000482464
  
 Tom
 
 
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Tim Churches
Sent: Saturday, 3 September 2005 10:08 AM
To: [EMAIL PROTECTED]
Cc: Achim Zeileis; r-help@stat.math.ethz.ch
Subject: Re: [R] The Perils of PowerPoint


(Ted Harding) wrote:


By the way, the Washington Post/Minneapolis Star Tribune article is
somewhat reminiscent of a short (15 min) broadcast on BBC Radio 4
back on October 18 2004 15:45-16:00 called

 Microsoft Powerpoint and the Decline of Civilisation

which explores similar themes and also frequently quotes Tufte.
Unfortunately it lapsed for ever from Listen Again after the
statutory week, so I can't point you to a replay. (However, I
have carefully preserved the cassette recording I made).
 


Try http://sooper.org/misc/powerpoint.mp3 (copyright law 
notwithstanding...)

Tim C


 
 



Re: [R] R: optim

2005-09-06 Thread Douglas Bates
On 9/6/05, Clark Allan [EMAIL PROTECTED] wrote:
 hi all
 
 I don't understand the message that is produced by the optim
 function. Can anybody help?
 
 ie:
 [[1]]$message
 [1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
 
 can anyone help?

That code indicates that the optimizer has declared convergence
because the relative reduction in the objective function in successive
iterates is below a tolerance.  As documented in ?optim, a convergence
code of 0 indicates success

...
convergence: An integer code. '0' indicates successful convergence.
  Error codes are
...

This may be counter-intuitive but it does make sense to shell
programmers.  The idea is that there is only one way you can succeed
but there are many different ways of failing so you use the nonzero
codes to indicate the types of failure and the zero code, which we
usually read as FALSE in a logical context, to indicate success.
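
A minimal sketch of the check (a toy objective, not the poster's SK.FIT):

    fit <- optim(par = c(0, 1), fn = function(p) sum((p - c(2, 3))^2),
                 method = "L-BFGS-B")
    if (fit$convergence == 0) {
        message("success: ", fit$message)   # e.g. the CONVERGENCE: ... string
    } else {
        warning("optim failed with code ", fit$convergence)
    }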

 
 
 
 ###
 
 SK.FIT(XDATA=a, XDATAname="a", PHI1=1, v=5, vlo=2, vhi=300, phi2lo=.01)
 [[1]]
 [[1]]$par
 [1]  -0.01377906   0.83859445   0.34675230 300.
 
 [[1]]$value
 [1] 90.59185
 
 [[1]]$counts
 function gradient
   53   53
 
 [[1]]$convergence
 [1] 0
 
 [[1]]$message
 [1] "CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH"
 
 #
 
 
 
 I have included the function used in the optim call:
 
 SKEWMLE=function(l,DATA=XDATA,...)
 {
 #alpha = l[1]
 #beta = l[2]
 #phi2 = l[3]
 #v= l[4]
 phi1=PHI1
 
 DATA<-as.matrix(DATA)
 
 fnew<-function(x,y,l,...)
 {
 #when we do not estimate phi1
 
 t1=(1+((y-l[1]-l[2]*x)^2)/(l[4]*l[3]^2))^(-0.5*(1+l[4]))
 t2=(1+(x^2)/l[4])^(-0.5*(1+l[4]))
 
 t3=2*((gamma(0.5*(1+l[4]))/(gamma(0.5*l[4])*sqrt(l[4]*pi)))^2)/l[3]
 
 t1*t2*t3
 }
 
 a<-double(length(DATA))
 y=DATA
 a=apply(y,1,function(q)
 log(integrate(fnew,lower=0,upper=Inf,y=q,l=l)$value))
 -sum(a)
 }
 
 




Re: [R] model selection vs. H0 based testing

2005-09-06 Thread Ruben Roa
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Thomas Petzoldt
 Sent: 06 September 2005 06:34
 Cc: [EMAIL PROTECTED]; R-Help
 Subject: Re: [R] model selection vs. H0 based testing
 
 
 Hello,
 
 I wish to thank Douglas Bates very much for clarification and 
 pointing me to the MCMC simulation method to get p values even for cases 
 where 
 Wald tests are inappropriate.
 
 One question however remains when publishing statistical 
 results: does it help readers if we combine both,
 
 - AIC based model selection
 *and*
 - null hypothesis based tests statistics
 
 or should we focus on model selection only and try to reduce 
 the amount of tables provided?

IMHO the AIC is sufficient and the null hypothesis test is
not well suited to the problem. As stated by Akaike (1974,
A new look at the statistical model identification, IEEE
Transactions on Automatic Control 19:716-723): "As was noticed by
Lehmann [this is the classic book on the Neyman-Pearson theory of
hypothesis testing], hypothesis testing procedures are traditionally
applied to the situation where actually multiple decision procedures
are required. If the statistical identification procedure is considered
as a decision procedure the very basic problem is the appropriate choice
of the loss function. In the Neyman-Pearson theory of statistical
hypothesis testing only the probabilities of rejecting and accepting
the correct and incorrect hypothesis, respectively, are considered
to define the loss caused by the decision. In practical situations
the assumed null hypotheses are only approximations and they
are almost always different from the reality. Thus the choice of the
loss function in the test theory makes its practical application
logically contradictory. The recognition of this point that the
hypothesis testing procedure is not adequately formulated as a 
procedure of approximation is very important for the development
of practically useful identification procedures."
Note that Akaike speaks of 'model identification' whereas now this
subject area is usually referred to as 'model selection'.
Ruben



[R] help.search problem

2005-09-06 Thread Uzuner, Tolga
Dear Fellow R Users,

I have recently come across a weird problem with help.search:

> help.search("tps")
Error in rbind(...) : number of columns of matrices must match (see arg 8)
 

This happens no matter what I search for...

Any thoughts ?
Thanks,
Tolga




[R] fitting distributions with R

2005-09-06 Thread Nadja Riedwyl
Dear all
I've got the dataset

data: 2743; 4678; 21427; 6194; 10286; 1505; 12811; 2161; 6853; 2625; 14542;
694; 11491; 14924; 28640; 17097; 2136; 5308; 3477; 91301; 11488; 3860;
64114; 14334

I know from other testing that it should be possible to fit the data with the 
exponential distribution. I tried to get parameter estimates for the 
exponential distribution with R, but as the values of the parameter are very 
close to 0 I get into trouble. Do you know what I could do in order to get 
estimates? How do you choose the starting values? In my opinion it should be 
around 1/mean(data).

 
# Parameter estimation with mle() using the log-likelihood function of the
# exponential distribution
library(stats4)
ll <- function(beta)
{n <- 24
x <- data2
-n*log(beta)+beta*sum(x)}
est <- mle(minuslog=ll, start=list(beta=0.1))
summary(est)

# instead of a result, I get:


Error in optim(start, f, method = method, hessian = TRUE, ...) :
        non-finite finite-difference value [1]
In addition: There were 50 or more warnings (use warnings() to see the first 
50)
# with fitdistr() for the exponential distribution
library(MASS)
fitdistr(data2, densfun=dexp, start=list(rate=0.1), lower=6e-06, method="BFGS")

# instead of a result, I get

Error in optim(start, mylogfn, x = x, hessian = TRUE, ...) :
        non-finite finite-difference value [1]
In addition: Warning messages:
1: bounds can only be used with method "L-BFGS-B" in: optim(start, mylogfn, x = 
x, hessian = TRUE, ...)
2: NaNs produced in: dexp(x, 1/rate, log)


I'll be very happy for any help I can get to solve this problem.
Thank you!



Re: [R] fitting distributions with R

2005-09-06 Thread Huntsinger, Reid
The MLE of beta is the reciprocal of the sample mean, so you don't need an
optimizer here. 
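
In R, with data2 holding the 24 values posted (a hedged sketch, not part
of the original message):

    data2 <- c(2743, 4678, 21427, 6194, 10286, 1505, 12811, 2161, 6853,
               2625, 14542, 694, 11491, 14924, 28640, 17097, 2136, 5308,
               3477, 91301, 11488, 3860, 64114, 14334)
    1/mean(data2)   # closed-form MLE of the rate, about 6.77e-05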

Reid Huntsinger

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Nadja Riedwyl
Sent: Tuesday, September 06, 2005 9:39 AM
To: r-help@stat.math.ethz.ch
Subject: [R] fitting distributions with R


Dear all
I've got the dataset
data: 2743; 4678; 21427; 6194; 10286; 1505; 12811; 2161; 6853; 2625; 14542;
694; 11491; 14924; 28640; 17097; 2136; 5308; 3477; 91301; 11488; 3860;
64114; 14334

I know from other testing that it should be possible to fit the data with
the exponential distribution. I tried to get parameter estimates for the
exponential distribution with R, but as the values of the parameter are very
close to 0 I get into trouble. Do you know what I could do in order to get
estimates? How do you choose the starting values? In my opinion it should be
around 1/mean(data).

 
# Parameter estimation with mle() using the log-likelihood function of the
# exponential distribution
library(stats4)
ll <- function(beta)
{n <- 24
x <- data2
-n*log(beta)+beta*sum(x)}
est <- mle(minuslog=ll, start=list(beta=0.1))
summary(est)

# instead of a result, I get:


Error in optim(start, f, method = method, hessian = TRUE, ...) :
        non-finite finite-difference value [1]
In addition: There were 50 or more warnings (use warnings() to see the first

50)
# with fitdistr() for the exponential distribution
library(MASS)
fitdistr(data2, densfun=dexp, start=list(rate=0.1), lower=6e-06, method="BFGS")

# instead of a result, I get

Error in optim(start, mylogfn, x = x, hessian = TRUE, ...) :
        non-finite finite-difference value [1]
In addition: Warning messages:
1: bounds can only be used with method "L-BFGS-B" in: optim(start, mylogfn, x = 
x, hessian = TRUE, ...)
2: NaNs produced in: dexp(x, 1/rate, log)


I'll be very happy for any help I can get to solve this problem.
Thank you!




Re: [R] The Perils of PowerPoint

2005-09-06 Thread Ted Harding
On 06-Sep-05 Mulholland, Tom wrote:
 For some reason (probably that our organisation has blocked the site) I
 could not see the original articles that prompted the post. I however
 immediately assumed that this was precipitated by Tufte and his
 comments about PowerPoint (I recall seeing a good example of PowerPoint
 on his site) http://www.edwardtufte.com/tufte/powerpoint
 
 When this first came up I recall some dispute about the comments
 www.sociablemedia.com/articles_dispute.htm and that John Fox did
 something http://ils.unc.edu/~jfox/powerpoint/introduction.html that I
 enjoyed reading.
 
 Other links that are lying on my computer are
 In defense of PowerPoint
 http://www.jnd.org/dn.mss/in_defense_of_powerp.html
 and Does PowerPoint make you stupid? at
http://www.presentations.com/presentations/delivery/
article_display.jsp?vnu_content_id=1000482464
  
 Tom

Thanks, Tom, for these pointers to interesting discussions!
One must of course agree with the general comments to the effect
that the quality and merits of a presentation are the result of
choices made by the person who designed it, and not primarily due
to the software itself. It is also true that software such as
PowerPoint provides ready-made mechanisms for linking-in a great
variety of content, thereby making it -- in principle -- easier
for the designer to choose judiciously what would be best for the
result they wish to achieve and -- in principle -- to design an
outstanding presentation.

It is nevertheless still true that in practice the result is often
dreadful, for reasons which largely reside in the software (but
which take effect by virtue of user deficiency).

I tend to put this down to the provision of so-called Wizards
-- in reality electronic snake-oil merchants -- the prototype of
which is the dancing paper-clip masquerading as an Office
Assistant. There are other resources which can have similar
effects -- spell-checkers, grammar-checkers, auto-formatters
which brush you aside and re-arrange your intentions and which
can be difficult to evade: indeed, one can form the impression
that it has been deliberately made difficult for users to ignore
these things and make their own choices.

In case you may wonder how I hope to bring this On-Topic, it is
as follows. The result of such things is that users' thought
and practice become software-led and software-driven. The software
is both carrot and stick. The user is the donkey.

In contrast, as software and in its implementation as a compendium
of resources and documentation, R expects users to know what they
are doing and to understand the rationale of the methods. R also
requires users to have the capability to locate necessary information
in the documentation. Indeed, one might even describe R documentation
as notoriously unintrusive!

So using R should educate users in thoughtful and judicious use of
statistical software. The same cannot be said so wholeheartedly of
S-Plus. While the latter is basically routine-equivalent to R, and
the help and menu systems properly used can also encourage judicious
use, there is nevertheless a superficial aspect which can seduce users
into a check-box mentality; and the printed manuals strike me as
both unclear and unduly prescriptive.

In other words, while S-Plus may tend to attract users who do not
know what to do and who expect the software to tell them what to do
(and subsequently will not know what they have done), R will not.

This spartan environment is lean and healthy, so successful R users
will become lean and healthy! Not donkeys, but mountain-goats.
R-help is there for those who need it, and very few responses to
queries have been at all superficial. Often it is clear that
respondents themselves have had to think before being able to come
up with an answer, and very often the response urges the questioner
to think! Indeed, evidence of thought on the part of the questioner
is something of a pre-requisite for getting a response.

The underlying thought behind all this is that there is something
of an under-current of disquiet in the statistical community about
software-driven analysis, an increasingly prevalent abuse of our
subject. Occasionally it comes to the surface. Crass abuses such
as are encouraged by PowerPoint snake-oil and the like are obvious;
but once we perceive them we can be sensitised to similar but more
subtle dangers in other software. Conscious remedial effort would
be a good thing, and R seems to be an excellent vehicle for it.

Thanks for reading so far!

Best wishes to all,
Ted.



E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 06-Sep-05   Time: 14:29:26
-- XFMail --


Re: [R] help.search problem

2005-09-06 Thread Henrik Bengtsson
What version of R and what operating system?  What packages do you have 
loaded?

Try utils::help.search("tps"); does that work? Have you tried it in a 
fresh R session, i.e. starting with R --vanilla?

If you can't get it to work after this, report the above information 
plus what you get from traceback() after you get the error.

Cheers

Henrik

Uzuner, Tolga wrote:
 Dear Fellow R Users,
 
 I have recently come across a weird problem with help.search:
 
 
> help.search("tps")
 
 Error in rbind(...) : number of columns of matrices must match (see arg 8)
 
 
 This happens no matter what I search for...
 
 Any thoughts ?
 Thanks,
 Tolga
 
 
 




Re: [R] Doubt about nested aov output

2005-09-06 Thread Douglas Bates
On 9/6/05, Ronaldo Reis-Jr. [EMAIL PROTECTED] wrote:
 Hi Spencer,
 
 On Sun, 04 Sep 2005 20:31, Spencer Graves wrote:
  Others may know the answer to your question, but I don't.  However,
  since I have not seen a reply, I will offer a few comments:
 
  1.  What version of R are you using?  I just tried superficially
  similar things with the examples in ?aov in R 2.1.1 patched and
  consistently got F and p values.
 
 I'm using the R version 2.1.1 on Linux Debian
 Version 2.1.1  (2005-06-20), ISBN 3-900051-07-0
 
  2.  My preference for this kind of thing is to use lme in
  library(nlme) or lmer in library(lme4).  Also, I highly recommend
  Pinheiro and Bates (2000) Mixed-Effects Models in S and S-Plus (Springer).
 
 Yes, this is my preference too, but I need aov for classes.
 
  3.  If you still want to use aov and are getting this problem in R 
  2.1.1,
  could you please provide this list with a small, self contained example
  that displays the symptoms that concern you?  And PLEASE do read the
  posting guide! http://www.R-project.org/posting-guide.html.  It might
  increase the speed and utility of replies.
 
  spencer graves
 
 I send the complete example. This is an example from Crawley's book
 (Statistical Computing: An Introduction to Data Analysis Using S-Plus).
 
 This is a classical experiment to show pseudoreplication, from Sokal and Rohlf
 (1995).
 
 In this experiment, 3 treatments are applied to 6 rats (2 rats per
 treatment); from each rat 3 liver preparations are taken, and on each liver
 preparation 2 glycogen readings are made. This gives 6 pseudoreplicates per
 rat. I'm interested in the effect of treatment on the glycogen readings.
 
 Look the R analyses:
 
 
  > Glycogen <- c(131,130,131,125,136,142,150,148,140,143,160,150,157,145,154,142,147,153,151,155,147,147,162,152,134,125,138,138,135,136,138,140,139,138,134,127)
  > Glycogen
   [1] 131 130 131 125 136 142 150 148 140 143 160 150 157 145 154 142 147 153 151
  [20] 155 147 147 162 152 134 125 138 138 135 136 138 140 139 138 134 127
  > Treatment <- factor(rep(c(1,2,3),c(12,12,12)))
  > Treatment
  [1] 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3
 Levels: 1 2 3
  > Rat <- factor(rep(rep(c(1,2),c(6,6)),3))
  > Rat
  [1] 1 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 1 1 2 2 2 2 2 2 1 1 1 1 1 1 2 2 2 2 2 2
 Levels: 1 2
  > Liver <- factor(rep(rep(c(1,2,3),c(2,2,2)),6))
  > Liver
  [1] 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3 1 1 2 2 3 3
 Levels: 1 2 3
 
  > ### Model made identical to the book
 
  > model <- aov(Glycogen~Treatment/Rat/Liver+Error(Treatment/Rat/Liver))
 
  > summary(model)
 
 Error: Treatment
   Df  Sum Sq Mean Sq
 Treatment  2 1557.56  778.78
 
 Error: Treatment:Rat
   Df Sum Sq Mean Sq
 Treatment:Rat  3 797.67  265.89
 
 Error: Treatment:Rat:Liver
                     Df Sum Sq Mean Sq
 Treatment:Rat:Liver 12  594.0    49.5
 
 Error: Within
           Df Sum Sq Mean Sq F value Pr(>F)
 Residuals 18 381.00   21.17
 
  > ### Model made by myself, I'm interested only in Treatment effects
 
  > model <- aov(Glycogen~Treatment+Error(Treatment/Rat/Liver))
 
  > summary(model)
 
 Error: Treatment
   Df  Sum Sq Mean Sq
 Treatment  2 1557.56  778.78
 
 Error: Treatment:Rat
           Df Sum Sq Mean Sq F value Pr(>F)
 Residuals  3 797.67  265.89
 
 Error: Treatment:Rat:Liver
           Df Sum Sq Mean Sq F value Pr(>F)
 Residuals 12  594.0    49.5
 
 Error: Within
           Df Sum Sq Mean Sq F value Pr(>F)
 Residuals 18 381.00   21.17
 
 
 Why does it not calculate the F and P for Treatment?

Would it be easier to do it this way?

> library(lme4)
Loading required package: Matrix
Loading required package: lattice
> (fm1 <- lmer(Glycogen ~ Treatment + (1|Treatment:Rat) +
+              (1|Treatment:Rat:Liver)))
Linear mixed-effects model fit by REML
Formula: Glycogen ~ Treatment + (1 | Treatment:Rat) + (1 | Treatment:Rat:Liver) 
      AIC      BIC    logLik MLdeviance REMLdeviance
 231.6213 241.1224 -109.8106    234.297     219.6213
Random effects:
 Groups              Name        Variance Std.Dev.
 Treatment:Rat:Liver (Intercept) 14.167   3.7639  
 Treatment:Rat       (Intercept) 36.065   6.0054  
 Residual                        21.167   4.6007  
# of obs: 36, groups: Treatment:Rat:Liver, 18; Treatment:Rat, 6

Fixed effects:
            Estimate Std. Error DF t value Pr(>|t|)
(Intercept) 140.5000     4.7072 33 29.8481   <2e-16
Treatment2   10.5000     6.6569 33  1.5773   0.1243
Treatment3   -5.0000     6.6569 33 -0.8012   0.4288
> anova(fm1)
Analysis of Variance Table
          Df  Sum Sq Mean Sq   Denom F value  Pr(>F)
Treatment  2 123.993  61.996  33.000   2.929 0.06746

The degrees of freedom for the denominator are an upper bound (in this
case a rather gross upper bound) so the p-value is a lower bound.  It
is on my To Do list to improve this but I have a rather long To
Do list.


Re: [R] The Perils of PowerPoint

2005-09-06 Thread John Sorkin
Please, do not blame PowerPoint for a poorly prepared or delivered talk.
Blame the person who developed the presentation and the person who
delivered the talk. PowerPoint is a tool. It can be used well or it can
be used poorly. If I may quote a once popular newspaper cartoon
character, Pogo: "We Have Met The Enemy and He Is Us."
John 

John Sorkin M.D., Ph.D.
Chief, Biostatistics and Informatics
Baltimore VA Medical Center GRECC and
University of Maryland School of Medicine Claude Pepper OAIC

University of Maryland School of Medicine
Division of Gerontology
Baltimore VA Medical Center
10 North Greene Street
GRECC (BT/18/GR)
Baltimore, MD 21201-1524

410-605-7119 
- NOTE NEW EMAIL ADDRESS:
[EMAIL PROTECTED]

 Mulholland, Tom [EMAIL PROTECTED] 09/06 2:26 AM 
For some reason (probably that our organisation has blocked the site) I
could not see the original articles that prompted the post. I however
immediately assumed that this was precipitated by Tufte and his comments
about PowerPoint (I recall seeing a good example of PowerPoint on his
site) http://www.edwardtufte.com/tufte/powerpoint 

When this first came up I recall some dispute about the comments
www.sociablemedia.com/articles_dispute.htm and that John Fox did
something http://ils.unc.edu/~jfox/powerpoint/introduction.html that I
enjoyed reading.

Other links that are lying on my computer are
In defense of PowerPoint
http://www.jnd.org/dn.mss/in_defense_of_powerp.html 
and Does PowerPoint make you stupid? at
http://www.presentations.com/presentations/delivery/article_display.jsp?vnu_content_id=1000482464

 
Tom

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] Behalf Of Tim Churches
 Sent: Saturday, 3 September 2005 10:08 AM
 To: [EMAIL PROTECTED] 
 Cc: Achim Zeileis; r-help@stat.math.ethz.ch 
 Subject: Re: [R] The Perils of PowerPoint
 
 
 (Ted Harding) wrote:
 
 By the way, the Washington Post/Minneapolis Star Tribune article is
 somewhat reminiscent of a short (15 min) broadcast on BBC Radio 4
 back on October 18 2004 15:45-16:00 called
 
   Microsoft Powerpoint and the Decline of Civilisation
 
 which explores similar themes and also frequently quotes Tufte.
 Unfortunately it lapsed for ever from Listen Again after the
 statutory week, so I can't point you to a replay. (However, I
 have carefully preserved the cassette recording I made).
   
 
 Try http://sooper.org/misc/powerpoint.mp3 (copyright law 
 notwithstanding...)
 
 Tim C
 





Re: [R] help.search problem

2005-09-06 Thread Uzuner, Tolga
Hi there,

I am using 2.0.1. However, I was not having this problem with this version of 
R when I first installed it and started using it.

Thanks for your suggestion, I tried it, but that doesn't work either:

> help.search("tps")
Error in rbind(...) : number of columns of matrices must match (see arg 8)
> utils::help.search("tps")
Error in rbind(...) : number of columns of matrices must match (see arg 8)
 

Traceback results below:

Convert Sweave Syntax, Sweave Driver Utilities, Find Objects by (Partial) Name,
   [... a long character matrix of help-page titles and aliases, one row per
    package, omitted ...]
   winDialog, winMenuAdd, flush.console))
2: do.call(rbind, dbMat[, 1])
1: utils::help.search("tps")
 








-Original Message-
From: Henrik Bengtsson [mailto:[EMAIL PROTECTED]
Sent: 06 September 2005 15:29
To: Uzuner, Tolga
Cc: 'r-help@stat.math.ethz.ch'
Subject: Re: [R] help.search problem


What version of R and what operating system?  What packages do you have 
loaded?

Try utils::help.search("tps"); does that work? Have you tried it in a 
fresh R session, i.e. starting with R --vanilla?

If you can't get it to work after this, report the above information 
plus what you get from traceback() after you get the error.

Cheers

Henrik

Uzuner, Tolga wrote:
 Dear Fellow R Users,
 
 I have recently come across a weird problem with help.search:
 
 
> help.search("tps")
 
 Error in rbind(...) : number of columns of 

Re: [R] fitting distributions with R

2005-09-06 Thread Ted Harding
On 06-Sep-05 Huntsinger, Reid wrote:
 The MLE of beta is the reciprocal of the sample mean, so you
 don't need an optimizer here.
 
 Reid Huntsinger

While that is true (and Nadja clearly knew this), nevertheless
one expects that using an optimiser should also work. Nadja's
observations need an explanation.

If things don't behave as expected, it is worthwhile embedding
debug prints so as to monitor what is going on internally (as
far as one can). In this case, if one modifies Nadja's ll
function to

ll <- function(beta){
  n <- 24
  x <- data2
  temp <- (-n*log(beta)+beta*sum(x))
  print(temp)
  temp
}

and re-runs 'mle', one sees that while there are some numerical
values in the output, there are many NaNs. Also, given the
warning message and the advice to look at warnings(), one
learns that "NaNs produced in: log(x)" repeatedly. This very
strongly suggests that attempts have been made to take logs
of negative numbers, which in turn suggests that the method
of computing the next approximation readily takes the value
of beta outside the valid range beta > 0.

Now is the time to look at ?mle, which says that the default
method is BFGS, for which see optim. Under ?optim we learn
that BFGS is a quasi-Newton method. Such methods work by
calculating a local tangent to the derivative function and
extrapolating this until it meets the beta-axis, and this can
easily take the estimate outside admissible ranges (try using
Newton-Raphson to solve sqrt(x) = 0).

However, a related method available for 'optim' is L-BFGS-B
which allows _box constraints_, that is each variable can be
given a lower and/or upper bound. The initial value must satisfy
the constraints. This can be set in a parameter for 'mle'.

So now you can try something like

  est <- mle(minuslog=ll, start=list(beta=0.1),
             method="L-BFGS-B", lower=10*(.Machine$double.eps))

and now the trace-prints show a series of numbers, with no NaNs,
so clearly we are getting somewhere (and have identified and
dealt with at least one aspect of the problem). However, observing
the prints, one sees that after an initial trend to convergence
there is a tendency to oscillate between values in the neighbourhood
of beta=360 and values in the neighbourhood of beta=800, finally
failing when two successive values 360.6573 are printed, which
in turn suggests that an attempt is made to compute a gradient
from identical points. So clearly there is something not right
about how the method works for this particular problem (which,
as a statistical estimation problem, could hardly be simpler!).

Now, ?optim has, at the end, a Note to the effect that the
default method (admittedly Nelder-Mead, which is not relevant
to the above) may not work well and suggests using 'optimize'
instead. So let's try 'optimize' anyway.

Now, with

  optimize(ll,lower=10*(.Machine$double.eps),upper=1e10)

we get a clean set of debug-prints, and convergence to

  beta = 5.881105e-05

with minimum 'll' equal to 254.6480.

Now compare with the known MLE which is

  beta = 1/mean(data2) = 6.766491e-05

giving

  ll(1/mean(data2)) = 254.4226, 

So clearly, now, using 'optimise' instead of 'optim' which
is what 'mle' uses, we are now in the right parish. However,
there is apparently no parameter to 'mle' which would enable
us to force it to use 'optimize' rather than 'optim'!
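
Putting the pieces together as a self-contained sketch (data2 as posted;
a narrower search interval than 1e10 lets 'optimize' get closer to the
analytic answer):

    data2 <- c(2743, 4678, 21427, 6194, 10286, 1505, 12811, 2161, 6853,
               2625, 14542, 694, 11491, 14924, 28640, 17097, 2136, 5308,
               3477, 91301, 11488, 3860, 64114, 14334)
    n  <- length(data2)                                  # 24
    ll <- function(beta) -n*log(beta) + beta*sum(data2)  # minus log-likelihood
    optimize(ll, lower = 10*.Machine$double.eps, upper = 1)$minimum
    1/mean(data2)                         # analytic MLE, about 6.766e-05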

This interesting saga, provoked by Nadja's query, now raises
an important general question: Given the simplicity of the
problem, why is the use of 'mle' so unexpectedly problematic?

While in the case of an exponential distribution (which has a
well-known analytical solution) one would not want to use 'mle'
to find the MLE (except as a test of 'mle'. perhaps), one can
easily think of other distributions, in form and behaviour very
similar to the negative exponential but without analytical solution,
for which use of 'mle' or some other optimisation routine would
be required. Such distributions could well give rise to similar
problems -- or worse: in Nadja's example, it was clear that it was
not working; in other cases, it might appear to give a result,
but the result might be very wrong and this would not be obvious.

Hmmm.

Ted.

 
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Nadja Riedwyl
 Sent: Tuesday, September 06, 2005 9:39 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] fitting distributions with R
 
 
 Dear all
 I've got the dataset
 data: 2743; 4678; 21427; 6194; 10286; 1505; 12811; 2161; 6853; 2625; 14542;
 694; 11491; 14924; 28640; 17097; 2136; 5308; 3477; 91301; 11488; 3860;
 64114; 14334
 I know from other testing that it should be possible to fit the data with
 the exponential distribution. I tried to get parameter estimates for the
 exponential distribution with R, but as the values of the parameter are
 very close to 0 I get into trouble. Do you know what I could do in order
 to get estimates? How do you choose the starting values? In my opinion it
 should be around 1/mean(data).
 
  

Re: [R] simple line plots?

2005-09-06 Thread Earl F. Glynn
Ashish Ranpura [EMAIL PROTECTED] wrote in message
news:[EMAIL PROTECTED]

 I still don't know how to draw
 each of the three line segments I need).

See ?segments
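
A tiny illustration of the call (coordinates purely hypothetical):

    plot(0:10, 0:10, type = "n")                # empty canvas
    segments(x0 = c(1, 4, 7), y0 = c(2, 5, 8),  # three starting points
             x1 = c(3, 6, 9), y1 = c(2, 5, 8))  # three end points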

Could you post a small toy problem so we can see exactly what segments
you're wanting to draw?

efg



Re: [R] help.search problem

2005-09-06 Thread Martin Maechler
 ToUz == Uzuner, Tolga [EMAIL PROTECTED]
 on Tue, 6 Sep 2005 16:35:53 +0100 writes:

ToUz Hi there,
ToUz I am using 2.0.1 . However, I was not having this problem with this 
version of R when I first installed it and started using it.

yes.  It only happens because of an ``incorrectly installed package'' 
installed somewhere in
your
.libPaths()

and you may have ``wrong-installed'' it only recently.

If you would upgrade to R 2.1.1, the problem would go away,
insofar as  help.start() would report about the package(s) with
invalid installation.

Otherwise (in R 2.0.1), it's somewhat tedious to find IIRC:
You may set
options(error = recover)
immediately before 
help.search("tps")

and then inspect the pretty large matrix with the invalid entry
leading to the error.
The matrix has one row per package, and so you can find the
invalid package.

Once you know that, remove the package, and try again.

[As hinted at, you should rather upgrade R]
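
A hedged sketch of that inspection (the package name is purely
hypothetical):

    options(error = recover)
    help.search("tps")      # fails; recover() then offers the call frames
    ## pick the frame around do.call("rbind", dbMat[, 1]) and inspect dbMat
    ## to see which package's row is malformed, then:
    remove.packages("somePkg", lib = .libPaths()[1])   # hypothetical name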

Martin Maechler


ToUz Thanks for your suggestion, I tried it, but that doesn't work either:

 > help.search("tps")
ToUz Error in rbind(...) : number of columns of matrices must match (see arg 8)
 > utils::help.search("tps")
ToUz Error in rbind(...) : number of columns of matrices must match (see arg 8)
 

ToUz Traceback results below:

ToUz Convert Sweave Syntax, Sweave Driver Utilities, Find Objects by 
(Partial) Name, 

.
.

ToUz winDialog, winMenuAdd, flush.console))
ToUz 2: do.call(rbind, dbMat[, 1])
ToUz 1: utils::help.search(tps)
 





ToUz -Original Message-
ToUz From: Henrik Bengtsson [mailto:[EMAIL PROTECTED]
ToUz Sent: 06 September 2005 15:29
ToUz To: Uzuner, Tolga
ToUz Cc: 'r-help@stat.math.ethz.ch'
ToUz Subject: Re: [R] help.search problem


ToUz What version of R and what operating system?  What packages do you 
have 
ToUz loaded?

ToUz Try utils::help.search(tps), does that work? Have you tried it in a 
ToUz fresh R session, i.e. start with R --vanilla.

ToUz If you can't get it to work after this, report the above information 
ToUz plus what you get from traceback() after you get the error.

ToUz Cheers

ToUz Henrik

ToUz Uzuner, Tolga wrote:
 Dear Fellow R Users,
 
 I have recently come across a weird problem with help.search:
 
 
 > help.search("tps")
 
 Error in rbind(...) : number of columns of matrices must match (see arg 8)
 
 
 This happens no matter what I search for...
 
 Any thoughts ?
 Thanks,
 Tolga



Re: [R] fitting distributions with R

2005-09-06 Thread Huntsinger, Reid
In optim you need to set ndeps (the delta x parameter controlling the
finite-difference approximation) to a sufficiently small value (or supply
the gradient yourself to avoid finite differences, which are messy on a
restricted parameter space.) Since you expect a minimum at about 6.7e-5 the
default ndeps=1e-3 is definitely too large.

> optim(par=0.1, fn=ll, method="BFGS", control=list(ndeps=1e-6))
$par
[1] 6.76644e-05

$value
[1] 254.4226

$counts
function gradient 
 136   18 

$convergence
[1] 0

$message
NULL

There were 50 or more warnings (use warnings() to see the first 50)
 

The warnings are "NaNs produced in: log(x)", which can be avoided by making
sure the function doesn't try to take the log of something <= 0, for example
change the last line to 

ifelse(beta > 0, -n*log(beta)+beta*sum(x), Inf)

and then optim is happy.

From mle() you can pass control to optim via ...
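
For example (a hedged sketch; ll and data2 as in the original post):

    library(stats4)
    est <- mle(minuslogl = ll, start = list(beta = 0.1),
               control = list(ndeps = 1e-6))   # forwarded to optim via ...
    summary(est)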

Reid Huntsinger

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
[EMAIL PROTECTED]
Sent: Tuesday, September 06, 2005 11:46 AM
To: r-help@stat.math.ethz.ch
Cc: Nadja Riedwyl
Subject: Re: [R] fitting distributions with R


On 06-Sep-05 Huntsinger, Reid wrote:
 The MLE of beta is the reciprocal of the sample mean, so you
 don't need an optimizer here.
 
 Reid Huntsinger

While that is true (and Naja clearly knew this), nevertheless
one expects that using an optimiser should also work. Nadja's
observations need an explanation.

If things don't behave as expected, it is worth while embedding
debug prints so as to monitor what is going on internally (as
fas as one can). In this case, if one modifies Nadja's ll
function to

ll <- function(beta){
  n <- 24
  x <- data2
  temp <- (-n*log(beta)+beta*sum(x))
  print(temp)
  temp
}

and re-runs 'mle', one sees that while there are some numerical
values in the output, there are many NaNs. Also, given the
warning message and the advice to look at warnings(), one
learns that "NaNs produced in: log(x)" repeatedly. This very
strongly suggests that attempts have been made to take logs
of negative numbers, which in turn suggests that the method
of computing the next approximation readily takes the value
of beta outside the valid range beta > 0.

Now is the time to look at ?mle, which says that the default
method is BFGS, for which see optim. Under ?optim we learn
that BFGS is a quasi-Newton method. Such methods work by
calculating a local tangent to the derivative function and
extrapolating this until it meets the beta-axis, and this can
easily take the estimate outside admissible ranges (try using
Newton-Raphson to solve sqrt(x) = 0).

However, a related method available for 'optim' is L-BFGS-B
which allows _box constraints_, that is each variable can be
given a lower and/or upper bound. The initial value must satisfy
the constraints. This can be set in a parameter for 'mle'.

So now you can try something like

  est <- mle(minuslog=ll, start=list(beta=0.1),
             method="L-BFGS-B", lower=10*(.Machine$double.eps))

and now the trace-prints show a series of numbers, with no NaNs,
so clearly we are getting somewhere (and have identified and
dealt with at least one aspect of the problem). However, observing
the prints, one sees that after an initial trend to convergence
there is a tendency to oscillate between values in the neighbourhood
of beta=360 and values in the neighbourhood of beta=800, finally
failing when two successive values 360.6573 are printed, which
in turn suggests that an attempt is made to compute a gradient
from identical points. So clearly there is something not right
about how the method works for this particular problem (which,
as a statistical estimation problem, could hardly be simpler!).

Now, ?optim has, at the end, a Note to the effect that the
default method (admittedly Nelder-Mead, which is not relevant
to the above) may not work well and suggests using 'optimize'
instead. So let's try 'optimize' anyway.

Now, with

  optimize(ll,lower=10*(.Machine$double.eps),upper=1e10)

we get a clean set of debug-prints, and convergence to

  beta = 5.881105e-05

with minimum 'll' equal to 254.6480.

Now compare with the known MLE which is

  beta = 1/mean(data2) = 6.766491e-05

giving

  ll(1/mean(data2)) = 254.4226.

So clearly, using 'optimize' instead of 'optim' (which is what
'mle' uses), we are now in the right parish. However, there is
apparently no parameter to 'mle' which would enable us to force
it to use 'optimize' rather than 'optim'!

This interesting saga, provoked by Nadja's query, now raises
an important general question: Given the simplicity of the
problem, why is the use of 'mle' so unexpectedly problematic?

While in the case of an exponential distribution (which has a
well-known analytical solution) one would not want to use 'mle'
to find the MLE (except as a test of 'mle', perhaps), one can
easily think of other distributions, in form and behaviour very
similar to the negative exponential but without analytical solution,
for which use 

[R] Revised shapefiles package

2005-09-06 Thread Ben Stabler
Now available on CRAN is a revised version of the shapefiles package for
reading and writing shapefiles in R.  New additions, courtesy of others,
include the ability to convert a simple R data frame of points,
polylines or polygons to a shp format list, which can then be written
out to a shapefile with write.shp.  There is also a function to convert
the read.shp shp format list to a simple data frame as well.  In
addition, there is a basic implementation of the Douglas-Peucker
polyline (and polygon) simplification routine.  Also, through the help
of others, there is now the ability to read and write polyline Z and
polygon Z format shapefiles.  The read.dbf and write.dbf functions in
the foreign library are now used for dbf I/O, which significantly
improves the speed.  There are probably a few bugs in there that I did
not catch, so please email me if you find them.  Thanks.
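
A quick round trip with the functions named above (a sketch only; the
file name is hypothetical):

library(shapefiles)
shp <- read.shp("roads.shp")       # read the .shp geometry as a shp-format list
str(shp, max.level = 1)            # inspect the list
write.shp(shp, "roads-copy.shp")   # write the geometry back out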

Ben Stabler
Project Manager
PTV America, Inc.
1128 NE 2nd St, Suite 204
Corvallis, OR 97330
541-754-6836 x205
541-754-6837 fax
www.ptvamerica.com

 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] fitting distributions with R

2005-09-06 Thread Thomas Lumley
On Tue, 6 Sep 2005, [EMAIL PROTECTED] wrote:


 However, a related method available for 'optim' is L-BFGS-B
 which allows _box constraints_, that is each variable can be
 given a lower and/or upper bound. The initial value must satisfy
 the constraints. This can be set in a parameter for 'mle'.

These box constraints are really designed for situations where the 
boundary is a valid parameter value (so you are really doing constrained 
estimation) rather than situations where the boundary is an artifact of 
parameterisation.

 This interesting saga, provoked by Nadja's query, now raises
 an important general question: Given the simplicity of the
 problem, why is the use of 'mle' so unexpectedly problematic?


The problem is simple only in that it is one-dimensional, and optim() 
doesn't take advantage of this.  It is poorly scaled: since the starting 
value is 0.1, the maximum is at about 6.8e-05, and there is a singularity 
at 0, it would be helpful to specify the 'parscale' control option to optim.

The other problem is that we are using finite-difference approximations to 
the derivatives. These are bound to perform badly near the singularity at 
zero, especially in a badly scaled problem.  There is a bug in that 
L-BFGS-B doesn't respect the bounds in computing finite-differences, but 
this is not going to be easy to fix (there was recent discussion on 
r-devel about this).

If I remove the singularity by defining

 lll <- function(beta) if (beta <= 0) 1e6 else ll(beta)

and specify parscale, I get
 est

Call:
mle(minuslogl = lll, start = list(beta = 0.01), control = list(parscale = 
1e-05))

Coefficients:
 beta
6.767725e-05

(Any parscale below 0.01 will give basically the same answer).


Incidentally, the trace output may look as if it is oscillating, but that 
is partly an artifact of the line search that BFGS uses.  The last few 
printed loglikelihoods are
[1] 254.4226
[1] 254.4226
[1] 543.2361
[1] 542.5717


Finally, as I noted earlier, this isn't really a constrained estimation 
problem, it is a problem of a function defined on an open interval with a 
singularity at one end.  In this case (in contrast to real constrained 
estimation problems) it might well be sensible to reparametrize.  mle() 
then works with no problems.
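
A sketch of that reparametrization, reusing the ll() from earlier in
this thread: estimate on the log scale, where the constraint disappears.

ll.log <- function(logbeta) ll(exp(logbeta))  # beta = exp(logbeta) > 0 always
est <- mle(minuslogl = ll.log, start = list(logbeta = log(0.1)))
exp(coef(est))                                # back-transform to the beta scale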

-thomas

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] (no subject)

2005-09-06 Thread Nadja Riedwyl
My problem actually arose when fitting the data to the Weibull
distribution, where it is hard to see if the proposed parameter
estimates make sense.

data1: 2743;4678;21427;6194;10286;1505;12811;2161;6853;2625;14542;694;11491;
       14924;28640;17097;2136;5308;3477;91301;11488;3860;64114;14334

How am I supposed to know what starting values I have to take?
I get different parameter estimates depending on the starting values
I choose; this shouldn't be, no? How am I supposed to know which
estimates are the right ones?


 library(MASS)
 fitdistr(data2, densfun=dweibull, start=list(scale=2, shape=1))
      scale           shape
   1.378874e+04   8.788857e-01
  (3.842224e+03) (1.312395e-01)

 fitdistr(data2, densfun=dweibull, start=list(scale=6, shape=2))
      scale        shape
   7.81875000   0.12500000
  (4.18668905) (0.01803669)

# If I use the lognormal distribution instead, I get the same estimates
# no matter what starting values I choose.

# With mle() I have so far also got different values depending on the
# starting values; I use trial and error to find appropriate starting
# values, but I am sure there is a clear way how to do it, no?
# Shouldn't I actually get more or less the same parameter estimates
# with both methods?
 library(stats4)
 ll <- function(alfa, beta)
+ {n <- 24
+ x <- data2
+ -n*log(alfa)-n*log(beta)+alfa*sum(x^beta)-(beta-1)*sum(log(x))}
 est <- mle(minuslogl=ll, start=list(alfa=10, beta=1))
There were 50 or more warnings (use warnings() to see the first 50)
 summary(est)
Maximum likelihood estimation

Call:
mle(minuslogl = ll, start = list(alfa = 10, beta = 1))

Coefficients:
Estimate   Std. Error
alfa 0.002530163 0.0006828505
beta 0.641873010 0.0333072184

-2 log L: 511.6957

 library(stats4)
 ll <- function(alfa, beta)
+ {n <- 24
+ x <- data2
+ -n*log(alfa)-n*log(beta)+alfa*sum(x^beta)-(beta-1)*sum(log(x))}
 est <- mle(minuslogl=ll, start=list(alfa=5, beta=17))
There were 50 or more warnings (use warnings() to see the first 50)
 summary(est)
Maximum likelihood estimation

Call:
mle(minuslogl = ll, start = list(alfa = 5, beta = 17))

Coefficients:
Estimate  Std. Error
alfa 0.002143305 0.000378592
beta 0.660359789 0.026433665

-2 log L: 511.1296


Thank you very much for all your comments; they really help me to get further!
Nadja

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] (no subject)

2005-09-06 Thread Christian Hennig
Dear Nadja,

if the loglikelihood function has various local maxima, the result
may depend on the starting values. This is not unusual. The best estimator
is the one with the maximum loglikelihood, i.e., the smallest value of
-2 log L in the mle output. (Unfortunately, it seems that the
loglikelihood value is not accessible using fitdistr - you would have to
implement the loglikelihood function on your own.)

You could use a lot of starting values, for example generated by some
random mechanism, and take the best estimator.
If you want a single good starting value, you could try to fit a Weibull
distribution by eye and trial-and-error to the histogram and use the
corresponding parameters.
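
A sketch of the many-starting-values idea, with the Weibull negative
loglikelihood written out by hand as suggested (data2 as elsewhere in
this thread):

negll <- function(p) {
  if (any(p <= 0)) return(Inf)  # keep shape and scale positive
  -sum(dweibull(data2, shape = p[1], scale = p[2], log = TRUE))
}
set.seed(1)
starts <- cbind(runif(50, 0.2, 3), runif(50, 1e3, 5e4))  # random (shape, scale) pairs
fits <- lapply(1:nrow(starts), function(i) optim(starts[i, ], negll))
best <- fits[[which.min(sapply(fits, function(f) f$value))]]
best$par  # the estimate with the smallest -log L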

Best,
Christian

PS: Please use informative subject lines.

On Tue, 6 Sep 2005, Nadja Riedwyl wrote:

 [...]


*** --- ***
Christian Hennig
University College London, Department of Statistical Science
Gower St., London WC1E 6BT, phone +44 207 679 1698
[EMAIL PROTECTED], www.homepages.ucl.ac.uk/~ucakche

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] (no subject)

2005-09-06 Thread Berton Gunter


 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Nadja Riedwyl
 Sent: Tuesday, September 06, 2005 10:22 AM
 To: r-help@stat.math.ethz.ch
 Subject: [R] (no subject)
 
 my problem actually arose when fitting the data to the Weibull
 distribution, where it is hard to see if the proposed parameter
 estimates make sense.
 
 [...]
 
 how am I supposed to know what starting values I have to take?
 I get different parameter estimates depending on the starting
 values I choose; this shouldn't be, no? how am I supposed to
 know which estimates are the right ones?
 

This is a general issue with all (gradient-based) optimization methods when
the response to be optimized has many local optima and/or is poorly
conditioned. As Doug Bates and others have often remarked, finding good
starting values is an art that is often problem-specific. Ditto for good
parameterizations. There is no universal magic answer.

In many respects, this is the monster hiding in the closet of many of the
complex modeling methods being proposed in statistics and other disciplines:
when the response function to be optimized is a nonlinear function of many
parameters, convergence may be difficult to achieve. Presumably stochastic
optimization methods like simulated annealing and MCMC are less susceptible
to such problems, but they pay a large efficiency price to be so.
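
To make the trade-off concrete (a sketch, not part of the original
message; data2 and the hand-written Weibull negative loglikelihood are
as elsewhere in this thread): optim's "SANN" method is a
simulated-annealing variant that uses no gradients, at the price of
many function evaluations.

negll <- function(p) {
  if (any(p <= 0)) return(Inf)
  -sum(dweibull(data2, shape = p[1], scale = p[2], log = TRUE))
}
optim(c(1, 1e4), negll, method = "SANN",
      control = list(maxit = 20000))$par  # gradient-free, but 20000 evaluations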

Cheers,

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] The Perils of PowerPoint

2005-09-06 Thread Mike Waters
 And thus to that 'New Age' management role, that of the Professional
PowerPoint Ranger: he (invariably he) who culls the fruits of the labours
of others to present in ever more slick PowerPoint compendia, whilst never
sullying his hands with 'real' work.

8¬)

Mike

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of bogdan romocea
 Sent: 06 September 2005 18:43
 To: R-help@stat.math.ethz.ch
 Subject: Re: [R] The Perils of PowerPoint
 
 I don't understand why there's so much discussion on 
 PowerPoint. IMHO, that can only obscure the real thing:
   - The Perils of Miscommunication
   - The Perils of Not Taking Responsibility (if 
 PowerPoint is to blame for X, then who's to blame for 
 choosing and using PowerPoint in the first place?)
   - The Perils of Being an Idiot
   - and so on.
 (I'm in grave danger here, and also responsible for using R.)
 
 
  [...]


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] The Perils of PowerPoint

2005-09-06 Thread Anon.
Mike Waters wrote:

 And thus to that 'New Age' management role, that of the Professional
PowerPoint Ranger: he (invariably he) who culls the fruits of the labours
of others to present in ever more slick PowerPoint compendia, whilst never
sullying his hands with 'real' work.

  

In academia they're known as professors.

Bob

-- 
Bob O'Hara
Department of Mathematics and Statistics
P.O. Box 68 (Gustaf Hällströmin katu 2b)
FIN-00014 University of Helsinki
Finland

Telephone: +358-9-191 51479
Mobile: +358 50 599 0540
Fax:  +358-9-191 51400
WWW:  http://www.RNI.Helsinki.FI/~boh/
Journal of Negative Results - EEB: www.jnr-eeb.org

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Predicting responses using ace

2005-09-06 Thread Luis Pineda
Hello everybody,

I'm a new user of R and I'm working right now with the ACE function
from the acepack library. I Have a question: Is there a way to predict
new responses using ACE? What I mean is doing something similar to the
following code that uses PPR (Projection Pursuit Regression):

library(MASS)
x <- runif(20, 0, 1)
xnew <- runif(2000, 0, 1)
y <- sin(x)
a <- ppr(x, y, nterms = 2)
ynew <- predict(a, xnew)

Any help would be much appretiated, Thanks in advance,
Luis Pineda

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Spacing and margins in plot

2005-09-06 Thread Raubertas, Richard
You can do this with the 'mgp' argument to par()  (see ?par).
For example, I find par(mgp=c(2, 0.75, 0)) (which puts the
axis label on line 2 and the axis values on line 0.75) nicely
tightens up the space around a plot.
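
A quick sketch of the effect, side by side:

par(mfrow = c(1, 2))
plot(1:10, main = "default")       # default mgp = c(3, 1, 0)
par(mgp = c(2, 0.75, 0))           # titles on line 2, values on line 0.75
plot(1:10, main = "tightened mgp")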

Rich Raubertas

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Earl F. Glynn
 Sent: Thursday, September 01, 2005 11:14 AM
 To: r-help@stat.math.ethz.ch
 Subject: Re: [R] Spacing and margins in plot
 
 
 Chris Wallace [EMAIL PROTECTED] wrote in message
 news:[EMAIL PROTECTED]
 
  how about
  plot(..., xlab="")
  title(xlab="label text", line=2)
 
 Yes, Chris, I like your idea, especially when I can fix 
 both X and Y axes
 at the same time:
 
   plot(0, xlab="", ylab="")
   title(xlab="X axis", ylab="Y axis", line=2)
 
 I'd prefer a way to set the axis title line at the same time 
 I change the
 mar parameters, but it's not a big deal.
 
 Thanks.
 efg
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] R Cocoa GUI assumes Japanese locale

2005-09-06 Thread Jacob Etches
After installing the latest binary for OS X, the R Cocoa GUI provides 
output in the console in Japanese only.  I would prefer the output to 
be in English, but cannot figure out how to change the setting.  If I 
start R in a terminal window, output is in English.

Version is R Cocoa GUI 1.12 (1622), S.M.Iacus  S.Urbanek.

Thanks for any help.

Jacob Etches

Doctoral candidate, Epidemiology
Department of Public Health Sciences
University of Toronto Faculty of Medicine

Research Associate
Institute for Work & Health
800-481 University Ave.
Toronto, ON
M5G 2E9
416.927.2027x2290
www.iwh.on.ca

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] LV path analysis with PLS (was Re: PLSR: model notation and reliabilities)

2005-09-06 Thread I.Ioannou
On Wed, Aug 31, 2005 at 12:31:29PM +0300, I.Ioannou wrote:
 On Mon, Aug 29, 2005 at 08:08:53AM +0200, Bjørn-Helge Mevik wrote:
  
  It seems to me that what you are looking for, is some sort of
  structural equation models (à la LISREL).  The pls package implements
--snipped--
 and the explained variance seem to be ok, but I'm afraid that 
 this is not my case. I thought that plsr should be used to perform 
--snipped--

Well, I should have asked: is there a way to use plsr to perform
(or another R package that implements) latent-variable path analysis
with partial least-squares estimation, i.e. the algorithm that was
implemented in the old DOS lvpls program?
(http://kiptron.psyc.virginia.edu/Programs/lvplsmanual.pdf)

TIA
Ioannis Ioannou

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] The Perils of PowerPoint

2005-09-06 Thread Mulholland, Tom
I incorrectly relied upon my memory 

...
 and that 
 John Fox did something 
 http://ils.unc.edu/~jfox/powerpoint/introduction.html that I 
 enjoyed reading.

The work is that of Jackson Fox

Tom

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Predicting responses using ace

2005-09-06 Thread Frank E Harrell Jr
Luis Pineda wrote:
 Hello everybody,
 
 I'm a new user of R and I'm working right now with the ACE function
 from the acepack library. I Have a question: Is there a way to predict
 new responses using ACE? What I mean is doing something similar to the
 following code that uses PPR (Projection Pursuit Regression):
 
 [...]
 

Look at the areg.boot function in the Hmisc package, and its associated 
predict method.
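
A heavily hedged sketch of what that might look like (the exact call
pattern here is a guess on my part; check ?areg.boot before relying
on it):

library(Hmisc)
set.seed(2)
x <- runif(200); y <- sin(2 * pi * x) + rnorm(200, sd = 0.1)
f <- areg.boot(y ~ x, B = 30)             # additive fit with bootstrap resamples
predict(f, data.frame(x = c(0.1, 0.5)))   # the predict method mentioned above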

-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] convergence for proportional odds model

2005-09-06 Thread David Duffy
liu abc [EMAIL PROTECTED] wrote:

 I am using a proportional odds model for ordinal responses in
 dose-response experiments. For some small data sets, SAS can
 successfully provide estimates of the parameters, but the built-in
 function polr() in R fails. Could you tell me what change to make so
 that I can use polr() to obtain the estimates? Or can anyone give me a
 hint about the conditions for the existence of the MLE in such a
 simple case? By the way, how can I make the variable resp, which must
 be an ordered factor, into one? Thanks a lot.

 Guohui

 The following is one example I used both in SAS and R.

 in R:

 library(MASS)
 dose.resp = matrix( c(1,1,1,1,2,2,2,3,3,3, 2,2,3,3,4,4,5,4,5,5), ncol=2)
 colnames(dose.resp) = c("resp", "dose")
 polr( factor(resp, ordered=TRUE) ~ dose, data=dose.resp)
 #Error in optim(start, fmin, gmin, method = "BFGS", hessian = Hess, ...) :
 # initial value in 'vmmin' is not finite

It seems to be the starting values.  Using lrm() from the Design package gave

 dose.resp <- as.data.frame(dose.resp)
 dose.resp$resp <- factor(dose.resp$resp)
 library(Design)
 lrm(resp ~ dose, data=dose.resp)

   Obs  Max Deriv  Model L.R.  d.f.  P      C      Dxy
   10   6e-06      11.43       1     7e-04  0.909  0.818
   Gamma  Tau-a  R2     Brier
   0.931  0.6    0.768  0.014

        Coef     S.E.   Wald Z  P
 y=2  -10.904   5.137   -2.12   0.0338
 y=3  -14.336   6.287   -2.28   0.0226
 dose    3.160   1.399    2.26   0.0239

and giving polr starting values:


 print(m1 <- polr(resp ~ dose, data=dose.resp, start=c(-1, -4, 3)))
Call:
polr(formula = resp ~ dose, data = dose.resp, start = c(-1, -4,
3))

Coefficients:
dose
3.158911

Intercepts:
 1|2  2|3
10.90172 14.33296

Residual Deviance: 10.34367
AIC: 16.34367

Even then, summary(m1) gives the same problem (as it refits).  There is
separation in the data, of course, but I presume the ordinality gives
some extra information.

David Duffy.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Sorting Text Frames

2005-09-06 Thread Murray Jorgensen
[Using 2.0.1 under Windows XP]
There are a few pages on the internet that list equivalents of
"thank you" in many languages. I downloaded one from a Google search
and I thought that it would be interesting and a good R exercise to
sort the file into the order of the expressions, rather than the languages.

I tidied up the web page and got it into the format that it was nearly
in: Language Name in columns 1-43, the expression in the remaining
columns.

Then I read it in:

  thanks <- read.fwf("C:\\Files\\Reading\\thankyou.txt", c(43,37))
  thanks[1:4,]
V1V2
1 Abenaki (Maine USA, Montreal Canada)Wliwni ni
2 Abenaki (Maine USA, Montreal Canada)   Wliwni
3 Abenaki (Maine USA, Montreal Canada)   Oliwni
4 Achí (Baja Verapaz Guatemala)   Mantiox chawe

  dim(thanks)
[1] 1254    2

Now I tried sorting the frame into the order of the second column:

tord <- order(thanks$V2)
sink("C:\\Files\\Reading\\thanks.txt")
thanks[tord[1:74],]
sink()

This gives more or less the expected output, the file thanks.txt beginning

                                  V1           V2
145  Cahuila (United States)           '\301cha-ma
862  Paipai (Mexico, USA)              'Ara'ya:ikm
863  Paipai (Mexico, USA)              'Ara'yai:km
864  Paipai (Mexico, USA)               'Ara'ye:km
311  Eyak (Alaska)                      'Awa'ahdah

[you may get a bit of wrapping there!]

However I don't really want just 74 lines, I would like the whole file. But
if I get rid of the [1:74] or replace 74 with any larger number I get 
output
like this, with no second column:

   V1
145  Cahuila (United States)
862  Paipai (Mexico, USA)
863  Paipai (Mexico, USA)
864  Paipai (Mexico, USA)
311  Eyak (Alaska)

Does anyone know what is going on?
Tusen tak in advance, in fact 1254 tak in advance!

Murray Jorgensen
-- 
Dr Murray Jorgensen  http://www.stats.waikato.ac.nz/Staff/maj.html
Department of Statistics, University of Waikato, Hamilton, New Zealand
Email: [EMAIL PROTECTED]Fax 7 838 4155
Phone  +64 7 838 4773 wk Home +64 7 825 0441   Mobile 021 1395 862

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Lattice key seems to ignore the key list

2005-09-06 Thread Patrick Connolly
I've never had this problem before and can't see what could be
different from other times I've used keys with lattice.



It appears that auto.key is being taken as TRUE when I specify a key
list.  The list I specify seems to be ignored.

Where can I place a browser to figure out what is going on?

Having made a list key.symbol from trellis.par.get, and specified a
scales list and a between list, and a formula object (form), I use
xyplot like this:

xyplot(form, data = xx, groups = Entry, layout = c(8, 8, 1),
       par.strip.text = list(cex = .65), between = between,
       scales = scales,
       panel = function(x, y, ...)
           panel.superpose(x, y, ...),
       key = list(points = Rows(key.symbol, 1:4),
                  text = list(levels(xx$Entry),
                              space = "right", columns = 1))
       )

What is implied in there that would set auto.key to TRUE?  The space
and columns part of the list seems to be ignored and the autokey
values substituted.  

Ideas, please.

Thanks.

-- 
Patrick Connolly
HortResearch
Mt Albert
Auckland
New Zealand 
Ph: +64-9 815 4200 x 7188
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~
I have the world`s largest collection of seashells. I keep it on all
the beaches of the world ... Perhaps you`ve seen it.  ---Steven Wright 
~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~.~

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] (no subject)

2005-09-06 Thread Salang Pan
hi,

Is it possible to draw a text string in a rectangle according to the
width of the rectangle? That is, can the font size of the string be
adjusted to the width of the rectangle?
How should the cex parameter of the text() function be set?

text(x, y = NULL, labels = seq(along = x), adj = NULL,
     pos = NULL, offset = 0.5, vfont = NULL,
     cex = 1, col = NULL, font = NULL, xpd = NULL, ...)

   thanks!
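
One way to get that behaviour (a sketch; fit.text is a made-up helper):
strwidth() reports a string's width in user coordinates, and the width
scales roughly linearly with cex, so divide the box width by the width
at cex = 1.

fit.text <- function(x, y, label, box.width) {
  cex <- box.width / strwidth(label, cex = 1)  # width is ~linear in cex
  text(x, y, label, cex = cex)
}
plot(0:10, 0:10, type = "n")
rect(2, 4, 8, 6)
fit.text(5, 5, "Shanghai Center for Bioinformatics", box.width = 6)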





 
= 
 Salang
 [EMAIL PROTECTED]

 Tel: 021-64363311-123
 Shanghai Center for Bioinformatics Technology
 Floor 12th,100# QinZhou Road
 Shanghai,China,200235

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] Leading in line-wrapped Lattice value and panel labels

2005-09-06 Thread Tim Churches
Version 2.1.1
Platforms: all

What is the trellis parameter (or is there a trellis parameter) to set the
leading (the gap between lines) when long axis value labels or panel header
labels wrap over more than one line? By default there is a huge gap between
lines, and much looking and experimentation has not revealed a suitable
parameter to adjust it.

Tim C

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html