Re: [R] (no subject)

2006-01-12 Thread Christoph Lehmann
type ?par and then have a look at:
cex.lab, cex.main
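
For example (a sketch; cex.axis, which controls the tick labels the question
also asks about, is an addition to the two settings named above):

plot(1:10, xlab = "x", ylab = "y", main = "Title",
     cex.lab  = 1.5,   # size of the x/y axis titles (xlab, ylab)
     cex.axis = 1.2,   # size of the tick (axis) labels
     cex.main = 2)     # size of the main title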

cheers
christoph
[EMAIL PROTECTED] wrote:
 Dear ladies and gentlemen!
 When I use the plot function, how can I change the size of the titles for the x
 and y axes (xlab, ylab) and the size of the axis labels?
 
 Thank you very much.
 
 With best regards
 
 Claudia
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Strange behaviour of load

2006-01-12 Thread Prof Brian Ripley
On Wed, 11 Jan 2006, giovanni parrinello wrote:

 Dear All,
 sometimes when I load an .RData file I get these messages:

 ###
 Code:

 load('bladder1.RData')
 Carico il pacchetto richiesto: rpart   (rough translation: Loading required
 package: ...)
 Carico il pacchetto richiesto: MASS
 Carico il pacchetto richiesto: mlbench
 Carico il pacchetto richiesto: survival
 Carico il pacchetto richiesto: splines

 Carico il pacchetto richiesto: 'survival'


The following object(s) are masked from package:Hmisc :

 untangle.specials

 Carico il pacchetto richiesto: class
 Carico il pacchetto richiesto: nnet
 #

 So  I have many unrequired packages loaded.
 Any idea?

They are required!  My guess is that you have object(s) saved with their 
environment set to the namespace of some package, and loading that namespace is 
pulling these in.  The only CRAN package which requires mlbench appears to 
be ipred, and that requires all of those except splines, which is required by 
survival.

So I believe you have been using ipred and have saved a reference to its 
namespace.
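
To illustrate the mechanism (a minimal sketch, not the poster's data; MASS
stands in for ipred, and the exact messages depend on the package and the R
version):

g <- function(x) x
environment(g) <- asNamespace("MASS")  # force a package namespace as environment
save(g, file = "demo.RData")

## In a fresh R session:
load("demo.RData")
## restoring g needs the MASS namespace, so it (and, depending on the package,
## its dependencies) is loaded again -- messages such as "Loading required
## package: ..." / "Carico il pacchetto richiesto: ..." come from this,
## not from load() itself.
environmentName(environment(g))   # "MASS"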
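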

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] dataframes with only one variable

2006-01-12 Thread Erich Neuwirth
Subsetting from a dataframe with only one variable
returns a vector, not a dataframe.
This seems somewhat inconsistent.
Wouldn't it be better if subsetting would respect
the structure completely?


v1 <- 1:4
v2 <- 4:1
df1 <- data.frame(v1)
df2 <- data.frame(v1,v2)
sel1 <- c(TRUE,TRUE,TRUE,TRUE)

> df1[sel1,]
[1] 1 2 3 4
> df2[sel1,]
  v1 v2
1  1  4
2  2  3
3  3  2
4  4  1

-- 
Erich Neuwirth
Institute for Scientific Computing and
Didactic Center for Computer Science
University of Vienna
phone: +43-1-4277-39464  fax: +43-1-4277-39459

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] dataframes with only one variable

2006-01-12 Thread TEMPL Matthias
 Subsetting from a dataframe with only one variable
 returns a vector, not a dataframe.
 This seems somewhat inconsistent.
 Wouldn't it be better if subsetting would respect
 the structure completely?
 
 
 v1 <- 1:4
 v2 <- 4:1
 df1 <- data.frame(v1)
 df2 <- data.frame(v1,v2)
 sel1 <- c(TRUE,TRUE,TRUE,TRUE)
 
 > df1[sel1,]


df1[sel1, , drop=FALSE]

Should do what you want.
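
A quick check (sketch) that the result keeps its data-frame structure:

str(df1[sel1, , drop = FALSE])
## 'data.frame':   4 obs. of  1 variable:
##  $ v1: int  1 2 3 4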

Best,
Matthias

 [1] 1 2 3 4
 > df2[sel1,]
   v1 v2
 1  1  4
 2  2  3
 3  3  2
 4  4  1
 
 -- 
 Erich Neuwirth
 Institute for Scientific Computing and
 Didactic Center for Computer Science
 University of Vienna
 phone: +43-1-4277-39464  fax: +43-1-4277-39459
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] F-test degree of freedoms in lme4 ?

2006-01-12 Thread Christoph Buser
Dear Wilhelm

There is an R News article that includes a section on fitting linear mixed
models and describes the problem with the degrees of freedom.
You can have a look at the second link, too, which discusses the
problem as well.

http://cran.r-project.org/doc/Rnews/Rnews_2005-1.pdf
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/67414.html
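
Not from the original reply, but for orientation: a hedged sketch of one common
way to mirror the aov() error strata below in lmer (untested here; note that
lmer deliberately reports no denominator degrees of freedom, which is exactly
the issue the links above discuss):

library(lme4)
hf2.lmer <- lmer(ampl ~ gapf * bl +
                 (1 | VP) + (1 | VP:bl) + (1 | VP:gapf) + (1 | VP:bl:gapf),
                 data = hframe2)
anova(hf2.lmer)   # F statistics without denominator df / p-values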

Regards,

Christoph Buser

--
Christoph Buser [EMAIL PROTECTED]
Seminar fuer Statistik, LEO C13
ETH (Federal Inst. Technology)  8092 Zurich  SWITZERLAND
phone: x-41-44-632-4673 fax: 632-1228
http://stat.ethz.ch/~buser/
--


Wilhelm B. Kloke writes:
  I have a problem moving from a multistratum aov analysis to lmer.
  
  My dataset has observations of ampl at 4 levels of gapf and 2 levels of bl
  on 6 subjects (levels of VP), with 2 replicates (wg) each, and is balanced.
  
  Here is the summary of this set with aov:
  > summary(aov(ampl~gapf*bl+Error(VP/(bl*gapf)),hframe2))
  
  Error: VP
            Df Sum Sq Mean Sq F value Pr(>F)
  Residuals  5    531     106
  
  Error: VP:bl
            Df Sum Sq Mean Sq F value Pr(>F)
  bl         1   1700    1700    37.8 0.0017 **
  Residuals  5    225      45
  ---
  Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
  
  Error: VP:gapf
            Df Sum Sq Mean Sq F value  Pr(>F)
  gapf       3    933     311    24.2 5.3e-06 ***
  Residuals 15    193      13
  ---
  Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
  
  Error: VP:bl:gapf
            Df Sum Sq Mean Sq F value Pr(>F)
  gapf:bl    3   93.9    31.3    3.68  0.036 *
  Residuals 15  127.6     8.5
  ---
  Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
  
  Error: Within
            Df Sum Sq Mean Sq F value Pr(>F)
  Residuals 48    318       7
  
  This is mostly identical to the analysis by BMDP 4V, except for the
  Greenhouse-Geisser epsilons, which are not estimated this way.
  
  I have to analyse a similar dataset, which is not balanced. So I need to
  change the method. Following Pinheiro/Bates p.90f, I tried
  > hf2.lme <- lme(ampl~gapf*bl, hframe2,
  +                random=list(VP=pdDiag(~gapf*bl), bl=pdDiag(~gapf)))
  and some variations of this to get the same F tests generated. At least,
  I got the F-test on error stratum VP:bl this way, but not the other two:
  > anova(hf2.lme)
              numDF denDF F-value p-value
  (Intercept)     1    78  764.86  <.0001
  gapf            3    78   17.68  <.0001
  bl              1     5   37.81  0.0017
  gapf:bl         3    78    2.99  0.0362
  
  Then I tried to move to lmer.
  I tried to find something equivalent to the above lme call, with no
  success at all.
  
  In case the problem is in the data, here is the set:
  
  VP ampl wg bl gapf
  1 WJ 22 w s 144
  2 CR 23 w s 144
  3 MZ 25 w s 144
  4 MP 34 w s 144
  5 HJ 36 w s 144
  6 SJ 26 w s 144
  7 WJ 34 w s 80
  8 CR 31 w s 80
  9 MZ 33 w s 80
  10 MP 36 w s 80
  11 HJ 37 w s 80
  12 SJ 32 w s 80
  13 WJ 34 w s 48
  14 CR 37 w s 48
  15 MZ 38 w s 48
  16 MP 38 w s 48
  17 HJ 40 w s 48
  18 SJ 32 w s 48
  19 WJ 36 w s 16
  20 CR 40 w s 16
  21 MZ 39 w s 16
  22 MP 40 w s 16
  23 HJ 40 w s 16
  24 SJ 38 w s 16
  25 WJ 16 g s 144
  26 CR 28 g s 144
  27 MZ 18 g s 144
  28 MP 33 g s 144
  29 HJ 37 g s 144
  30 SJ 28 g s 144
  31 WJ 28 g s 80
  32 CR 33 g s 80
  33 MZ 24 g s 80
  34 MP 34 g s 80
  35 HJ 36 g s 80
  36 SJ 30 g s 80
  37 WJ 32 g s 48
  38 CR 38 g s 48
  39 MZ 34 g s 48
  40 MP 37 g s 48
  41 HJ 39 g s 48
  42 SJ 30 g s 48
  43 WJ 36 g s 16
  44 CR 34 g s 16
  45 MZ 36 g s 16
  46 MP 40 g s 16
  47 HJ 40 g s 16
  48 SJ 36 g s 16
  49 WJ 22 w b 144
  50 CR 24 w b 144
  51 MZ 20 w b 144
  52 MP 26 w b 144
  53 HJ 22 w b 144
  54 SJ 16 w b 144
  55 WJ 26 w b 80
  56 CR 24 w b 80
  57 MZ 26 w b 80
  58 MP 27 w b 80
  59 HJ 26 w b 80
  60 SJ 18 w b 80
  61 WJ 28 w b 48
  62 CR 23 w b 48
  63 MZ 28 w b 48
  64 MP 29 w b 48
  65 HJ 27 w b 48
  66 SJ 24 w b 48
  67 WJ 32 w b 16
  68 CR 26 w b 16
  69 MZ 30 w b 16
  70 MP 28 w b 16
  71 HJ 30 w b 16
  72 SJ 22 w b 16
  73 WJ 22 g b 144
  74 CR 18 g b 144
  75 MZ 18 g b 144
  76 MP 26 g b 144
  77 HJ 22 g b 144
  78 SJ 18 g b 144
  79 WJ 24 g b 80
  80 CR 26 g b 80
  81 MZ 30 g b 80
  82 MP 26 g b 80
  83 HJ 26 g b 80
  84 SJ 24 g b 80
  85 WJ 28 g b 48
  86 CR 28 g b 48
  87 MZ 27 g b 48
  88 MP 30 g b 48
  89 HJ 26 g b 48
  90 SJ 16 g b 48
  91 WJ 28 g b 16
  92 CR 19 g b 16
  93 MZ 24 g b 16
  94 MP 32 g b 16
  95 HJ 30 g b 16
  96 SJ 22 g b 16
  -- 
  Dipl.-Math. Wilhelm Bernhard Kloke
  Institut fuer Arbeitsphysiologie an der Universitaet Dortmund
  Ardeystrasse 67, D-44139 Dortmund, Tel. 0231-1084-257
  
  __
  R-help@stat.math.ethz.ch mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide! 

Re: [R] dataframes with only one variable

2006-01-12 Thread Prof Brian Ripley
On Wed, 11 Jan 2006, Erich Neuwirth wrote:

 Subsetting from a dataframe with only one variable
 returns a vector, not a dataframe.
 This seems somewhat inconsistent.

Not at all.  It is entirely consistent with matrix-like indexing (the form 
you used).

 Wouldn't it be better if subsetting would respect
 the structure completely?

It depends how you do it.  [sel1, ] parallels a matrix, and drops 
dimensions unless drop = FALSE is supplied.  [sel1] returns a 
one-column data frame, and [[sel1]] returns a vector.

It is just a question of choosing the appropriate tool.  And any changes 
to this sort of thing (from the White book) would break a lot of careful 
code.
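
To make the distinction concrete, a quick check with the df1/sel1 objects from
the quoted post (illustration only):

class(df1[sel1, ])                # "integer": matrix-style indexing drops the dimension
class(df1[sel1, , drop = FALSE])  # "data.frame": dropping suppressed
class(df1[1])                     # "data.frame": list-style selection of column 1
class(df1[[1]])                   # "integer": extracting the column itself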



 v1 <- 1:4
 v2 <- 4:1
 df1 <- data.frame(v1)
 df2 <- data.frame(v1,v2)
 sel1 <- c(TRUE,TRUE,TRUE,TRUE)

 > df1[sel1,]
 [1] 1 2 3 4
 > df2[sel1,]
   v1 v2
 1  1  4
 2  2  3
 3  3  2
 4  4  1

 -- 
 Erich Neuwirth
 Institute for Scientific Computing and
 Didactic Center for Computer Science
 University of Vienna
 phone: +43-1-4277-39464  fax: +43-1-4277-39459

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Indentation in emacs

2006-01-12 Thread Göran Broström
I'm using emacs-21.4 on debian unstable, together with the latest ESS
implementation. I try to change the indentation to 4 by following the
advice in R-exts. It results in the following lines in my .emacs
file:

 (custom-set-variables
  ;; custom-set-variables was added by Custom -- don't edit or cut/paste it!
  ;; Your init file should contain only one such instance.
 '(c-basic-offset 4)
 '(c-default-style "bsd")
 '(latin1-display t nil (latin1-disp)))
(custom-set-faces
  ;; custom-set-faces was added by Custom -- don't edit or cut/paste it!
  ;; Your init file should contain only one such instance.
 )

But it doesn't work with R code  (with C code it works). So what is missing?
--
Göran Broström

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] Repeated measures aov with post hoc tests?

2006-01-12 Thread Fredrik Karlsson
Dear list,

I posted the message below a couple of days ago, and have not been able
to find any solution to this. I would really appreciate some help.

/Fredrik

On 1/10/06, Fredrik Karlsson [EMAIL PROTECTED] wrote:
 Dear list,

 I would like to perform an analysis on the following model:

 aov(ampratio ~ Type * Place * agemF + Error(speakerid/Place) ,data=aspvotwork)

 using the approach from http://www.psych.upenn.edu/~baron/rpsych/rpsych.html .

 Now, I got the test results, which indicate a significant interaction
 and main effects of the agemF variable. How do I find out at which level of
 agemF the effect may be found?

 How do I do this?

 I found a reference to TukeyHSD in the archives, but I cannot use it:

  TukeyHSD(aov(ampratio ~ Type * Place * agemF + 
  Error(speakerid/Place),data=aspvotwork))
 Error in TukeyHSD(aov(ampratio ~ Type * Place * agemF +
 Error(speakerid/Place),  :
 no applicable method for TukeyHSD

 Please help me.

 /Fredrik



--
My Gentoo + PVR-350 + IVTV + MythTV blog is on
http://gentoomythtv.blogspot.com/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] a series of 1's and -1's

2006-01-12 Thread Ingmar Visser
You could try to zip your data file and see whether there is a change in
file size that you feel is significant, in which case the series is not
random (-:
To be sure, there is no such thing as a positive test for randomness, only
tests for specific deviations from randomness, of which the runs test and the
entropy are just two examples. Zip programs use a whole bunch of these tests
to compress files by finding structure in the data.
best, ingmar

 From: Roger Bivand [EMAIL PROTECTED]
 Reply-To: [EMAIL PROTECTED]
 Date: Thu, 12 Jan 2006 08:42:25 +0100 (CET)
 To: Mark Leeds [EMAIL PROTECTED]
 Cc: R-Stat Help R-help@stat.math.ethz.ch
 Subject: Re: [R] a series of 1's and -1's
 
 On Wed, 11 Jan 2006, Mark Leeds wrote:
 
 Does anyone know of a simple test
 in any R package that given
 a series of negative ones and positive
 ones ( no other values are possible in the series )
 returns a test of whether the series is random or not.
 ( a test at each point would be good but
 I can use the apply function to implement
 that ) ?
 
 help.search("runs") points to function runs.test() in package tseries,
 with examples:
 
 x <- factor(sign(rnorm(100)))  # randomness
 runs.test(x)
 x <- factor(rep(c(-1, 1), 50)) # over-mixing
 runs.test(x)
 
 which looks like your case
 
  
thanks.
 
 
 
 **
 This email and any files transmitted with it are confidentia...{{dropped}}
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 
 
 -- 
 Roger Bivand
 Economic Geography Section, Department of Economics, Norwegian School of
 Economics and Business Administration, Helleveien 30, N-5045 Bergen,
 Norway. voice: +47 55 95 93 55; fax +47 55 95 95 43
 e-mail: [EMAIL PROTECTED]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Equal number of cuts in a contourplot with trellis

2006-01-12 Thread Jesus Frias
Dear R-helpers,

I need some help to produce a set of contour plots that I am
trying to make in order to compare surfaces between the levels of a
factor. For example: 

library(lattice)
g <- expand.grid(x = 60:100, y = 1:25, ti = c("a","b","c"))
g$z <- with(g,
  (-1e-4*x-1e-3*y-1e-5*x*y)*(ti=="a") +
  (1e-2*x-1e-3*y-1e-4*x*y)*(ti=="b") +
  (1e-3*x-1e-3*y-1e-5*x*y)*(ti=="c")
  )

contourplot(z ~ x * y | ti, data = g,
            cuts = 20,
            pretty = TRUE,
            screen = list(z = 30, x = -60))

As you can see in the figure, most of the contour lines end up in one of
the levels, and we cannot see what the other levels look like.

I would like to display the same number of cuts in each panel of the trellis.
I can make each of the contour plots separately and control the number of
cuts, but I am not able to plot all of them in one display.

Thanks in advance,

Jesus

Jesús María Frías Celayeta
School of Food Science and Environmental Health
Dublin Institute of Technology
Cathal Brugha St. Dublin 1
p + 353 1 4024459
f + 353 1 4024495
w http://www.dit.ie/DIT/tourismfood/science/




__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Software Reliability Using Cox Proportional Hazard

2006-01-12 Thread voodooochild
Hello,

i want to use coxph() for software reliability analysis, i use the 
following implementation

###
failure <- read.table("failure.dat", header=TRUE)
attach(failure)

f <- FailureNumber
t <- TimeBetweenFailure

# filling vector f with ones

for(i in 1:length(f)) {
  f[i] <- 1
}

library(survival)

# cox proportional hazard with covariable LOC

cox.res <- coxph(Surv(t,f)~LOC, data=failure)
plot(survfit(cox.res))
##


failure.dat is
FailureNumber  TimeBetweenFailure  LOC
 1              7                  120
 2             11                  135
 3              8                  141
 4             10                  150
 5             15                  162
 6             22                  169
 7             20                  170
 8             25                  181
 9             28                  188
10             35                  193

here TimeBetweenFailure gives the time between successive failures and
LOC is the actual Lines of Code.
What I do is fill the vector f with ones; f is then the censoring
variable in Surv. In survival analysis the censoring variable is 1 if
the patient dies; here I interpret the event "death" as
the appearance of a failure. And I use LOC as a covariable.

My question now is: is this approach right, or is there any big mistake
or wrong interpretation?


Thanks a Lot!
best regards
Andreas

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] a series of 1's and -1's

2006-01-12 Thread Johannes Hüsing
 You could try to zip your data file and see whether there is a change in
 file size that you feel is significant in which case the series is not
 random (-:

... after converting the -1s and 1s to bits, of course.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Obtaining the adjusted r-square given the regression

2006-01-12 Thread Keith Chamberlain
Hi people,
 I want to obtain the adjusted r-square given a set of coefficients (without
the intercept), and I don't know if there is a function that does it.
Exist

Dear Alexandra,

Without knowing what routine you were using that returned an R-square value
to you, it is a little difficult to tell. I am not able to comment on your
routine, but I'll try to give a satisfactory response to your first
question.

Assuming that you used lm() for your linear regression, then all you need to
do is see the help page for the lm() summary method, which explains how to
get at the values in the summary. In your console type the following:

?summary.lm

That help page describes how to get at the values in the summary, and
provides an example with getting the full correlation value (whereas the one
printed is in a 'pretty' form). I assume the other ones can be accessed
similarly, including the adjusted R^2. Your code might look something like
the following:

# Where object is the result of a call to lm()
summary(object)$adj.r.squared

The help also has a demonstration of how to exclude the intercept, where
you simply add a -1 term to the end of your formula.
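
For example, a minimal sketch with made-up data (not the poster's):

set.seed(1)
x <- rnorm(20); y <- 2 * x + rnorm(20)
fit <- lm(y ~ x - 1)              # -1 drops the intercept
summary(fit)$adj.r.squared        # extract the adjusted R^2 from the summary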

There may still be an accessor function available, which I do not know about
(BUT am certainly interested in if anyone has feedback). 

In the future, it would help people on the list decode your function if it,
too, were printed in a readable form. The layout of your function in your
message may have become garbled when it was stripped of all of its HTML
formatting. If the code was formatted in a readable way when you sent the
message, then that is what happened; in that case, try composing your
message in plain text to begin with.

I hope this helps.

Rgds,
KeithC.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Space between axis label and tick labels

2006-01-12 Thread Pikounis, Bill [CNTUS]
Hello Kare:
I think some global graphics settings can get you what you need, assuming
you are using base graphics functions such as plot(). See ?par for
comprehensive details. For the specific characteristics you mention, I have
found the arguments in par for:

 1) font size of axis and tick labels
 
 2) font thickness
 

cex.axis and cex.lab are helpful; and perhaps the font.* series

 3) placement of both axis and tick labels

mgp and tck are helpful.
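
For example (a sketch only; the values are arbitrary and just show which
argument does what):

par(cex.lab  = 1.2,        # axis title size
    cex.axis = 0.9,        # tick label size
    font.lab = 2,          # bold axis titles
    mgp = c(2.5, 0.8, 0),  # margin lines for axis title, tick labels, axis line
    tck = -0.02)           # tick length (negative = ticks point outward)
plot(1:10, xlab = "x", ylab = "y")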

Hope that helps,
Bill

---
Bill Pikounis, PhD
Nonclinical Statistics
Centocor, Inc.
Malvern, PA 19355 (USA)


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] Behalf Of Kare Edvardsen
 Sent: Wednesday, January 11, 2006 4:52 AM
 To: R-help
 Subject: [R] Space between axis label and tick labels
 
 
 I'm writing a publication in two-column format and need to
 shrink some plots. After increasing the axis labels, it does not look
 nice at all.
 The y-axis label and tick labels almost touch each other, and the x-axis
 tick labels expand into the plot instead of away from it. Is there a
 better way than cex to control the:
 
 1) font size of axis and tick labels
 
 2) font thickness
 
 3) placement of both axis and tick labels
 
 
 Cheers,
 
 Kare
 
 -- 
 ###
 Kare Edvardsen [EMAIL PROTECTED]
 Norwegian Institute for Air Research (NILU)
 Polarmiljosenteret
 NO-9296 Tromso   http://www.nilu.no
 Swb. +47 77 75 03 75 Dir. +47 77 75 03 90
 Fax. +47 77 75 03 76 Mob. +47 90 74 60 69
 ###
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Problem with NLSYSTEMFIT()

2006-01-12 Thread Kilian Plank
Hello,

 

I want to solve a nonlinear 3SLS problem with nlsystemfit(). The
equations

are of the form

y_it = f_i(x,t,theta)

The functions f_i(.) have to be formulated as R-functions. When invoking

nlsystemfit() I get the error

 

Error in deriv.formula(eqns[[i]], names(parmnames)) : 

Function 'f1' is not in the derivatives table

 

Isn't it possible to provide equations in the form

eq1 ~ f1(x,t,theta) etc. to nlsystemfit() ?

 

Kind regards,

 

Kilian Plank


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] envelopes of simulations

2006-01-12 Thread Sara Mouro
Hello!

I am writing you because I could not plot the confidence envelopes for
functions Jest, Jcross, Jdot, Jmulti, and L, using the Spatstat package.

I have already understood how to do that for Kest or Jest, that is:
JEnv <- plot(envelope(PPPData, Jest))
Where PPPData is my ppp object.


However, for Jcross I must specify the two marks I want to analyse.
That is, usually I would get the Jcross doing:
Jc <- Jcross(PPPData, "Aun", "Qsu")
For marks Aun and Qsu.


For L function, I can make:
K <- Kest(PPPData, correction="isotropic")
plot(K, r - sqrt(iso/pi) ~ r)


But I could not understand how to put all that together. 

To sum up, I do not understand how I can use the envelope function:
-   to plot Jcross (where and how do I specify the types/marks?)
-   to plot L (where do I write that I do not want K but
    r - sqrt(iso/pi) ~ r instead?)
-   etc. for similar functions that are not as simple as Jest, Kest or Gest.
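
(Not part of the original question, but a hedged sketch of the calls being
asked about, assuming envelope() forwards extra arguments such as i and j on
to the summary function, as current spatstat versions do:)

Jcenv <- envelope(PPPData, Jcross, i = "Aun", j = "Qsu")
plot(Jcenv)
## For L, the K envelope could be transformed in the plot call, analogous to
## the plot(K, ...) line above ("." standing for the envelope's columns is an
## assumption about plot.fv's formula interface):
Kenv <- envelope(PPPData, Kest, correction = "isotropic")
plot(Kenv, r - sqrt(./pi) ~ r)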

Hope someone can please help me on this.
I have already looked at the package's manual, but I could not find any
example similar to what I am trying to do.


Best Regards,
Sara Mouro.






[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] SPSS and R ? do they like each other?

2006-01-12 Thread Michael Reinecke
Thank you very much! write.SPSS works fine.

I just wonder why this very useful function is not part of any package; I have
not even found it on the web. For experts it may not be a big deal to write
their own export functions, but for newcomers like me it is almost impossible -
and at the same time it is essential for us to have good facilities for exchange
with our familiar statistical package. I think a lack of exchange tools might
be something that scares many people off and keeps them from getting to know R.

Well, just to give a greenhorn's perspective.

Best regards,

Michael


-----Original Message-----
From: Chuck Cleland [mailto:[EMAIL PROTECTED] 
Sent: Thursday, 12 January 2006 01:16
To: Michael Reinecke
Cc: R-help@stat.math.ethz.ch
Subject: Re: [R] SPSS and R ? do they like each other?

Michael Reinecke wrote:
  
 Thanks again for your answer! I tried it out. write.foreign produces SPSS
 syntax, but unfortunately this syntax tells SPSS to take the names (and not
 the labels) in order to produce SPSS variable labels. The former labels get
 lost.
 
 I tried a data frame produced by read.spss and one by spss.get. Here is the
 read.spss one (the labels meant to be exported are called "Text 1", ...):
 
 jjread <- read.spss("test2.sav", use.value.labels=TRUE,
                     to.data.frame=TRUE)
 
 str(jjread)
 
 `data.frame':   30 obs. of  3 variables:
  $ VAR1: num  101 102 103 104 105 106 107 108 109 110 ...
  $ VAR2: num  6 6 5 6 6 6 6 6 6 6 ...
  $ VAR3: num  0 0 6 7 0 7 0 0 0 8 ...
  - attr(*, "variable.labels")= Named chr  "Text 1" "Text2" "text 3"
   ..- attr(*, "names")= chr  "VAR1" "VAR2" "VAR3"
 
 datafile <- tempfile()
 codefile <- tempfile()
 write.foreign(jjread, datafile, codefile, package="SPSS")
 file.show(datafile)
 file.show(codefile)
 
 
 
 The syntax file I get is:
 
 DATA LIST FILE= "C:\DOKUME~1\reinecke\LOKALE~1\Temp\Rtmp15028\file27910"  free
 / VAR1 VAR2 VAR3  .
 
 VARIABLE LABELS
 VAR1 "VAR1"
  VAR2 "VAR2"
  VAR3 "VAR3"
  .
 
 EXECUTE.
 
 
 I am working on R 2.2.0. But I think a newer version won't fix it either,
 will it?

Here is a function based on modifying foreign:::writeForeignSPSS (by Thomas
Lumley) which might work for you:

write.SPSS <- function (df, datafile, codefile, varnames = NULL) {
    adQuote <- function(x){paste("\"", x, "\"", sep = "")}
    dfn <- lapply(df, function(x) if (is.factor(x))
        as.numeric(x)
    else x)
    write.table(dfn, file = datafile, row = FALSE, col = FALSE)
    if(is.null(attributes(df)$variable.labels)) varlabels <- names(df) else
        varlabels <- attributes(df)$variable.labels
    if (is.null(varnames)) {
        varnames <- abbreviate(names(df), 8)
        if (any(sapply(varnames, nchar) > 8))
            stop("I cannot abbreviate the variable names to eight or fewer letters")
        if (any(varnames != names(df)))
            warning("some variable names were abbreviated")
    }
    cat("DATA LIST FILE=", dQuote(datafile), " free\n", file = codefile)
    cat("/", varnames, " .\n\n", file = codefile, append = TRUE)
    cat("VARIABLE LABELS\n", file = codefile, append = TRUE)
    cat(paste(varnames, adQuote(varlabels), "\n"), ".\n", file = codefile,
        append = TRUE)
    factors <- sapply(df, is.factor)
    if (any(factors)) {
        cat("\nVALUE LABELS\n", file = codefile, append = TRUE)
        for (v in which(factors)) {
            cat("/\n", file = codefile, append = TRUE)
            cat(varnames[v], " \n", file = codefile, append = TRUE)
            levs <- levels(df[[v]])
            cat(paste(1:length(levs), adQuote(levs), "\n", sep = " "),
                file = codefile, append = TRUE)
        }
        cat(".\n", file = codefile, append = TRUE)
    }
    cat("\nEXECUTE.\n", file = codefile, append = TRUE)
}
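
A hedged usage sketch, mirroring the write.foreign() call quoted above
('jjread' is the data frame from that example):

datafile <- tempfile()
codefile <- tempfile()
write.SPSS(jjread, datafile, codefile)  # uses the "variable.labels" attribute if present
file.show(codefile)                     # the generated VARIABLE LABELS block now quotes the labels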

--
Chuck Cleland, Ph.D.
NDRI, Inc.
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 452-1424 (M, W, F)
fax: (917) 438-0894

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] extract variables from linear model

2006-01-12 Thread Jörg Schaber
Hi,

I fitted a linear model:
fit <- lm(y ~ a * b + c - 1, na.action='na.omit')

Now I want to extract only the a * b effects with confidence intervals. 
Of course, I can just add the coefficients by hand, but I think there 
should be an easier way.
I tried predict.lm with the 'terms' argument, but I didn't manage 
to do it.
Any hints are appreciated,

best,

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] SPSS and R ? do they like each other?

2006-01-12 Thread Chuck Cleland
Michael Reinecke wrote:
 Thank you very much! write.SPSS works fine.
 
 I just wonder, why this very useful function is not part of any package. I 
 have not even found it in the web. For experts it may not be a big deal to 
 write their own export functions, but for newcomers like me it is almost 
 impossible - and at the same time it is essential to us to have good 
 facilities for exchange with our familiar statistical package. I think a lack 
 of exchange tools might be something that scares many people off and keeps 
 them from getting to know R.
 
  Well, just to give a greenhorn's perspective.
 ...

   The tool for exporting to SPSS *is* available in the foreign package 
thanks to Thomas Lumley.  I just made a *small modification* to use the 
variable.labels attribute of a data frame if it's available and the 
names of the data frame if that attribute is not available.  Maybe 
Thomas will consider making a change to foreign:::writeForeignSPSS along 
those lines.

Chuck Cleland


-- 
Chuck Cleland, Ph.D.
NDRI, Inc.
71 West 23rd Street, 8th floor
New York, NY 10010
tel: (212) 845-4495 (Tu, Th)
tel: (732) 452-1424 (M, W, F)
fax: (917) 438-0894

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] dataframes with only one variable

2006-01-12 Thread Richard M. Heiberger
> df1
  v1
1  1
2  2
3  3
4  4
> df1[,]
[1] 1 2 3 4
> df1[,1]
[1] 1 2 3 4
> df1[,,drop=F]
  v1
1  1
2  2
3  3
4  4
> df1[,1,drop=F]
  v1
1  1
2  2
3  3
4  4
> df1[1]
  v1
1  1
2  2
3  3
4  4
> df1[[1]]
[1] 1 2 3 4
>


For transfers from Excel to R using the [put/get] R dataframe commands,
I think it is important always to use the drop=FALSE argument
(as I assume you are doing in RExcel V1.55).  The reason
for this is to maintain a rigid relationship between the only partially
compatible conventions of Excel and R.

For strictly within R use, the case is less clear.  I have trained myself 
always (well 85% on the first try) to use the drop=FALSE argument when I care
about the structure after the copy.

The tension between keeping the structure and demoting the structure predates
data.frames.  This was a design issue in matrices as well.
> tmp <- matrix(1:6,2,3)
> tmp
     [,1] [,2] [,3]
[1,]    1    3    5
[2,]    2    4    6
> tmp[1,]
[1] 1 3 5
> tmp[1,,drop=FALSE]
     [,1] [,2] [,3]
[1,]    1    3    5
>

Rich

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] SPSS and R ? do they like each other?

2006-01-12 Thread ronggui
2006/1/12, Chuck Cleland [EMAIL PROTECTED]:
 Michael Reinecke wrote:
  Thank you very much! write.SPSS works fine.
 
  I just wonder, why this very useful function is not part of any package. I 
  have not even found it in the web. For experts it may not be a big deal to 
  write their own export functions, but for newcomers like me it is almost 
  impossible - and at the same time it is essential to us to have good 
  facilities for exchange with our familiar statistical package. I think a 
  lack of exchange tools might be something that scares many people off and 
  keeps them from getting to know R.
 
  Well, just do give a greenhorn ´s perspective.
  ...

The tool for exporting to SPSS *is* available in the foreign package
 thanks to Thomas Lumley.  I just made a *small modification* to use the
 variable.labels attribute of a data frame if it's available and the
 names of the data frame if that attribute is not available.  Maybe
 Thomas will consider making a change to foreign:::writeForeignSPSS along
 those lines.

I agree with this point. It's useful when one gets an SPSS data file
into R to do something and then exports the data back to an SPSS data file.



Re: [R] Indentation in emacs

2006-01-12 Thread Seth Falcon
On 12 Jan 2006, [EMAIL PROTECTED] wrote:

 I'm using emacs-21.4 on debian unstable, together with the latest
 ESS implementation. I try to change indentation to 4 by following
 the advise in R-exts: It results in the following lines in my
 .emacs file:

 (custom-set-variables
 ;; custom-set-variables was added by Custom -- don't edit or
 ;; cut/paste it!  Your init file should contain only one such
 ;; instance.
 '(c-basic-offset 4)
 '(c-default-style "bsd")
 '(latin1-display t nil (latin1-disp)))
 (custom-set-faces
 ;; custom-set-faces was added by Custom -- don't edit or cut/paste it!
 ;; Your init file should contain only one such instance.
 )

 But it doesn't work with R code (with C code it works). So what is
 missing?

Try:
(setq ess-indent-level 4)

You may also be interested in the ESS mail list (a better place for
such questions): 
 [EMAIL PROTECTED] mailing list
 https://stat.ethz.ch/mailman/listinfo/ess-help


+ seth

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] tapply and weighted means

2006-01-12 Thread Florent Bresson
I'm trying to compute weighted means on different
groups, but it only returns NA. I use the following
data.frame, truc:

x  y  w
1  1  1
1  2  2
1  3  1
1  4  2
0  2  1
0  3  2
0  4  1
0  5  1

where x is a factor, and then use the command :

tapply(truc$y,list(truc$x),wtd.mean, weights=truc$w)

I just get NA. What's the problem ? What can I do ?

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] tapply and weighted means

2006-01-12 Thread Dimitris Rizopoulos
you need also to split the 'w' column, for each level of 'x'; you 
could use:

lapply(split(truc, truc$x), function(z) weighted.mean(z$y, z$w))


I hope it helps.

Best,
Dimitris


Dimitris Rizopoulos
Ph.D. Student
Biostatistical Centre
School of Public Health
Catholic University of Leuven

Address: Kapucijnenvoer 35, Leuven, Belgium
Tel: +32/(0)16/336899
Fax: +32/(0)16/337015
Web: http://www.med.kuleuven.be/biostat/
 http://www.student.kuleuven.be/~m0390867/dimitris.htm



- Original Message - 
From: Florent Bresson [EMAIL PROTECTED]
To: R-help r-help@stat.math.ethz.ch
Sent: Thursday, January 12, 2006 3:44 PM
Subject: [R] tapply and weighted means


 I' m trying to compute weighted mean on different
 groups but it only returns NA. If I use the following
 data.frame truc:

 x  y  w
 1  1  1
 1  2  2
 1  3  1
 1  4  2
 0  2  1
 0  3  2
 0  4  1
 0  5  1

 where x is a factor, and then use the command :

 tapply(truc$y,list(truc$x),wtd.mean, weights=truc$w)

 I just get NA. What's the problem ? What can I do ?

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 


Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Problem with NLSYSTEMFIT()

2006-01-12 Thread Arne Henningsen
Dear Kilian,

On Thursday 12 January 2006 14:20, Kilian Plank wrote:
 Hello,



 I want to solve a nonlinear 3SLS problem with nlsystemfit(). The
 equations

 are of the form

 y_it = f_i(x,t,theta)

 The functions f_i(.) have to be formulated as R-functions. When invoking

 nlsystemfit() I get the error



 Error in deriv.formula(eqns[[i]], names(parmnames)) :

 Function 'f1' is not in the derivatives table



 Isn't it possible to provide equations in the form

 eq1 ~ f1(x,t,theta) etc. to nlsystemfit() ?

Unfortunately, this is not (yet) possible. You have to specify your equations 
like eq1 <- y ~ b0 + x1^b1 + b1^2 * log(x2) (see the documentation of 
nlsystemfit). Furthermore, I suggest that you ask the author of nlsystemfit 
(Jeff Hamann, see cc) how reliable the algorithms for the non-linear 
estimation are in the current version of systemfit (while the code for the 
linear estimation is very mature now, the non-linear estimation is still 
under development). You are invited to help us improving the code of 
nlsystemfit ;-)

Best wishes,
Arne

 Kind regards,



 Kilian Plank


   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html

-- 
Arne Henningsen
Department of Agricultural Economics
University of Kiel
Olshausenstr. 40
D-24098 Kiel (Germany)
Tel: +49-431-880 4445
Fax: +49-431-880 1397
[EMAIL PROTECTED]
http://www.uni-kiel.de/agrarpol/ahenningsen/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] tapply and weighted means

2006-01-12 Thread Frank E Harrell Jr
Dimitris Rizopoulos wrote:
 you need also to split the 'w' column, for each level of 'x'; you 
 could use:
 
 lapply(split(truc, truc$x), function(z) weighted.mean(z$y, z$w))
 
 
 I hope it helps.
 
 Best,
 Dimitris

Or:
library(Hmisc)
?wtd.mean
The help file has a built-in example of this.
Frank

 
 
 Dimitris Rizopoulos
 Ph.D. Student
 Biostatistical Centre
 School of Public Health
 Catholic University of Leuven
 
 Address: Kapucijnenvoer 35, Leuven, Belgium
 Tel: +32/(0)16/336899
 Fax: +32/(0)16/337015
 Web: http://www.med.kuleuven.be/biostat/
  http://www.student.kuleuven.be/~m0390867/dimitris.htm
 
 
 
 - Original Message - 
 From: Florent Bresson [EMAIL PROTECTED]
 To: R-help r-help@stat.math.ethz.ch
 Sent: Thursday, January 12, 2006 3:44 PM
 Subject: [R] tapply and weighted means
 
 
 
I' m trying to compute weighted mean on different
groups but it only returns NA. If I use the following
data.frame truc:

x  y  w
1  1  1
1  2  2
1  3  1
1  4  2
0  2  1
0  3  2
0  4  1
0  5  1

where x is a factor, and then use the command :

tapply(truc$y,list(truc$x),wtd.mean, weights=truc$w)

I just get NA. What's the problem ? What can I do ?

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

 
 
 
 Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 


-- 
Frank E Harrell Jr   Professor and Chair   School of Medicine
  Department of Biostatistics   Vanderbilt University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] wilcox.test warnig message p-value where are the zeros in the data?

2006-01-12 Thread Knut Krueger
Does anybody know why there are the two warnings in the example below?

Regards Knut

 > day_4
 [1] 540   1   1   1   1   1   1 300 720 480
 > day_1
 [1] 438 343   1 475   1 562 500 435 1045 890
 
 > is.vector(day_1)
 [1] TRUE
 > is.vector(day_4)
 [1] TRUE
 

 > wilcox.test(day_4, day_1, paired=TRUE, alternative="two.sided",
 +             exact=TRUE, conf.int=TRUE)

Wilcoxon signed rank test with continuity correction

data:  day_4 and day_1
V = 1, p-value = 0.02086
alternative hypothesis: true mu is not equal to 0
95 percent confidence interval:
 -486.5 -120.0
sample estimates:
(pseudo)median
  -348

Warning messages:
1: cannot compute exact p-value with zeroes in:
   wilcox.test.default(day_4, day_1, paired = TRUE, alternative = "two.sided",
2: cannot compute exact confidence interval with zeroes in:
   wilcox.test.default(day_4, day_1, paired = TRUE, alternative = "two.sided",

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] extract variables from linear model

2006-01-12 Thread jruohonen
 I fitted a linear model:
 fit - lm(y ~ a * b + c - 1 , na.action='na.omit')

wouldn't a simple 

coef(fit)[2] 

work?

Jukka Ruohonen.

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] wilcox.test warnig message p-value where are the zeros in the data?

2006-01-12 Thread Peter Dalgaard
Knut Krueger [EMAIL PROTECTED] writes:

 does anybody know why there are the two warnings in the example above?
 
 Regards Knut
 
   > day_4
   [1] 540   1   1   1   1   1   1 300 720 480
   > day_1
   [1] 438 343   1 475   1 562 500 435 1045 890
  
   > is.vector(day_1)
   [1] TRUE
   > is.vector(day_4)
   [1] TRUE
 

The paired Wilcoxon test depends on pairwise differences and as far as
I can see you have two of those being zero, in cases 3 and 5.
 
   wilcox.test(day_4 
 ,day_1,paired=TRUE,alternative=two.sided,exact=TRUE,conf.int=TRUE)
 
 Wilcoxon signed rank test with continuity correction
 
 data:  day_4 and day_1
 V = 1, p-value = 0.02086
 alternative hypothesis: true mu is not equal to 0
 95 percent confidence interval:
  -486.5 -120.0
 sample estimates:
 (pseudo)median
   -348
 
 Warning messages:
 1: cannot compute exact p-value with zeroes in: 
 wilcox.test.default(day_4, day_1, paired = TRUE, alternative = 
 two.sided, 
 2: cannot compute exact confidence interval with zeroes in: 
 wilcox.test.default(day_4, day_1, paired = TRUE, alternative = 
 two.sided,
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] tapply and weighted means

2006-01-12 Thread Gavin Simpson
On Thu, 2006-01-12 at 15:44 +0100, Florent Bresson wrote:
 I' m trying to compute weighted mean on different
 groups but it only returns NA. If I use the following
 data.frame truc:
 
 x  y  w
 1  1  1
 1  2  2
 1  3  1
 1  4  2
 0  2  1
 0  3  2
 0  4  1
 0  5  1
 
 where x is a factor, and then use the command :
 
 tapply(truc$y,list(truc$x),wtd.mean, weights=truc$w)
 
 I just get NA. What's the problem ? What can I do ?

Florent,

I guess you didn't read the help for tapply, which in the Value section
states:

 Note that optional arguments to 'FUN' supplied by the '...'
 argument are not divided into cells.  It is therefore
 inappropriate for 'FUN' to expect additional arguments with the
 same length as 'X'.

So tapply is not the right tool for this job. We can use by() instead (a
wrapper for tapply) as so:

dat <- matrix(scan(), byrow = TRUE, ncol = 3)
1  1  1
1  2  2
1  3  1
1  4  2
0  2  1
0  3  2
0  4  1
0  5  1

colnames(dat) <- c("x", "y", "w")
dat <- as.data.frame(dat)
dat
(res <- by(dat, dat$x, function(z) weighted.mean(z$y, z$w)))

but if you want to easily access the numbers you need to do a little
work, e.g.

as.vector(res)

Also, I don't see a function wtd.mean in standard R and weighted.mean()
doesn't have a weights argument, so I guess you are using a function
from another package and did not tell us.

HTH,

Gav
-- 
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%
Gavin Simpson [T] +44 (0)20 7679 5522
ENSIS Research Fellow [F] +44 (0)20 7679 7565
ENSIS Ltd.  ECRC [E] gavin.simpsonATNOSPAMucl.ac.uk
UCL Department of Geography   [W] http://www.ucl.ac.uk/~ucfagls/cv/
26 Bedford Way[W] http://www.ucl.ac.uk/~ucfagls/
London.  WC1H 0AP.
%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%~%

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] I think simple R question

2006-01-12 Thread Mark Leeds
I have a vector x with numbers (1 or -1) in it, and I want to
mark a new vector with the sign of the value of a streak
of >= H, where H = some number (at the next spot in the vector).
 
So, say H was equal to 3 and
I had a vector of
 
[1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]  [10]
 1   -1    1    1    1   -1    1    1   -1   -1
 
then, I would want a function to return a new
vector of
 
[1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]  [10]
 0    0    0    0    0    1    0    0    0    0
 
As I said, I used to do these things, but
it's been a while and I'm rusty with this stuff.
 
Without looping is preferred, but looping is okay
also.
 
   Mark
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 


**
This email and any files transmitted with it are confidentia...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] I think simple R question

2006-01-12 Thread roger koenker
see ?rle
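
For example, a hedged sketch of one rle()-based way to get the output asked
for (H and x as in the question):

x <- c(1, -1, 1, 1, 1, -1, 1, 1, -1, -1)
H <- 3
r <- rle(x)
ends <- cumsum(r$lengths)                        # index where each run ends
out  <- numeric(length(x))
hit  <- which(r$lengths >= H & ends < length(x)) # runs long enough, with a "next spot"
out[ends[hit] + 1] <- r$values[hit]              # mark the next spot with the run's sign
out
## [1] 0 0 0 0 0 1 0 0 0 0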


url:www.econ.uiuc.edu/~rogerRoger Koenker
email[EMAIL PROTECTED]Department of Economics
vox: 217-333-4558University of Illinois
fax:   217-244-6678Champaign, IL 61820


On Jan 12, 2006, at 9:56 AM, Mark Leeds wrote:

 I have a vector x with numbers (1 or -1) in it, and I want to
 mark a new vector with the sign of the value of a streak
 of >= H, where H = some number (at the next spot in the vector).

 So, say H was equal to 3 and
 I had a vector of

 [1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]  [10]
  1   -1    1    1    1   -1    1    1   -1   -1

 then, I would want a function to return a new
 vector of

 [1]  [2]  [3]  [4]  [5]  [6]  [7]  [8]  [9]  [10]
  0    0    0    0    0    1    0    0    0    0

 As I said, I used to do these things like this
 it's been a while and I'm rusty with this stuff.

 Without looping is preferred but looping is okay
 also.

Mark





















 **
 This email and any files transmitted with it are confidentia... 
 {{dropped}}

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting- 
 guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] infinite recursion in do.call when lme4 loaded only

2006-01-12 Thread Dieter Menne
A large program which worked with lme4/R about a year ago failed when I
re-ran it today. I reproduced the problem with the program below.

-- When lme4 is not loaded, the program runs ok and fast enough
-- When lme4 is loaded (but never used), the do.call fails
   with infinite recursion after 60 seconds. Memory use increases
   beyond bounds in the task manager.
-- I tested a few S3-based packages (MASS, nlme) and did not get
   similar problems

Current workaround: do lme4-processing in a separate program.


--
#library(lme4) # uncomment this to see the problem
np <- 12
nq <- 20
nph <- 3
nrep <- 30
grd <- expand.grid(Pat=as.factor(1:np),
                   Q=as.factor(1:nq),
                   Phase=as.factor(1:nph))
df <- with(grd,
  data.frame(Pat=Pat, Q=Q, Phase=Phase, Resp = rnorm(np*nq*nph*nrep)))

score <- function(x) {
  data.frame(Pat=x$Pat[1], Phase=x$Phase[1], Q=x$Q[1], score = mean(x$Resp))
}

# about 20 sec
caScore <- by(df, list(Pat=df$Pat, Phase=df$Phase, Q=df$Q), FUN = score)

ca1 = do.call(rbind, caScore)
# Without lme4: 3 seconds
# With lme4: After 60 sec:
#Error: evaluation nested too deeply: infinite recursion /
#options(expressions=)?


---

platform i386-pc-mingw32
arch i386
os   mingw32
system   i386, mingw32
status
major2
minor2.1
year 2005
month12
day  20
svn rev  36812
language R

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Equal number of cuts in a contourplot with trellis

2006-01-12 Thread Deepayan Sarkar
On 1/12/06, Jesus Frias [EMAIL PROTECTED] wrote:
 Dear R-helpers,

   I need some help to produce a set of contour plots that I am
 trying to make in order to compare surfaces between the levels of a
 factor. For example:

 library(lattice)
 g - expand.grid(x = 60:100, y = 1:25, ti = c(a,b,c))
 g$z -with(g,
   (-1e-4*x-1e-3*y-1e-5*x*y)*(ti==a) +
   (1e-2*x-1e-3*y-1e-4*x*y)*(ti==b) +
   (1e-3*x-1e-3*y-1.e-5*x*y)*(ti==c)
)

 contourplot(z~ x * y|ti, data = g,
 cuts=20,
 pretty=T,
 screen = list(z = 30, x = -60))

 As you can see in the figure, most of the contour lines are in one of
 the levels and we are not able to see how the other levels look like.

 I would like to display the same number of cuts in each of the trellis.
 I can make each of the contourplots separately and control the number of
 cuts but I am not able to plot all of them in one.

The simplest solution is to recompute the levels for each panel function:

contourplot(z ~ x * y | ti, data = g,
            label.style = "align",
            panel = function(x, y, z, subscripts, at, ...) {
                at <- pretty(z[subscripts], 10)
                panel.contourplot(x, y, z,
                                  subscripts = subscripts,
                                  at = at,
                                  ...)
            })

Alternatively, you could pass in a suitable 'at' vector computed externally.

Deepayan
--
http://www.stat.wisc.edu/~deepayan/

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Repeated measures aov with post hoc tests?

2006-01-12 Thread Gregory Snow
The multcomp package may do what you want (there is mention of nested
variables in the help).

-- 
Gregory (Greg) L. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
[EMAIL PROTECTED]
(801) 408-8111
 
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of 
 Fredrik Karlsson
 Sent: Thursday, January 12, 2006 3:24 AM
 To: r-help@stat.math.ethz.ch
 Subject: Re: [R] Repeated measures aov with post hoc tests?
 
 Dear list,
 
 I posted the message below a cople of days ago, and have not 
 been able to find any solution to this. I do really want some help.
 
 /Fredrik
 
 On 1/10/06, Fredrik Karlsson [EMAIL PROTECTED] wrote:
  Dear list,
 
  I would like to perform an analysis on the following model:
 
  aov(ampratio ~ Type * Place * agemF + Error(speakerid/Place) 
  ,data=aspvotwork)
 
  using the approach from 
 http://www.psych.upenn.edu/~baron/rpsych/rpsych.html .
 
  Now, I got the test results, wich indicate a significant 
 interaction 
  and main effects of the agemF variable. How do I find at 
 what level of 
  agemF the effect may be found.
 
  How do I do this?
 
  I found a reference to TukeyHSD in the archives, but I 
 cannot use it:
 
   TukeyHSD(aov(ampratio ~ Type * Place * agemF + 
   Error(speakerid/Place),data=aspvotwork))
  Error in TukeyHSD(aov(ampratio ~ Type * Place * agemF + 
  Error(speakerid/Place),  :
  no applicable method for TukeyHSD
 
  Please help me.
 
  /Fredrik
 
 
 
 --
 My Gentoo + PVR-350 + IVTV + MythTV blog is on 
 http://gentoomythtv.blogspot.com/
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] infinite recursion in do.call when lme4 loaded only

2006-01-12 Thread Peter Dalgaard
Dieter Menne [EMAIL PROTECTED] writes:

 A larg program which worked with lme4/R about a year ago failed when I
 re-run it today. I reproduced the problem with the program below.
 
 -- When lme4 is not loaded, the program runs ok and fast enough
 -- When lme4 is loaded (but never used), the do.call fails
with infinite recursion after 60 seconds. Memory used increases
beyond bonds in task manager.
 -- I tested a few S3 based packages (MASS, nlme) and did not get
similar problems
 
 Current workaround: do lme4-processing in a separate program.

Looks like it conks out when the number of frames to rbind is bigger
than about 110. 

Current releases have
 options(expressions)
$expressions
[1] 1000

It was 5000 for a while, but we found that it could overflow the C
stack on some systems. Since your example has 720 lines, I can't quite
rule out that the problem was really there all the time.

However, it surely has to do with methods dispatch:

 system.time(do.call(rbind.data.frame,caScore))
[1] 0.99 0.00 0.99 0.00 0.00

which provides you with another workaround.

 
 
 --
 #library(lme4) # uncomment this to see the problem
 np <- 12
 nq <- 20
 nph <- 3
 nrep <- 30
 grd <- expand.grid(Pat=as.factor(1:np),
 Q=as.factor(1:nq),
 Phase=as.factor(1:nph))
 df <- with(grd,
   data.frame(Pat=Pat,Q=Q,Phase=Phase,Resp = rnorm(np*nq*nph*nrep)))
 
 score <- function(x) {
  data.frame(Pat=x$Pat[1],Phase=x$Phase[1],Q=x$Q[1],score = mean(x$Resp))
 }
 
 # about 20 sec
 caScore <- by(df,list(Pat=df$Pat,Phase=df$Phase,Q=df$Q),FUN = score )
 
 ca1 = do.call(rbind,caScore)
 # Without lme4: 3 seconds
 # With lme4: After 60 sec:
 #Error: evaluation nested too deeply: infinite recursion /
 #options(expressions=)?
 
 
 ---
 
 platform i386-pc-mingw32
 arch i386
 os   mingw32
 system   i386, mingw32
 status
 major2
 minor2.1
 year 2005
 month12
 day  20
 svn rev  36812
 language R
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
 

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] extract variables from linear model

2006-01-12 Thread Jörg Schaber
Hi,

I fitted a mixed linear model y = a + b + a*b + c + error, with c being 
the random factor:
lmefit - lme(y ~ a * b - 1 , random = ~ 1 | c, na.action='na.omit')

Is there a way to omit some level combinations of the cross-term a:b? 
E.g. those that are not significant?

When I add the coefficients of a, b and a:b to get the combined effect 
without the effect of c, do the standard errors of the coefficients 
also add?

best,

joerg

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R - Wikis and R-core

2006-01-12 Thread phgrosjean
Hello Martin and others,

I am happy with this decision. I'll look a little bit at this next week.

Best,

Philippe Grosjean

 We've had a small review time within R-core on this topic,
 and would like to state the following:

 --
 The R-core team welcomes proposals to develop an R-wiki.

 - We would consider linking a very small number of Wikis (ideally one)
   from www.r-project.org and offering an address in the r-project.org
 domain (such as 'wiki.r-project.org').

 - The core team has no support time to offer, and would be looking for
   a medium-term commitment from a maintainer team for the Wiki(s).

 - Suggestions for the R documentation would best be filtered through the

   Wiki maintainers, who could e.g. supply suggested patches during the
 alpha  phase of an R release.
 --

 Our main concerns have been about ensuring the quality of such extra
 documentation projects, hence the 2nd point above.
 Several of our more general, not mainly R, experiences have been
 of outdated web pages which continue to be used as
 references when their advice has long been superseded.
 I think it's very important to try ensuring that this won't
 happen with an R Wiki.

 Martin Maechler, ETH Zurich

 PhGr == Philippe Grosjean [EMAIL PROTECTED]
 on Sun, 8 Jan 2006 17:00:44 +0100 (CET) writes:

 PhGr Hello all, Sorry for not taking part of this
 PhGr discussion earlier, and for not answering Detlef
 PhGr Steuer, Martin Maechler, and others that asked more
 PhGr direct questions to me. I am away from my office and
 PhGr my computer until the 16th of January.

 PhGr Just quick and partial answers: 1) I did not know
 PhGr about Hamburg RWiki. But I would be happy to merge
 PhGr both in one or the other way, as Detlef suggests it.

 PhGr 2) I choose DokuWiki as the best engine after a
 PhGr careful comparison of various Wiki engines. It is the
 PhGr best one, as far as I know, for the purpose of
 PhGr writing software documentation and similar
 PhGr pages. There is an extensive and clearly presented
 PhGr comparison of many Wiki engines at:
 PhGr http://www.wikimatrix.org/.

 PhGr 3) I started to change DokuWiki (addition of various
 PhGr plugins, addition of R code syntax coloring with
 PhGr GESHI, etc...). So, it goes well beyond all current
 PhGr Wiki engines regarding its suitability to present R
 PhGr stuff.

 PhGr 4) The reasons I did this is because I think the Wiki
 PhGr format could be of a wider use. I plan to change a
 PhGr little bit the DokuWiki syntax, so that it works with
 PhGr plain .R code files (Wiki part is simply embedded in
 PhGr commented lines, and the rest is recognized and
 PhGr formatted as R code by the Wiki engine). That way, the
 PhGr same Wiki document can either be rendered by the Wiki
 PhGr engine for a nice presentation, or sourced in R
 PhGr indifferently.

 PhGr 5) My last idea is to add a Rpad engine to the Wiki,
 PhGr so that one could play with R code presented in the
 PhGr Wiki pages and see the effects of changes directly in
 PhGr the Wiki.

 PhGr 6) Regarding the content of the Wiki, it should be
 PhGr nice to propose to the authors of various existing
 PhGr document to put them in a Wiki form. Something like
 PhGr Statistics with R
 PhGr (http://zoonek2.free.fr/UNIX/48_R/all.html) is written
 PhGr in a way that stimulates additions to pages in
 PhGr perpetual construction, if it was presented in a Wiki
 PhGr form. It is licensed as Creative Commons
 PhGr Attribution-NonCommercial-ShareAlike 2.5 license, that
 PhGr is, exactly the same one as DokuWiki that I choose for
 PhGr R Wiki. Of course, I plan to ask its author to do so
 PhGr before putting its hundreds of very interesting pages
 PhGr on the Wiki... I think it is vital to have already
 PhGr something in the Wiki, in order to attract enough
 PhGr readers, and then enough contributors!

 PhGr 7) Regarding spamming and vandalism, DokuWiki allows
 PhGr to manage rights and users, even individually for
 PhGr pages. I think it would be fine to lock pages that
 PhGr reach a certain maturity (read-only / editable by
 PhGr selected users only), with a link to a discussion page
 PhGr which remains freely accessible at the bottom of
 PhGr locked pages.

 PhGr 8) I would be happy to contribute this work to the R
 PhGr foundation in one way or the other to integrate it in
 PhGr http://www.r-project.org or
 PhGr http://cran.r-project.org. But if it is fine keeping
 PhGr it in http://www.sciviews.org as well, it is also fine
 PhGr for me.

 PhGr I suggest that all interested people drop a little
 PhGr email to my mailbox.  I'll recontact 

[R] Curve fitting

2006-01-12 Thread ndurand
Hi!

I have a problem of curve fitting.

I use the following data :

 - vector of predictor data : 
0
0.4
0.8
1.2
1.6

- vector of response data : 
0.81954
0.64592
0.51247
0.42831
0.35371

 I perform parametric fits using custom equations

when I use this equation:   y = yo + K*(1/(1+exp(-(a+b*ln(x))))),   the
fitting result is OK,
but when I use this more general equation:   y = yo + K*(1/(1+exp(-(a+b*log(x)+c*x)))),
then I get an aberrant curve!

I don't understand that... The second fitting should be at least as good 
as the first one because when taking c=0, both equations are identical!

There is a mathematical phenomenon here that I don't understand! Could 
someone help me?

Thanks a lot in advance!

Nadège 

[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] infinite recursion in do.call when lme4 loaded only

2006-01-12 Thread Dieter Menne
Peter Dalgaard p.dalgaard at biostat.ku.dk writes:

  A larg program which worked with lme4/R about a year ago failed when I
  re-run it today. I reproduced the problem with the program below.

  -- When lme4 is loaded (but never used), the do.call fails
 with infinite recursion after 60 seconds. Memory used increases
 beyond bonds in task manager.
 
 However, it surely has to do with methods dispatch:
 
  system.time(do.call(rbind.data.frame,caScore))
 [1] 0.99 0.00 0.99 0.00 0.00
 
 which provides you with another workaround.

Peter, I had already increased the options value, but I still don't understand 
what this recursion overflow has to do with loading lme4.

Dieter

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] t-test for standard deviations

2006-01-12 Thread mirko sanpietrucci
Dear R-users,
I am new to the list and I would like to submit (probably) a stupid 
question:

I found in a paper a reference to a t-test for evaluating the difference 
between the standard deviations of 2 samples.
This test is performed in the paper, but the methodology is not explained and 
no reference is given.

Does anyone know where I can find references to this test and if it is 
implemented in R?

Thanks in advance for your help,

Mirko 
[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] CRAN versions of lme4/Matrix don't appear to work with R 2.1.1

2006-01-12 Thread White, Charles E WRAIR-Wash DC
I am currently using R 2.1.1 under Windows and I do not seem to be able
to load the current versions of lme4/Matrix. I have run
'update.packages.' I understand this is still experimental software but
I would like access to a working version.

Thanks.

Chuck

R Output:

 library(lme4)
Loading required package: Matrix
Error in lazyLoadDBfetch(key, datafile, compressed, envhook) : 
ReadItem: unknown type 241
In addition: Warning messages:
1: package 'lme4' was built under R version 2.3.0 
2: package 'Matrix' was built under R version 2.3.0 
Error: package 'Matrix' could not be loaded


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] t-test for standard deviations

2006-01-12 Thread Ted Harding
On 12-Jan-06 mirko sanpietrucci wrote:
 Dear R-users,
 I am new to the list and I would like to submit (probably)
 a stupid question:
 
 I found in a paper a reference to a t-test for the evaluationg the
 difference between the standard deviations of 2 samples.
 This test is performed in the paper but the methodology is not
 explained and any reference is reported.
 
 Does anyone know where I can find references to this test and if it is
 implemented in R?
 
 Thenks in advance for your help,
 
 Mirko

If the paper says that a

1) t-test

was used for evaluating the difference between the

2) standard deviations

of 2 samples

then I suspect that one or the other of these is a misprint.

To compare standard deviations (more precisely, variances)
you could use a (1)F-test.

Or you would use a t-test to evaluate the difference between
the (2)means of 2 samples.
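
For instance, with two hypothetical samples x and y (not data from the paper),
the two tests would be

set.seed(1)
x <- rnorm(30, sd = 1)
y <- rnorm(30, sd = 2)
var.test(x, y)   # F-test comparing the two variances
t.test(x, y)     # t-test comparing the two means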

If it is really obscure what was done, perhaps an appropriate
quotation from the paper would help to ascertain the problem.

Best wishes,
Ted.


E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 12-Jan-06   Time: 18:52:31
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] I think not so hard question

2006-01-12 Thread Liaw, Andy
Mark,

It's a bit unclear whether you're looking for a run of exactly 3, or at
least 3.  If it's exactly 3, what Prof. Koenker suggested (using rle())
should help you a lot.  If you want runs of at least 3, it should work
similarly as well.

> set.seed(1)
> (x <- sample(c(-1, 1), 100, replace=TRUE))
  [1] -1 -1  1  1 -1  1  1  1  1 -1 -1 -1  1 -1  1 -1  1  1 -1  1  1 -1  1
-1 -1 -1
 [27] -1 -1  1 -1 -1  1 -1 -1  1  1  1 -1  1 -1  1  1  1  1  1  1 -1 -1  1
1 -1  1
 [53] -1 -1 -1 -1 -1  1  1 -1  1 -1 -1 -1  1 -1 -1  1 -1  1 -1  1 -1 -1 -1
1  1 -1
 [79]  1  1 -1  1 -1 -1  1 -1  1 -1 -1 -1 -1 -1  1  1  1  1 -1 -1  1  1
> r <- rle(x)$length
> p <- which(r == 3)
> idx <- cumsum(r)[p] - 2
> sapply(idx, function(i) x[i:(i+2)])
 [,1] [,2] [,3] [,4]
[1,]   -11   -1   -1
[2,]   -11   -1   -1
[3,]   -11   -1   -1
> idx
[1] 10 35 62 73
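
For runs of at least Y (rather than exactly 3), a plain-loop sketch along the
same lines, with x as above and pred a name of my own choosing, would be

Y <- 3
pred <- rep(NA, length(x))              # NA where no prediction is made
for (i in Y:(length(x) - 1)) {
  run <- x[(i - Y + 1):i]
  if (all(run == run[1])) pred[i + 1] <- run[1]
}
table(pred, x, useNA = "ifany")         # predictions vs. actual values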

Andy


From: Mark Leeds
 
 I'm sorry to bother this list so much
 But I haven't programmed in
 A while and I'm struggling.
  
 I have a vector in R of 1's and -1's
 And I want to use a streak of size Y
 To predict that the same value will
 Be next.
  
 So, suppose Y = 3. Then,  if there is a streak of three 
 ones in a row, then I will predict that the next value is
 a 1. But, if there is a streak of 3 -1's in a row,
 then I will predict that a -1 is next. Otherwise,
 I don't predict anything.
  
 I am really new to R and kind of struggling
 And I was wondering if someone could show
 how to do this ?
  
 In other words, given a vector of -1's
 And 1's,  I am unable ( I've been trying
 For 2 days ) to create a new vector that
 Has the predictions in it at the appropriate
 places ? Thanks.
  
  
 
 
 **
 This email and any files transmitted with it are 
 confidentia...{{dropped}}
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Multilevel models with mixed effects in R?

2006-01-12 Thread Charles Partridge
Group,

I am new to R.  In my work as a program evaluator, I am regularly asked
to estimate effect sizes of prevention/intervention and educational
programs on various student outcomes (e.g. academic achievement).  In
many cases, I have access to data over three or more time periods (e.g.
growth in proficiency test scores). 

I usually have multiple independent and dependent variables in each
model along with covariates.  I have historically utilized latent growth
curve structural equation models, but would like to include random
effects in the model.  Does R have the ability to run such analyses?  

Regards,

Charles R. Partridge
Evaluation Specialist
Center for Learning Excellence
The John Glenn Institute for Public Service  Public Policy
807 Kinnear Road, Room 214
Columbus, Ohio 43212-1421
Phone: 614.292.2419
FAX: 614.247.6447
Email: [EMAIL PROTECTED]
http://cle.osu.edu

CONFIDENTIALITY NOTICE: This message is intended only for th...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Curve fitting

2006-01-12 Thread Albyn Jones
You haven't told us how you are fitting the model; are you using
nls(), and if so with what initial values?  The models don't make
sense at x=0, due to the inclusion of the log(x) term.  Ignoring that,
you have 5 observations and 5 parameters in your second model. What is
the reason you are including both b*log(x) and c*x terms in the
model?  

regards

albyn
---
On Thu, Jan 12, 2006 at 07:11:12PM +0100, [EMAIL PROTECTED] wrote:
 Hi!
 
 I have a problem of curve fitting.
 
 I use the following data :
 
  - vector of predictor data : 
 0
 0.4
 0.8
 1.2
 1.6
 
 - vector of response data : 
 0.81954
 0.64592
 0.51247
 0.42831
 0.35371
 
  I perform parametric fits using custom equations
 
 when I use this equation :   y  =  yo + K *(1/(1+exp(-(a+b*ln(x)   the 
 fitting result is OK
 but when I use this more general equation :y  =  yo + K 
 *(1/(1+exp(-(a+b*log(x)+c*x  , then I get an aberrant curve!
 
 I don't understand that... The second fitting should be at least as good 
 as the first one because when taking c=0, both equations are identical!
 
 There is here a mathematical phenomenon that I don't understand!could 
 someone help me
 
 Thanks a lot in advance!
 
 Nad?ge 
 
   [[alternative HTML version deleted]]
 

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] t-test for standard deviations

2006-01-12 Thread Berton Gunter
Sorry, Ted:

Google on Brown-Forsythe and Levene's test and you will, indeed, find
that rather robust and powerful t-tests can be used for testing homogeneity
of spreads. In fact, on a variety of accounts, these tests are preferable to
F-tests, which are notoriously non-robust (sensitive to non-normality) and
which should long ago have been banned from statistics texts (IMHO).
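
For reference, a base-R sketch of the Levene/Brown-Forsythe idea with made-up
data (the car package also ships a ready-made version):

set.seed(1)
y <- c(rnorm(20, sd = 1), rnorm(20, sd = 2), rnorm(20, sd = 1.5))
g <- factor(rep(c("A", "B", "C"), each = 20))
d <- abs(y - ave(y, g, FUN = median))   # dispersion variable (Brown-Forsythe uses medians)
anova(aov(d ~ g))                       # ordinary ANOVA on the derived variable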

OTOH, whether one **should** test for homogeneity of spread instead of using
statistical procedures robust to moderate heteroscedasticity is another
question. IMO, and I think on theoretical grounds, that is a better way to
do things.

Best yet is to use balanced designs in which most anything you do is less
affected by any of these deviations from standard statistical assumptions.
But that requires malice aforethought, rather than data dredging ...

Cheers,
Bert

-- Bert Gunter
Genentech Non-Clinical Statistics
South San Francisco, CA
 
The business of the statistician is to catalyze the scientific learning
process.  - George E. P. Box
 
 

 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Ted Harding
 Sent: Thursday, January 12, 2006 10:53 AM
 To: mirko sanpietrucci
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] t-test for standard deviations
 
 On 12-Jan-06 mirko sanpietrucci wrote:
  Dear R-users,
  I am new to the list and I would like to submit (probably)
  a stupid question:
  
  I found in a paper a reference to a t-test for the evaluationg the
  difference between the standard deviations of 2 samples.
  This test is performed in the paper but the methodology is not
  explained and any reference is reported.
  
  Does anyone know where I can find references to this test 
 and if it is
  implemented in R?
  
  Thenks in advance for your help,
  
  Mirko
 
 If the paper says that a
 
 1) t-test
 
 was used for evaluating the difference between the
 
 2) standard deviations
 
 of 2 samples
 
 then I suspect that one or the other of these is a misprint.
 
 To compare standard deviations (more precisely, variances)
 you could use a (1)F-test.
 
 Or you would use a t-test to evaluate the difference between
 the (2)means of 2 samples.
 
 If it is really obscure what was done, perhaps an appropriate
 quotation from the paper would help to ascertain the problem.
 
 Best wishes,
 Ted.
 
 
 E-Mail: (Ted Harding) [EMAIL PROTECTED]
 Fax-to-email: +44 (0)870 094 0861
 Date: 12-Jan-06   Time: 18:52:31
 -- XFMail --
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] bandwidth - Hmise.mixt - ks-package

2006-01-12 Thread mk90-40
Hello,

I want to use Hmise.mixt for finding the optimal bandwidth. To be honest, I
have only poor knowledge of the mathematical background. Using the
help-file, I wrote:

> hopt <- Hmise.mixt(c(0.0,1.5), c(1,1/9), c(0.75,0.25), 100, 0.4)

but got the message:

Error in ((i - 1) * d + 1):(i * d) : NA/NaN Argument

Has anyone an idea, what's wrong?

Thanks!
Mala

--

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Basis of fisher.test

2006-01-12 Thread Ted Harding
I want to ascertain the basis of the table ranking,
i.e. the meaning of extreme, in Fisher's Exact Test
as implemented in 'fisher.test', when applied to RxC
tables which are larger than 2x2.

One can summarise a strategy for the test as

1) For each table compatible with the margins
   of the observed table, compute the probability
   of this table conditional on the marginal totals.

2) Rank the possible tables in order of a measure
   of discrepancy between the table and the null
   hypothesis of no association.

3) Locate the observed table, and compute the sum
   of the probabilties, computed in (1), for this
   table and more extreme tables in the sense of
   the ranking in (2).

The question is: what measure of discrepancy is
used in 'fisher.test' corresponding to stage (2)?

(There are in principle several possibilities, e.g.
value of a Pearson chi-squared, large values being
discrepant; the probability calculated in (1),
small values being discrepant; ... )

?fisher.test says only:

 In the one-sided 2 by 2 cases, p-values are obtained
 directly using the hypergeometric distribution.
 Otherwise, computations are based on a C version of
 the FORTRAN subroutine FEXACT which implements the
 network algorithm developed by Mehta and Patel (1986) and
 improved by Clarkson, Fan  Joe (1993). The FORTRAN
 code can be obtained from
 URL: http://www.netlib.org/toms/643.

I have had a look at this FORTRAN code, and cannot ascertain
it from the code itself. However, there is a Comment to the
effect:

c PRE- Table p-value.  (Output)
c  PRE is the probability of a more extreme table, where
c  'extreme' is in a probabilistic sense.

which suggests that the tables are ranked in order of their
probabilities as computed in (1).

Can anyone confirm definitively what goes on?

With thanks,
Ted.


E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 12-Jan-06   Time: 20:19:02
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Multilevel models with mixed effects in R?

2006-01-12 Thread Doran, Harold
Yes, there are now multiple functions. One is the lmer() function in the
lme4 package. Another is in the nlme package and is the lme()
function. Lmer is the newer version and the syntax has changed just
slightly.  To see samples of the lmer function type the following at
your R command prompt

 library(mlmRev)
 vignette(MlmSoftRev)

This will open a pdf file with examples. You'll need to make sure to
obtain the mlmRev package from cran.
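
As a rough sketch of the lmer call for a growth model -- the data here are
simulated and the names (score, time, student) are placeholders, not anything
from your study:

library(lme4)
set.seed(1)
d <- data.frame(student = factor(rep(1:50, each = 4)),
                time    = rep(0:3, 50))
d$score <- 100 + 5 * d$time + rep(rnorm(50, sd = 2), each = 4) + rnorm(200, sd = 3)
fit <- lmer(score ~ time + (time | student), data = d)   # random intercept and slope
summary(fit)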

You can also see examples of student achievement analyses using these
functions in the following papers

http://cran.r-project.org/doc/Rnews/Rnews_2003-3.pdf
http://cran.r-project.org/doc/Rnews/Rnews_2005-1.pdf 

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Charles Partridge
Sent: Thursday, January 12, 2006 2:54 PM
To: r-help@stat.math.ethz.ch
Subject: [R] Multilevel models with mixed effects in R?

Group,

I am new to R.  In my work as a program evaluator, I am regularly asked
to estimate effect sizes of prevention/intervention and educational
programs on various student outcomes (e.g. academic achievement).  In
many cases, I have access to data over three or more time periods (e.g.
growth in proficiency test scores). 

I usually have multiple independent and dependent variables in each
model along with covariates.  I have historically utilized latent growth
curve structural equation models, but would like to include random
effects in the model.  Does R have the ability to run such analyses?  

Regards,

Charles R. Partridge
Evaluation Specialist
Center for Learning Excellence
The John Glenn Institute for Public Service  Public Policy
807 Kinnear Road, Room 214
Columbus, Ohio 43212-1421
Phone: 614.292.2419
FAX: 614.247.6447
Email: [EMAIL PROTECTED]
http://cle.osu.edu

CONFIDENTIALITY NOTICE: This message is intended only for\ t...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Convert matrix to data.frame

2006-01-12 Thread Tony Plate
When I try converting a matrix to a data frame, it works for me:

> x <- matrix(1:6, ncol=2, dimnames=list(LETTERS[1:3], letters[24:25]))
> data.frame(x)
  x y
A 1 4
B 2 5
C 3 6
> str(data.frame(x))
`data.frame':   3 obs. of  2 variables:
 $ x: int  1 2 3
 $ y: int  4 5 6
>

You can also use as.data.frame() to convert a matrix to a data.frame 
(but note that if colnames are missing from the matrix, as.data.frame() 
constructs different colnames than data.frame() does).

You say it didn't work -- it's difficult to help with such a 
non-specific complaint.  Can you explain exactly how it didn't work for 
you?  (e.g., show the exact error message).

-- Tony Plate

Chia, Yen Lin wrote:
 Hi all,
 
  
 
 I wonder how could I convert a matrix A to a dataframe such that
 whenever I'm running a linear model such lme, I can use A$x1?  I tried
 data.frame(A), it didn't work.  Should I initialize A not as a matrix?
 Thanks.
 
  
 
 Yen Lin
 
 
   [[alternative HTML version deleted]]
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Firths bias correction for log-linear models

2006-01-12 Thread sdfrost
Dear R-Help List,

I'm trying to implement Firth's (1993) bias correction for log-linear models.
Firth (1993) states that such a correction can be implemented by supplementing
the data with a function of h_i, the diagonals from the hat matrix, but doesn't
provide further details. I can see that for a saturated log-linear model,  h_i=1
for all i, hence one just adds 1/2 to each count, which is equivalent to the
Jeffreys prior, but I'd also like to get bias-corrected estimates for other log
linear models. It appears that I need to iterate using GLM, with the weights
option and h_i, which I can get from the function hatvalues. For logistic
regression, this can be performed by splitting up each observation into response
and nonresponse, and using weights as described in Heinze, G. and Schemper, M.
(2002), but I'm unsure of how to implement the analogue for log-linear models. A
procedure using IWLS is described by Firth (1992) in Dodge and Whittaker (1992),
but this book isn't in the local library, and it's $141+ on Amazon. I've tried
looking at the code in the logistf and brlr libraries, but I haven't had any
(successful) ideas. Can anyone help me in describing how to implement this in R?

Thanks!
Simon

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CRAN versions of lme4/Matrix don't appear to work with R 2.1.1

2006-01-12 Thread Spencer Graves
  Did you try upgrading to R 2.2.1?  I just installed R 2.2.1 with the 
latest version of lme4 and Matrix, and they loaded fine for me.
  library(lme4)
Loading required package: Matrix
Loading required package: lattice
  sessionInfo()
R version 2.2.1, 2005-12-20, i386-pc-mingw32

attached base packages:
[1] methods   stats graphics  grDevices utils datasets
[7] base

other attached packages:
  lme4   latticeMatrix
  0.98-1 0.12-11  0.99-6

  Running Windows XP.  (This may be obvious from 'i386-pc-mingw32', but 
I'm not sufficiently aware to know.)

  Doug Bates has worked very hard to create this, and I for one would 
not want to ask him to take the time to make it backward compatible for 
those who for whatever reason haven't yet upgraded to R 2.2.1, 
especially since for most of us it is fairly easy to upgrade.

  Spencer Graves

White, Charles E WRAIR-Wash DC wrote:
 I am currently using R 2.1.1 under Windows and I do not seem to be able
 to load the current versions of lme4/Matrix. I have run
 'update.packages.' I understand this is still experimental software but
 I would like access to a working version.
 
 Thanks.
 
 Chuck
 
 R Output:
 
 
library(lme4)
 
 Loading required package: Matrix
 Error in lazyLoadDBfetch(key, datafile, compressed, envhook) : 
 ReadItem: unknown type 241
 In addition: Warning messages:
 1: package 'lme4' was built under R version 2.3.0 
 2: package 'Matrix' was built under R version 2.3.0 
 Error: package 'Matrix' could not be loaded
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] I think not so hard question

2006-01-12 Thread Mark Leeds
I'm sorry to bother this list so much,
but I haven't programmed in
a while and I'm struggling.
 
I have a vector in R of 1's and -1's,
and I want to use a streak of size Y
to predict that the same value will
be next.
 
So, suppose Y = 3. Then, if there is a streak of three 
ones in a row, I will predict that the next value is
a 1. But if there is a streak of 3 -1's in a row,
then I will predict that a -1 is next. Otherwise,
I don't predict anything.
 
I am really new to R and kind of struggling,
and I was wondering if someone could show me
how to do this?
 
In other words, given a vector of -1's
and 1's, I am unable (I've been trying
for 2 days) to create a new vector that
has the predictions in it at the appropriate
places. Thanks.
 
 


**
This email and any files transmitted with it are confidentia...{{dropped}}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] t-test for standard deviations

2006-01-12 Thread Ted Harding

On 12-Jan-06 Berton Gunter wrote:
 Sorry, Ted:
 
 Google on Brown-Forsythe and Levene's test and you will,
 indeed, find that rather robust and powerful t-tests can be
 used for testing homogeneity of spreads. In fact, on a variety
 of accounts, these tests are preferable to F-tests, which are
 notoriously non-robust (sensitive to non-normality) and
 which should long ago have been banned from statistics tects (IMHO).

Not sure that I would consider either of these as a t-test
as usually understood.

Both are based on deriving a dispersion variable transform
of the observations in each group (squared or absolute deviation
from the mean for Levene, absolute deviation from the median for
Brown-Forsythe), and performing an analysis of variance with
the derived variable.

Granted, in the case of two groups the ANOVA is equivalent to
a squared t-test and one could indeed use a t-test instead
of a 2-group ANOVA to get the directional information as well.
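
To illustrate with made-up data (not anything from the paper), the two-group
version of that construction is just

set.seed(2)
y <- c(rnorm(25, sd = 1), rnorm(25, sd = 2))
g <- factor(rep(c("A", "B"), each = 25))
d <- abs(y - ave(y, g, FUN = median))   # derived dispersion variable
t.test(d ~ g, var.equal = TRUE)         # squared t statistic equals the F below
anova(aov(d ~ g))                       # same p-value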

But I would be surprised to find such a procedure referred to
as a t-test as cited by Mirko. I think it would help if he
told us a bit more about what the paper actually says.

 OTOH, whether one **should** test for homogeneity of spread
 instead of using statistical procedures robust to moderate
 heteroscedascity is another question. IMO, and I think on
 theoretical grounds, that is a better way to do things.
 
 Best yet is to use balanced designs in which most anything
 you do is less affected by any of these deviations from standard
 statistical assumptions.
 But that requires malice aforethought, rather than data dredging ...

Your comments on the merits and advisability of these things
are good -- but not forgetting that it is also a good idea to
have enough understanding of what one is dealing with to be
able to judge what might be best for the case in hand. However,
I'm entirely with you when it comes to uncircumspect habitual
use of standard procedures.

Best wishes,
Ted.

 
 Cheers,
 Bert
 
 -- Bert Gunter
 Genentech Non-Clinical Statistics
 South San Francisco, CA
  
 The business of the statistician is to catalyze the scientific
 learning
 process.  - George E. P. Box
  
  
 
 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Ted Harding
 Sent: Thursday, January 12, 2006 10:53 AM
 To: mirko sanpietrucci
 Cc: r-help@stat.math.ethz.ch
 Subject: Re: [R] t-test for standard deviations
 
 On 12-Jan-06 mirko sanpietrucci wrote:
  Dear R-users,
  I am new to the list and I would like to submit (probably)
  a stupid question:
  
  I found in a paper a reference to a t-test for the evaluationg the
  difference between the standard deviations of 2 samples.
  This test is performed in the paper but the methodology is not
  explained and any reference is reported.
  
  Does anyone know where I can find references to this test 
 and if it is
  implemented in R?
  
  Thenks in advance for your help,
  
  Mirko
 
 If the paper says that a
 
 1) t-test
 
 was used for evaluating the difference between the
 
 2) standard deviations
 
 of 2 samples
 
 then I suspect that one or the other of these is a misprint.
 
 To compare standard deviations (more precisely, variances)
 you could use a (1)F-test.
 
 Or you would use a t-test to evaluate the difference between
 the (2)means of 2 samples.
 
 If it is really obscure what was done, perhaps an appropriate
 quotation from the paper would help to ascertain the problem.
 
 Best wishes,
 Ted.
 
 
 E-Mail: (Ted Harding) [EMAIL PROTECTED]
 Fax-to-email: +44 (0)870 094 0861
 Date: 12-Jan-06   Time: 18:52:31
 -- XFMail --
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! 
 http://www.R-project.org/posting-guide.html

 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html


E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 12-Jan-06   Time: 21:09:19
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] bug with mai , pdf, and heatmap ?

2006-01-12 Thread Ken Termiso

Hi all,

When using heatmap() with a pdf driver, and specifying parameters for mai in 
heatmap, I get a printout of the mai parameters at the top of the pdf...see 
attachment.


This is on win2k pro with R2.2.1

Thanks,
Ken

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] Convert matrix to data.frame

2006-01-12 Thread Chia, Yen Lin
Hi all,

 

I wonder how I could convert a matrix A to a data frame such that
whenever I'm running a linear model such as lme, I can use A$x1?  I tried
data.frame(A); it didn't work.  Should I initialize A not as a matrix?
Thanks.

 

Yen Lin


[[alternative HTML version deleted]]

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] February course *** R/Splus Fundamentals and Programming Techniques

2006-01-12 Thread elvis

   XLSolutions Corporation ([1]www.xlsolutions-corp.com) is proud to
   announce  2-day R/S-plus Fundamentals and Programming
   Techniques in San Francisco: [2]www.xlsolutions-corp.com/Rfund.htm
    San Francisco,   February 16-17

    Seattle,February 20-21

    Boston,February 23-24

    New York, February 27-28
   Reserve your seat now at the early bird rates! Payment due AFTER
   the class
   Course Description:
   This two-day beginner to intermediate R/S-plus course focuses on a
   broad spectrum of topics, from reading raw data to a comparison of R
   and S. We will learn the essentials of data manipulation, graphical
   visualization and R/S-plus programming. We will explore statistical
   data analysis tools,including graphics with data sets. How to enhance
   your plots, build your own packages (librairies) and connect via
   ODBC,etc.
   We will perform some statistical modeling and fit linear regression
   models. Participants are encouraged to bring data for interactive
   sessions
   With the following outline:
   - An Overview of R and S
   - Data Manipulation and Graphics
   - Using Lattice Graphics
   - A Comparison of R and S-Plus
   - How can R Complement SAS?
   - Writing Functions
   - Avoiding Loops
   - Vectorization
   - Statistical Modeling
   - Project Management
   - Techniques for Effective use of R and S
   - Enhancing Plots
   - Using High-level Plotting Functions
   - Building and Distributing Packages (libraries)
   - Connecting; ODBC, Rweb, Orca via sockets and via Rjava
   Email us for group discounts.
   Email Sue Turner: [EMAIL PROTECTED]
   Phone: 206-686-1578
   Visit us: [4]www.xlsolutions-corp.com/training.htm
   Please let us know if you and your colleagues are interested in this
   classto take advantage of group discount. Register now to secure your
   seat!
   Interested in R/Splus Advanced course? email us.
   Cheers,
   Elvis Miller, PhD
   Manager Training.
   XLSolutions Corporation
   206 686 1578
   [5]www.xlsolutions-corp.com
   [EMAIL PROTECTED]

References

   1. http://www.xlsolutions-corp.com/
   2. http://www.xlsolutions-corp.com/Rfund.htm
   3. http://email.secureserver.net/view.php?folder=INBOXuid=2791#Compose
   4. http://www.xlsolutions-corp.com/training.htm
   5. http://www.xlsolutions-corp.com/
   6. http://email.secureserver.net/view.php?folder=INBOXuid=2791#Compose
__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Basis of fisher.test

2006-01-12 Thread Peter Dalgaard
(Ted Harding) [EMAIL PROTECTED] writes:

 I want to ascertain the basis of the table ranking,
 i.e. the meaning of extreme, in Fisher's Exact Test
 as implemented in 'fisher.test', when applied to RxC
 tables which are larger than 2x2.
 
 One can summarise a strategy for the test as
 
 1) For each table compatible with the margins
of the observed table, compute the probability
of this table conditional on the marginal totals.
 
 2) Rank the possible tables in order of a measure
of discrepancy between the table and the null
hypothesis of no association.
 
 3) Locate the observed table, and compute the sum
of the probabilties, computed in (1), for this
table and more extreme tables in the sense of
the ranking in (2).
 
 The question is: what measure of discrepancy is
 used in 'fisher.test' corresponding to stage (2)?
 
 (There are in principle several possibilities, e.g.
 value of a Pearson chi-squared, large values being
 discrepant; the probability calculated in (2),
 small values being discrepant; ... )
 
 ?fisher.test says only:
 
  In the one-sided 2 by 2 cases, p-values are obtained
  directly using the hypergeometric distribution.
  Otherwise, computations are based on a C version of
  the FORTRAN subroutine FEXACT which implements the
  network developed by Mehta and Patel (1986) and
  improved by Clarkson, Fan  Joe (1993). The FORTRAN
  code can be obtained from
  URL: http://www.netlib.org/toms/643.
 
 I have had a look at this FORTRAN code, and cannot ascertain
 it from the code itself. However, there is a Comment to the
 effect:
 
 c PRE- Table p-value.  (Output)
 c  PRE is the probability of a more extreme table, where
 c  'extreme' is in a probabilistic sense.
 
 which suggests that the tables are ranked in order of their
 probabilities as computed in (2).
 
 Can anyone confirm definitively what goes on?

To my knowledge, it is the table probability, according to the
hypergeometric distribution, i.e. the probability of the table given
the marginals, which can be translated to sampling a+b balls without
replacement from a box with a+c white and b+d black balls. 

Playing around with dhyper should be instructive.

(You're right that the two-sided p values are obtained by summing
all smaller or equal table probabilities. This is the traditional way,
but there are alternatives, e.g. tail balancing.)
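
As a small illustration (my own, with a made-up 2x2 table), the two-sided
p-value can be reproduced from dhyper() by summing the probabilities of all
tables with the same margins that are no more probable than the observed one:

m  <- matrix(c(3, 1, 1, 3), nrow = 2)
r1 <- sum(m[1, ]); r2 <- sum(m[2, ]); c1 <- sum(m[, 1])
x  <- max(0, c1 - r2):min(r1, c1)     # possible values of the (1,1) cell
p  <- dhyper(x, r1, r2, c1)           # probability of each such table
p.obs <- dhyper(m[1, 1], r1, r2, c1)
sum(p[p <= p.obs * (1 + 1e-7)])       # two-sided p-value
fisher.test(m)$p.value                # agrees for this table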

-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] bug with mai , pdf, and heatmap ?

2006-01-12 Thread Sundar Dorai-Raj

Ken Termiso wrote:
 Hi all,
 
 When using heatmap() with a pdf driver, and specifying parameters for 
 mai in heatmap, I get a printout of the mai parameters at the top of the 
 pdf...see attachment.
 
 This is on win2k pro with R2.2.1
 
 Thanks,
 Ken
 
 

Not a bug since ?heatmap has a main argument. Thus,

heatmap(..., mai = c(1,2,3,4))

is actually interpreted as

heatmap(..., main = c(1,2,3,4))

due to R's partial matching of the argument list. You should try:

pdf()
par(mai = c(1,2,3,4))
heatmap(...)
dev.off()

Hopefully I've assessed this correctly.

HTH,

--sundar

P.S. See the posting guide regarding attachments to the list.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CRAN versions of lme4/Matrix don't appear to work with R 2.1.1

2006-01-12 Thread White, Charles E WRAIR-Wash DC
I blew away my existing R directory structure, reinstalled R 2.2.1 from
scratch, downloaded lme4 and associated packages from scratch, tried to
load lme4 and got the same error message. Again, I am using the Windows
version of R on XP.

Chuck

R : Copyright 2005, The R Foundation for Statistical Computing
Version 2.2.1  (2005-12-20 r36812)
ISBN 3-900051-07-0

 library(lme4)
Loading required package: Matrix
Error in lazyLoadDBfetch(key, datafile, compressed, envhook) : 
ReadItem: unknown type 241
In addition: Warning messages:
1: package 'lme4' was built under R version 2.3.0 
2: package 'Matrix' was built under R version 2.3.0 
Error: package 'Matrix' could not be loaded


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CRAN versions of lme4/Matrix don't appear to work with R 2.1.1

2006-01-12 Thread Peter Dalgaard
White, Charles E WRAIR-Wash DC [EMAIL PROTECTED] writes:

 I blew away my existing R directory structure, reinstalled R 2.2.1 from
 scratch, downloaded lme4 and associated packages from scratch, tried to
 load lme4 and got the same error message. Again, I am using the Windows
 version of R on XP.

Which CRAN mirror? The Statlib one has been acting up lately.


-- 
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - ([EMAIL PROTECTED])  FAX: (+45) 35327907

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Firths bias correction for log-linear models

2006-01-12 Thread David Firth
On 12 Jan 2006, at 20:54, [EMAIL PROTECTED] wrote:

 Dear R-Help List,

 I'm trying to implement Firth's (1993) bias correction for log-linear 
 models.
 Firth (1993) states that such a correction can be implemented by 
 supplementing
 the data with a function of h_i, the diagonals from the hat matrix, 
 but doesn't
 provide further details. I can see that for a saturated log-linear 
 model,  h_i=1
 for all i, hence one just adds 1/2 to each count, which is equivalent 
 to the
 Jeffrey's prior, but I'd also like to get bias corrected estimates for 
 other log
 linear models. It appears that I need to iterate using GLM, with the 
 weights
 option and h_i, which I can get from the function hatvalues. For 
 logistic
 regression, this can be performed by splitting up each observation 
 into response
 and nonresponse, and using weights as described in Heinze, G. and 
 Schemper, M.
 (2002), but I'm unsure of how to implement the analogue for log-linear 
 models. A
 procedure using IWLS is described by Firth (1992) in Dodge and 
 Whittaker (1992),
 but this book isn't in the local library, and its $141+ on Amazon. 
 I've tried
 looking at the code in the logistf and brlr libraries, but I haven't 
 had any
 (successful) ideas. Can anyone help me in describing how to implement 
 this in R?

I don't recommend the adjusted IWLS approach in practice, because that 
algorithm is only first-order convergent.  It is mainly of theoretical 
interest.

The brlr function (in the brlr package) provides a template for a more 
direct approach in practice.  The central operation there is an 
application of optim(), with objective function
  - (l + (0.5 * log(detinfo)))
in which l is the log likelihood and detinfo is the determinant of the 
Fisher information matrix.  In the case of a Poisson log-linear model, 
the Fisher information is, using standard GLM-type notation, t(X) %*% 
diag(mu) %*% X.  It is straightforward to differentiate this penalized 
log-likelihood function, so (as in brlr) derivatives can be supplied 
for use with a second-order convergent optim() algorithm such as BFGS 
(see ?optim for a reference on the algorithm).
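
A rough numerical sketch of that objective for a small made-up Poisson example
(my own code, with the analytical derivatives omitted, so optim() falls back
on a numerical gradient):

pen.loglik <- function(beta, y, X) {
  eta <- drop(X %*% beta)
  mu  <- exp(eta)
  l   <- sum(y * eta - mu)            # Poisson log-likelihood, up to a constant
  info <- t(X) %*% (mu * X)           # Fisher information t(X) diag(mu) X
  l + 0.5 * log(det(info))
}
set.seed(1)
X <- model.matrix(~ gl(2, 4) + gl(4, 1, 8))   # a small two-way layout
y <- rpois(8, 5)
fit <- optim(rep(0, ncol(X)), function(b) -pen.loglik(b, y, X),
             method = "BFGS")
fit$par                               # penalized (bias-reduced) estimates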

I hope that helps.  Please feel free to contact me off the list if 
anything is unclear.

Kind regards,
David

--
Professor David Firth
http://www.warwick.ac.uk/go/dfirth

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Curve fitting

2006-01-12 Thread Ben Bolker
 ndurand at fr.abx.fr writes:

 
 Hi!
 
 I have a problem of curve fitting.

 
  I perform parametric fits using custom equations
 
 when I use this equation :   y  =  yo + K *(1/(1+exp(-(a+b*ln(x)   the 
 fitting result is OK
 but when I use this more general equation :y  =  yo + K 
 *(1/(1+exp(-(a+b*log(x)+c*x  , then I get an aberrant curve!
 
 I don't understand that... The second fitting should be at least as good 
 as the first one because when taking c=0, both equations are identical!
 

  Can you specify *exactly* what R code you're using?
Are you using nls()?

  You're trying to fit a five-parameter model to
five data points, which is likely to be difficult if
not impossible to do statistically.  Furthermore, your
data points don't have very much information in them
about all the parameters you're trying to estimate --
they are steadily decreasing, with very mild
curvature.  Finally, if these data happen to be
points that you have generated as theoretical
values, without adding noise, nls will give you
problems (see ?nls).

  If you give us more detail about what you're trying
to do we might be able to help (or possibly tell you
that it really can't work ...)

  Ben Bolker

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] edit.data.frame

2006-01-12 Thread Fredrik Lundgren
Dear list,

Sometimes I have huge data.frames and the small spreadsheet-like 
edit.data.frame is quite handy to get an overview of the data. However, 
when I close the editor all data are rolled over the console window, 
which takes time and clutters the window. Is there a way to avoid this?

Fredrik

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Data with no separator

2006-01-12 Thread Jeffrey T. Steedle
I have data in which each row consists of a long string of numbers,
letters, symbols, and blank spaces.  I would like to simply scan in
strings of length 426, but R takes the spaces that occur in the data as
separators.  Is there any way around this?

Thanks,
Jeff Steedle 

-- 
Jeffrey T. Steedle ([EMAIL PROTECTED])
Psychological Studies in Education
Stanford University School of Education

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CRAN versions of lme4/Matrix don't appear to work with R 2.1.1

2006-01-12 Thread Spencer Graves
  It worked fine for me using
http://cran.fhcrc.org/  Fred Hutchinson Cancer Research Center, Seattle, 
WA

  spencer graves

Peter Dalgaard wrote:

 White, Charles E WRAIR-Wash DC [EMAIL PROTECTED] writes:
 
 
I blew away my existing R directory structure, reinstalled R 2.2.1 from
scratch, downloaded lme4 and associated packages from scratch, tried to
load lme4 and got the same error message. Again, I am using the Windows
version of R on XP.
 
 
 Which CRAN mirror? The Statlib one has been acting up lately.
 


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Data with no separator

2006-01-12 Thread Ted Harding
On 12-Jan-06 Jeffrey T. Steedle wrote:
 I have data in which each row consists of a long string of number,
 letters, symbols, and blank spaces.  I would like to simply scan in
 strings of length 426, but R takes the spaces that occur in the data as
 separators.  Is there any way around this?
 
 Thanks,
 Jeff Steedle 

You could use readLines(), perhaps?

  Data-readLines(datafile)

should give you a vector Data of which each element is a
character string which is one line read from your datafile.
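
If the 426-character strings then need to be cut into fields, substring()
works on the whole vector at once; read.fwf() is another option when the
field widths are known. The file name and field positions below are only
placeholders:

Data <- readLines("datafile")
id   <- substring(Data, 1, 10)        # columns 1-10 of every line
rest <- substring(Data, 11, 426)      # the remainder
## or, with known widths:
## df <- read.fwf("datafile", widths = c(10, 416))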

Best wishes,
Ted.


E-Mail: (Ted Harding) [EMAIL PROTECTED]
Fax-to-email: +44 (0)870 094 0861
Date: 12-Jan-06   Time: 22:46:39
-- XFMail --

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CRAN versions of lme4/Matrix don't appear to work with R 2.1.1

2006-01-12 Thread White, Charles E WRAIR-Wash DC

I made a typographical error, since I am running R 2.2.1. I wouldn't
dream of asking Professor Bates to make things backwards compatible to
an old version. Since the packages are loading fine for you, I will
assume there was a download error with the packages I have, uninstall
them, and try again.

Chuck

-Original Message-
From: Spencer Graves [mailto:[EMAIL PROTECTED] 
Sent: Thursday, January 12, 2006 4:08 PM
To: White, Charles E WRAIR-Wash DC
Cc: r-help@stat.math.ethz.ch; Douglas Bates
Subject: Re: [R] CRAN versions of lme4/Matrix don't appear to work with
R 2.1.1

  Did you try upgrading to R 2.2.1?  I just installed R 2.2.1
with the 
latest version of lme4 and Matrix, and they loaded fine for me.


  Doug Bates has worked very hard to create this, and I for one
would 
not want to ask him to take the time to make it backward compatible for 
those who for whatever reason haven't yet upgraded to R 2.2.1, 
especially since for most of us it is fairly easy to upgrade.

  Spencer Graves

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CRAN versions of lme4/Matrix don't appear to work with R 2.1.1

2006-01-12 Thread White, Charles E WRAIR-Wash DC
1) It was the Statlib mirror from which I was downloading. If I don't get any 
more interesting messages before I return to work in the morning, I'll try 
installing off of a different mirror.

2) On general principles, I just updated the lme4 related packages on the SuSE 
10 machine that I run at home (NC mirror). That worked fine.

Chuck

-Original Message-
From: [EMAIL PROTECTED] on behalf of Peter Dalgaard
Sent: Thu 1/12/2006 4:56 PM

Which CRAN mirror? The Statlib one has been acting up lately.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] CRAN versions of lme4/Matrix don't appear to work with R 2.1.1

2006-01-12 Thread Mark Andersen
Hi, all, 

This is interesting, since the problem I posted on a week ago was resolved
by downloading (the current versions of) R and ctv from the Austria site;
the UCLA site, from which I had downloaded before, had an old version of R
labeled as the current version, and it was not compatible with the version
of ctv that they had.

Moral of story: It's not just the Statlib site that is (or at least can be)
unreliable.

Is it common knowledge that there are other mirrors which do not provide a
very good reflection?

Regards,
Mark A.

Dr. Mark C. Andersen
Associate Professor
Department of Fishery and Wildlife Sciences
New Mexico State University
Las Cruces NM 88003-0003
phone: 505-646-8034
fax: 505-646-1281

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of White, Charles E
WRAIR-Wash DC
Sent: Thursday, January 12, 2006 3:43 PM
To: Peter Dalgaard
Cc: Douglas Bates; r-help@stat.math.ethz.ch; Spencer Graves
Subject: Re: [R] CRAN versions of lme4/Matrix don't appear to work with R
2.1.1

1) It was the Statlib mirror from which I was downloading. If I don't get
any more interesting messages before I return to work in the morning, I'll
try installing off of a different mirror.

2) On general principles, I just updated the lme4 related packages on the
SuSE 10 machine that I run at home (NC mirror). That worked fine.

Chuck

-Original Message-
From: [EMAIL PROTECTED] on behalf of Peter Dalgaard
Sent: Thu 1/12/2006 4:56 PM

Which CRAN mirror? The Statlib one has been acting up lately.


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide!
http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] jointprior in deal package

2006-01-12 Thread Spencer Graves
  In case you haven't already solved this problem (I have seen no 
replies to this post), I will offer a suggestion.  First, I've never 
used the deal package.  I installed it and tried the example provided 
with the documentation for jointprior.  It seemed to return something 
sensible -- certainly NOT an error message.

  Have you considered making a local copy of jointprior, then 
invoking 'debug(jointprior)', then walking through the function line by 
line (as described in the 'debug' documentation)?  If you do this, you 
will find exactly which command generates the error 
message.  The 'debug' procedure also allows you to look at any object in 
the local environment created by jointprior.  By doing this, you might 
get a better idea of the problem.
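
For instance, something along these lines (just a sketch; it debugs the 
installed function directly, and tor.nw is the network object from your 
own session):

  library(deal)
  debug(jointprior)
  tor.prior <- jointprior(tor.nw)   # now runs line by line; 'n' steps, 'Q' quits
  undebug(jointprior)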

  Another alternative would be to consider the differences between the 
example provided with the documentation and your tor.nw.  You may be 
able to identify the problem from that.  If that failed, I would then 
try to modify tor.nw to produce the simplest possible example I could 
think of that would still produce the error message.  In the course of 
doing that, you may be able to resolve the issue.  If not, if you send 
your simplest possible example to this list, someone else might be able 
to help you.

  A reproducible example is nearly always easier to diagnose than a 
relatively vague description like you provided.  To get an answer to the 
question that you asked, (a) your question must reach someone who has 
used the deal package and knows that specific error message and (b) that 
person must have the time and interest to respond.  The probability of 
that happening may be quite low.  By contrast, if you submit a toy 
example that doesn't quite work, anyone willing to spend a few 
minutes with the deal package can study your example and possibly take 
it to the next step.

  In general, I believe that providing a simple reproducible example 
probably increases by a couple of orders of magnitude the odds of 
receiving a useful answer quickly.

  hope this helps.
  spencer graves
  PLEASE do read the posting guide! 
http://www.R-project.org/posting-guide.html

Tim Smits wrote:

 Dear all,
 
 I recently started using the deal package for learning Bayesian 
 networks. When using the jointprior function on a particular dataset, I 
 get the following message:
  tor.prior <- jointprior(tor.nw)
 Error in array(1, Dim) : 'dim' specifies too large an array
 
 What is the problem? How can I resolve it?
 
 Thanks,
 Tim
 
 Disclaimer: http://www.kuleuven.be/cwis/email_disclaimer.htm
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] R - Wikis and R-core

2006-01-12 Thread Henrik Bengtsson
[EMAIL PROTECTED] wrote:
 Hello Martin and others,
 
 I am happy with this decision. I'll look a little bit at this next week.
 
 Best,
 
 Philippe Grosjean
 
 
We've had a small review time within R-core on this topic,
and would like to state the following:

--
The R-core team welcomes proposals to develop an R-wiki.

- We would consider linking a very small number of Wikis (ideally one)
  from www.r-project.org and offering an address in the r-project.org
domain (such as 'wiki.r-project.org').

- The core team has no support time to offer, and would be looking for
  a medium-term commitment from a maintainer team for the Wiki(s).

- Suggestions for the R documentation would best be filtered through the
  Wiki maintainers, who could e.g. supply suggested patches during the
  alpha phase of an R release.
--

Our main concerns have been about ensuring the quality of such extra
documentation projects, hence the 2nd point above.
Several of our more general (not mainly R) experiences have been
of outdated web pages which continue to be used as a
reference when their advice has long been superseded.
I think it's very important to try to ensure that this won't
happen with an R Wiki.

[Tried to send the following a few days ago, but had a problem with my 
connection:]

What about adding a best before date on Wiki pages and letting moderators 
extend such dates (by a simple click)?  If the date for a page is not 
updated, there will be a warning on that page telling the reader that 
the content might not be fully valid.

MediaWiki is a good solution because there you can write equations in 
LaTeX, which are generated as MathML(?) and/or bitmap images. This 
feature might be in other wiki systems too, I don't know.

That's my $0.02

Henrik

Martin Maechler, ETH Zurich


PhGr == Philippe Grosjean [EMAIL PROTECTED]
on Sun, 8 Jan 2006 17:00:44 +0100 (CET) writes:

PhGr Hello all, Sorry for not taking part in this
PhGr discussion earlier, and for not answering Detlef
PhGr Steuer, Martin Maechler, and others that asked more
PhGr direct questions to me. I am away from my office and
PhGr my computer until the 16th of January.

PhGr Just quick and partial answers: 1) I did not know
PhGr about the Hamburg RWiki. But I would be happy to merge
PhGr both, one way or the other, as Detlef suggests.

PhGr 2) I chose DokuWiki as the best engine after a
PhGr careful comparison of various Wiki engines. It is the
PhGr best one, as far as I know, for the purpose of
PhGr writing software documentation and similar
PhGr pages. There is an extensive and clearly presented
PhGr comparison of many Wiki engines at:
PhGr http://www.wikimatrix.org/.

PhGr 3) I started to change DokuWiki (addition of various
PhGr plugins, addition of R code syntax coloring with
PhGr GESHI, etc...). So, it goes well beyond all current
PhGr Wiki engines regarding its suitability to present R
PhGr stuff.

PhGr 4) The reason I did this is that I think the Wiki
PhGr format could be of wider use. I plan to change the
PhGr DokuWiki syntax a little bit, so that it works with
PhGr plain .R code files (Wiki part is simply embedded in
PhGr commented lines, and the rest is recognized and
PhGr formatted as R code by the Wiki engine). That way, the
PhGr same Wiki document can either rendered by the Wiki
PhGr engine for a nice presentation, or sourced in R
PhGr indifferently.

PhGr 5) My last idea is to add an Rpad engine to the Wiki,
PhGr so that one could play with R code presented in the
PhGr Wiki pages and see the effects of changes directly in
PhGr the Wiki.

PhGr 6) Regarding the content of the Wiki, it would be
PhGr nice to propose to the authors of various existing
PhGr documents to put them in Wiki form. Something like
PhGr Statistics with R
PhGr (http://zoonek2.free.fr/UNIX/48_R/all.html) is written
PhGr in a way that stimulates additions to pages in
PhGr perpetual construction, if it was presented in a Wiki
PhGr form. It is licensed as Creative Commons
PhGr Attribution-NonCommercial-ShareAlike 2.5 license, that
PhGr is, exactly the same one as DokuWiki that I choose for
PhGr R Wiki. Of course, I plan to ask its author to do so
PhGr before putting its hundreds of very interesting pages
PhGr on the Wiki... I think it is vital to have already
PhGr something in the Wiki, in order to attract enough
PhGr readers, and then enough contributors!

PhGr 7) Regarding spamming and vandalism, DokuWiki allows
PhGr one to manage rights and users, even individually for
PhGr pages. I think it would be fine to lock pages that
PhGr reach a certain maturity (read-only / editable by
PhGr selected users only), with a link to a 

Re: [R] CRAN versions of lme4/Matrix don't appear to work with R 2.1.1

2006-01-12 Thread Spencer Graves
  I don't know, but I had a similar problem a few weeks ago with 
one or both of the California sites (I don't remember which now).  The 
problems disappeared after I switched to http://cran.fhcrc.org/ (Fred 
Hutchinson Cancer Research Center, Seattle, WA).
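
Switching is just a matter of pointing the session at the other 
mirror, e.g. (a sketch; the URL is the one above, and any packages 
would do):

  options(repos = c(CRAN = "http://cran.fhcrc.org/"))
  install.packages(c("Matrix", "lme4"))
  ## or choose interactively from the mirror list:
  chooseCRANmirror()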

  spencer graves

Mark Andersen wrote:

 Hi, all, 
 
 This is interesting, since the problem I posted on a week ago was resolved
 by downloading (the current versions of) R and ctv from the Austria site;
 the UCLA site, from which I had downloaded before, had an old version of R
 labeled as the current version, and it was not compatible with the version
 of ctv that they had.
 
 Moral of story: It's not just the Statlib site that is (or at least can be)
 unreliable.
 
 Is it common knowledge that there are other mirrors which do not provide a
 very good reflection?
 
 Regards,
 Mark A.
 
 Dr. Mark C. Andersen
 Associate Professor
 Department of Fishery and Wildlife Sciences
 New Mexico State University
 Las Cruces NM 88003-0003
 phone: 505-646-8034
 fax: 505-646-1281
 
 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of White, Charles E
 WRAIR-Wash DC
 Sent: Thursday, January 12, 2006 3:43 PM
 To: Peter Dalgaard
 Cc: Douglas Bates; r-help@stat.math.ethz.ch; Spencer Graves
 Subject: Re: [R] CRAN versions of lme4/Matrix don't appear to work with R
 2.1.1
 
 1) It was the Statlib mirror from which I was downloading. If I don't get
 any more interesting messages before I return to work in the morning, I'll
 try installing off of a different mirror.
 
 2) On general principles, I just updated the lme4 related packages on the
 SuSE 10 machine that I run at home (NC mirror). That worked fine.
 
 Chuck
 
 -Original Message-
 From: [EMAIL PROTECTED] on behalf of Peter Dalgaard
 Sent: Thu 1/12/2006 4:56 PM
 
 Which CRAN mirror? The Statlib one has been acting up lately.
 
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide!
 http://www.R-project.org/posting-guide.html
 
 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] hypothesis testing for rank-deficient linear models

2006-01-12 Thread dhinds
[EMAIL PROTECTED] wrote:
 Take the following example:

 a <- rnorm(100)
 b <- trunc(3*runif(100))
 g <- factor(trunc(4*runif(100)),labels=c('A','B','C','D'))
 y <- rnorm(100) + a + (b+1) * (unclass(g)+2)
...

Here's a cleaned-up function to compute estimable within-group effects
for rank deficient models.  For the above data, it could be invoked
as:

 m <- lm(y~a+b*g, subset=(b==0 | g!='B'))
 subgroup.effects(m, 'b',  g=c('A','B','C','D'))
  g Estimate  StdError  t.value  p.value
1 A 2.779167 0.4190213  6.63252 4.722978e-09
2 B   NANA   NA   NA
3 C 4.572431 0.3074402 14.87258 6.226445e-24
4 D 5.920809 0.3502251 16.90572 3.995266e-27

Again, I'm not sure whether this is a good approach, or whether there
is an easier way using existing R functions.  One problem is figuring
out exactly which terms are not estimable from the available data.  My
hack using alias() is not satisfactory, and I've already run into cases
where it fails, but I'm having trouble coming up with a more general,
correct test.

-- David Hinds



subgroup.effects <- function(model, term, ...)
{
    ## refit the model with treatment contrasts based at row n of the
    ## grid, so that 'term' is the effect within that subgroup
    my.coef <- function(n)
    {
        contr <- lapply(names(args), function(i)
            contr.treatment(args[[i]], unclass(gr[n,i])))
        names(contr) <- names(args)
        u <- update(model, formula=model$formula,
            data=model$data, contrasts=contr)
        uc <- coef(summary(u))[term,]
        ## mark the estimate as NA when the refit is rank deficient
        ## and aliasing is involved
        if (any(is.na(coef(u))) &&
            any(!is.na(alias(u)$Complete)))
            uc[1:4] <- NA
        uc
    }
    args <- list(...)
    gr <- expand.grid(...)
    ## one refit per grid row, collected into a data frame
    d <- data.frame(gr, t(sapply(1:nrow(gr), my.coef)))
    names(d) <- c(names(gr),'Estimate','StdError','t.value','p.value')
    d
}

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Basis of fisher.test

2006-01-12 Thread Prof Brian Ripley
On Thu, 12 Jan 2006 [EMAIL PROTECTED] wrote:

 I want to ascertain the basis of the table ranking,
 i.e. the meaning of 'extreme', in Fisher's Exact Test
 as implemented in 'fisher.test', when applied to RxC
 tables which are larger than 2x2.

 One can summarise a strategy for the test as

 1) For each table compatible with the margins
   of the observed table, compute the probability
   of this table conditional on the marginal totals.

 2) Rank the possible tables in order of a measure
   of discrepancy between the table and the null
   hypothesis of no association.

 3) Locate the observed table, and compute the sum
    of the probabilities, computed in (1), for this
   table and more extreme tables in the sense of
   the ranking in (2).

 The question is: what measure of discrepancy is
 used in 'fisher.test' corresponding to stage (2)?

 (There are in principle several possibilities, e.g.
 value of a Pearson chi-squared, large values being
  discrepant; the probability calculated in (1),
 small values being discrepant; ... )

 ?fisher.test says only:

[The following is not a quote from a current version of R.]

 In the one-sided 2 by 2 cases, p-values are obtained
 directly using the hypergeometric distribution.
 Otherwise, computations are based on a C version of
 the FORTRAN subroutine FEXACT which implements the
 network developed by Mehta and Patel (1986) and
 improved by Clarkson, Fan & Joe (1993). The FORTRAN
 code can be obtained from
 URL: http://www.netlib.org/toms/643.

No, it *also* says

  Two-sided tests are based on the probabilities of the tables, and
  take as 'more extreme' all tables with probabilities less than or
  equal to that of the observed table, the p-value being the sum of
  such probabilities.

which answers the question (there are only two-sided tests for such 
tables).
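
The same criterion is used for the two-sided 2 by 2 case, where it is 
easy to check by hand against dhyper(); a sketch with a made-up table 
(the small factor just guards against floating-point ties in the 
probabilities):

  x <- matrix(c(3, 1, 1, 3), 2)               # made-up 2x2 table
  m <- sum(x[1, ]); n <- sum(x[2, ]); k <- sum(x[, 1])
  a <- max(0, k - n):min(m, k)                # possible (1,1) cells given the margins
  p.all <- dhyper(a, m, n, k)                 # conditional probability of each table
  p.obs <- dhyper(x[1, 1], m, n, k)           # probability of the observed table
  sum(p.all[p.all <= p.obs * (1 + 1e-07)])    # tables no more probable than the observed one
  fisher.test(x)$p.value                      # should give the same value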

Now, what does the posting guide say about stating the R version and 
updating before posting?

-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] edit.data.frame

2006-01-12 Thread Prof Brian Ripley
On Thu, 12 Jan 2006, Fredrik Lundgren wrote:

 Sometimes I have huge data.frames and the small spreadsheetlike
 edit.data.frame is quite handy to get an overview of the data. However,
 when I close the editor all data are rolled over the console window,
 which takes time and clutters the window. Is there a way to avoid this?

If you mean printed to the R terminal/console, assign the result or use 
invisible(edit(object)).
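
That is, roughly (a sketch, with 'dat' standing in for the big data frame):

  dat <- edit(dat)       # keep any edits; nothing is echoed to the console
  invisible(edit(dat))   # just browse; the result is discarded and nothing is echoed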


-- 
Brian D. Ripley,  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UKFax:  +44 1865 272595

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] edit.data.frame

2006-01-12 Thread ronggui
I think fix(data.frame.name) is the best way.
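
(fix() is essentially edit() followed by assigning the result back, so 
nothing is echoed either; 'mydata' below stands in for your data frame:)

  fix(mydata)   # roughly the same as  mydata <- edit(mydata)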

2006/1/13, Prof Brian Ripley [EMAIL PROTECTED]:
 On Thu, 12 Jan 2006, Fredrik Lundgren wrote:

  Sometimes I have huge data.frames and the small spreadsheetlike
  edit.data.frame is quite handy to get an overview of the data. However,
  when I close the editor all data are rolled over the console window,
  which takes time and clutters the window. Is there a way to avoid this?

 If you mean printed to the R terminal/console, assign the result or use
 invisible(edit(object)).


 --
 Brian D. Ripley,  [EMAIL PROTECTED]
 Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
 University of Oxford, Tel:  +44 1865 272861 (self)
 1 South Parks Road, +44 1865 272866 (PA)
 Oxford OX1 3TG, UKFax:  +44 1865 272595

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html



--
黄荣贵
Department of Sociology
Fudan University

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] Taking code from packages

2006-01-12 Thread Ales Ziberna
Hello!

I am currently in the process of creating (my first) package, which (when
ready) I intend to publish to CRAN. In the process of creating this package
I have taken some code from existing packages. I have actually copied parts
of functions into new functions. This code is usually something very basic,
such as the Rand index. What is the proper procedure for this?

Since most of R (and also the packages I have taken code from) is published
under the GPL, I think this should be OK. However, I do not know whether:
1.  I should still ask the authors of the packages for permission, or at
least notify them.
2.  I should add references to the functions (and packages) from which I
took the code, or only to the references they use.

What about code that was sent to the list, usually as a response
to one of my problems? I assume that in this case it is best to consult the
author?

Any comments and opinions are very welcome!

Best regards,
Ales Ziberna

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html