[R] Using odfWeave to Produce Tables with Right-aligned Cells

2007-08-24 Thread Rick Bilonick
I cannot figure out how to use odfWeave to produce tables with
right-aligned columns. None of the examples shows this, and I'm unsure
how to achieve it. Could someone share a simple example?

Rick B.
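[Editor's note: no answer appears in the archive. The following untested sketch shows one plausible approach via odfWeave's style-definition system; the style names ("ttRight", "ArialNormal", "cellText") and the "textAlign" property are assumptions about odfWeave's API, not confirmed from the thread.]

```r
# Untested sketch: clone an existing paragraph style, make it
# right-aligned, and tell odfWeave to use it for table cell text.
# Style names and the textAlign property are assumptions.
library(odfWeave)

defs <- getStyleDefs()
defs$ttRight <- defs$ArialNormal      # copy a default paragraph style
defs$ttRight$textAlign <- "right"     # change its alignment
setStyleDefs(defs)

current <- getStyles()
current$cellText <- "ttRight"         # apply it to table cells
setStyles(current)
# odfTable(df) in a subsequent code chunk should then produce
# right-aligned cell contents.
```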

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] odfWeave - How to Insert eps rather than png

2007-07-18 Thread Rick Bilonick
If you create plots and use odfWeave to create an odf text document, the
default for figures is png bitmap graphics. The only way I've found to
insert eps graphics is to first create the eps graphic using the
postscript driver and then use odfInsertPlot. I've tried to use
getImageDefs and setImageDefs but I get an empty plot. Could someone
show an example?

Rick B.



[R] Embedding Fonts in eps Files - Required by Publisher

2007-04-13 Thread Rick Bilonick
The publisher requires embedded fonts and the eps files that R produces
don't pass its tests. How can I force the fonts (just the default
fonts) to be embedded? embedFonts seems to embed only unusual fonts,
not the default ones used in a simple eps generated with "postscript".

I've tried:

ps2pdf14 -dPDFSETTINGS=/prepress bland-altman_v2.eps bland-altman_v2.pdf
pdftops -eps bland-altman_v2.pdf bland-altman_v2_embed.eps

but the box size is wrong and I don't know how to correct it - if I
correct the bounding box in bland-altman_v2_embed.eps, the size is
correct but not all of the figure shows. The reference I found mentions
fixing the "box size" but doesn't say how this is done.

I would appreciate any insight on this.

Rick B.
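[Editor's note: one avenue not tried in the post is that R's embedFonts() passes extra options straight through to Ghostscript, so the font embedding and the bounding-box handling can be attempted in a single step. This is an untested sketch; -dEPSCrop is a Ghostscript flag whose effect on the box varies by gs version.]

```r
# Untested sketch: embed fonts via R's Ghostscript wrapper, asking gs
# to crop to the EPS bounding box so the figure size is preserved.
# (-dEPSCrop behaviour depends on the installed Ghostscript version.)
embedFonts("bland-altman_v2.eps",
           outfile = "bland-altman_v2_embed.eps",
           options = "-dEPSCrop")
```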



Re: [R] Installing Package rgl - Compilation Fails - Summary

2007-02-20 Thread Rick Bilonick
Summarizing:

I'm running R 2.4.1 on a current FC6 32-bit system. In order to have the
rgl R package install, I needed to install both mesa-libGLU-devel (FC6
version is 6.5.1-9) and libXext-devel (FC6) rpm packages. Thanks to
everyone who commented.

Rick B.



Re: [R] Installing Package rgl - Compilation Fails

2007-02-20 Thread Rick Bilonick
On Mon, 2007-02-19 at 15:11 -0600, Ranjan Maitra wrote:
> The error is different now. It now cannot find Xext library. Do a yum search 
> on this and install that.
> 
> yum provides libXext
> 
> which will give you the package Xext which needs to be installed.
> 
> You may also need to install the libXext-devel.i386 package.
> 
> HTH.
> Ranjan
> 
> 

Thanks. Installing libXext-devel allowed rgl to install.

Rick B.



Re: [R] Installing Package rgl - Compilation Fails

2007-02-19 Thread Rick Bilonick
On Mon, 2007-02-19 at 19:56 +, Oleg Sklyar wrote:
> Check again your error message:
> 
> opengl.hpp:24:20: error: GL/glu.h: No such file or directory
> 
> you need to install
> 
> mesa-libGLU-devel FC6 version is 6.5.1-7
> 
> which will provide development files for glut3. Needless to say the 
> above will probably pull in some dependencies and (-devel) means it will 
> install *.h files as well. Start your FC package manager and search for 
> "GLU", install the above and try again.
> 
> Best,
>   Oleg
> 

I installed a slightly newer version (the one that yum found):

mesa-libGLU-devel-6.5.1-9.fc6

but it still fails (I'm installing as root):

> install.packages("rgl")
--- Please select a CRAN mirror for use in this session ---
Loading Tcl/Tk interface ... done
trying URL 'http://lib.stat.cmu.edu/R/CRAN/src/contrib/rgl_0.70.tar.gz'
Content type 'application/x-gzip' length 705556 bytes
opened URL


Deleted a bunch of lines



Disposable.hpp:13: warning: ‘struct IDisposeListener’ has virtual
functions but non-virtual destructor
gui.hpp:56: warning: ‘class gui::WindowImpl’ has virtual functions but
non-virtual destructor
gui.hpp:90: warning: ‘class gui::GUIFactory’ has virtual functions but
non-virtual destructor
g++ -shared -L/usr/local/lib -o rgl.so api.o Background.o BBoxDeco.o
Color.o device.o devicemanager.o Disposable.o FaceSet.o fps.o geom.o
gl2ps.o glgui.o gui.o Light.o LineSet.o LineStripSet.o Material.o math.o
osxgui.o osxlib.o par3d.o pixmap.o PointSet.o PrimitiveSet.o QuadSet.o
RenderContext.o render.o rglview.o scene.o select.o Shape.o SphereMesh.o
SphereSet.o SpriteSet.o String.o Surface.o TextSet.o Texture.o
TriangleSet.o Viewpoint.o win32gui.o win32lib.o x11gui.o x11lib.o -L
-lX11 -lXext -lGL -lGLU -L/usr/lib -lpng12  -L/usr/lib/R/lib -lR
/usr/bin/ld: cannot find -lXext
collect2: ld returned 1 exit status
make: *** [rgl.so] Error 1
chmod: cannot access `/usr/lib/R/library/rgl/libs/*': No such file or
directory
ERROR: compilation failed for package 'rgl'
** Removing '/usr/lib/R/library/rgl'

The downloaded packages are in
/tmp/RtmpMc94yC/downloaded_packages
Warning message:
installation of package 'rgl' had non-zero exit status in:
install.packages("rgl") 


Rick B.



[R] Installing Package rgl - Compilation Fails

2007-02-19 Thread Rick Bilonick
I'm running  R 2.4.1  (with the latest versions of all packages) on an
FC6 32-bit system. When I try to install the rgl package, compilation
fails:

> install.packages("rgl")
--- Please select a CRAN mirror for use in this session ---
Loading Tcl/Tk interface ... done
trying URL 'http://lib.stat.cmu.edu/R/CRAN/src/contrib/rgl_0.70.tar.gz'
Content type 'application/x-gzip' length 705556 bytes
opened URL
==
downloaded 689Kb

* Installing *source* package 'rgl' ...
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables... 
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
checking how to run the C preprocessor... gcc -E
checking for X... libraries , headers 
checking for libpng-config... yes
configure: using libpng-config
configure: using libpng dynamic linkage
configure: creating ./config.status
config.status: creating src/Makevars
** libs
g++ -I/usr/lib/R/include -I/usr/lib/R/include -I -DHAVE_PNG_H
-I/usr/include/libpng12 -Iext -I/usr/local/include -fpic  -O2 -g
-pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector
--param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic
-fasynchronous-unwind-tables -c api.cpp -o api.o
In file included from glgui.hpp:9,
 from gui.hpp:11,
 from rglview.h:10,
 from Device.hpp:11,
 from DeviceManager.hpp:9,
 from api.cpp:14:
opengl.hpp:24:20: error: GL/glu.h: No such file or directory
Disposable.hpp:13: warning: ‘struct IDisposeListener’ has virtual
functions but non-virtual destructor
types.h:77: warning: ‘class DestroyHandler’ has virtual functions but
non-virtual destructor
gui.hpp:56: warning: ‘class gui::WindowImpl’ has virtual functions but
non-virtual destructor
gui.hpp:90: warning: ‘class gui::GUIFactory’ has virtual functions but
non-virtual destructor
pixmap.h:39: warning: ‘class PixmapFormat’ has virtual functions but
non-virtual destructor
api.cpp: In function ‘void rgl_user2window(int*, int*, double*, double*,
double*, double*, int*)’:
api.cpp:764: error: ‘gluProject’ was not declared in this scope
api.cpp: In function ‘void rgl_window2user(int*, int*, double*, double*,
double*, double*, int*)’:
api.cpp:792: error: ‘gluUnProject’ was not declared in this scope
make: *** [api.o] Error 1
chmod: cannot access `/usr/lib/R/library/rgl/libs/*': No such file or
directory
ERROR: compilation failed for package 'rgl'
** Removing '/usr/lib/R/library/rgl'

The downloaded packages are in
/tmp/RtmpJY8uNp/downloaded_packages
Warning message:
installation of package 'rgl' had non-zero exit status in:
install.packages("rgl") 

I was able to install this on an 64-bit system running FC4 and R 2.4.1.

Any ideas on why it fails on FC6?

Rick B.



[R] Autocorrelated Binomial

2007-02-01 Thread Rick Bilonick
I need to generate autocorrelated binary data. I've found references to
the IEKS package but none of the web pages currently exist. Does anyone
know where I can find this package or suggest another package?

Rick B.
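[Editor's note: the IEKS package appears to be unavailable. One standard workaround, sketched below and untested, is to threshold a latent Gaussian AR(1) series so each binary value has a chosen marginal probability while inheriting the serial dependence; this is a generic technique, not the IEKS method.]

```r
# Untested sketch: autocorrelated 0/1 data by thresholding a latent
# Gaussian AR(1) series.  rho and p are illustrative choices.
set.seed(1)
n   <- 200
rho <- 0.7   # AR(1) coefficient of the latent series
p   <- 0.3   # target marginal P(X = 1)

z <- as.numeric(arima.sim(model = list(ar = rho), n = n))
# stationary sd of the AR(1) series with unit innovations:
cut <- qnorm(1 - p, mean = 0, sd = sqrt(1 / (1 - rho^2)))
x <- as.integer(z > cut)
mean(x)   # should be near p
```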



[R] Obtaining Estimates of the Random Effects Parameters

2006-12-13 Thread Rick Bilonick
I'm running simulations using lme and sometimes the estimated
variance-covariance matrix is not positive definite so that the
intervals function won't work for the random effect coefficients. I've
tried varcomp from the ape package but this does not return all the
coefficients. How can I extract the random effect coefficients without
using intervals?

Rick B.
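[Editor's note: no reply appears in the archive. A hedged sketch: nlme's VarCorr() returns the estimated variance components directly, without the approximate-covariance computation that makes intervals() fail; object and component names below are placeholders.]

```r
# Untested sketch: extract variance components without intervals().
# VarCorr() returns a character matrix with "Variance" and "StdDev"
# columns; non-numeric header rows become NA under as.numeric().
library(nlme)
vc <- VarCorr(fit.lme)                       # fit.lme is a placeholder
vc                                           # inspect all components
suppressWarnings(as.numeric(vc[, "Variance"]))  # numeric estimates (with NAs
                                                # for the grouping-header rows)
```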



[R] Skipping Results with Errors in a Loop

2006-12-13 Thread Rick Bilonick
I'm estimating models using lme in a for loop. Sometimes the model
doesn't converge or there are other problems. This causes the evaluation
to stop prematurely. I can't remember the name of the function I need
to allow the loop to continue to the end. Could someone remind me of
it? I've tried searching but haven't hit upon the right function.

Rick B.
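[Editor's note: the function being hinted at is almost certainly try() (or the more structured tryCatch()). A minimal untested sketch of the loop pattern, with a placeholder model formula and data list:]

```r
# Untested sketch: wrap each fit in try() so a convergence failure is
# recorded instead of aborting the loop.  Formula and data names are
# placeholders.
library(nlme)
results <- vector("list", length(datasets))
for (i in seq_along(datasets)) {
  fit <- try(lme(y ~ x, random = ~ 1 | id, data = datasets[[i]]),
             silent = TRUE)
  # store NULL for failed fits, the model object otherwise
  results[[i]] <- if (inherits(fit, "try-error")) NULL else fit
}
```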



[R] groupedData Error Using outer=TRUE

2006-12-07 Thread Rick Bilonick
I'm using groupedData from nlme. I set up a groupedData data.frame with
outer=~group1. When I try to plot with outer=TRUE, I get "subscript out
of bounds." This happens most of the time. When it works, I get
spaghetti-type plots for comparing groups. But I don't understand why it
doesn't usually work.

> longa.mod.1.gd <- groupedData(mod1.logit~time|name/eye, outer=~group1, data=longa.mod.1)
> plot(longa.mod.1.gd)
> plot(longa.mod.1.gd,outer=TRUE)
Error in attribs[["outer"]][[displayLevel]] :
subscript out of bounds

What am I doing wrong?

Rick B.



[R] Variance Functions in lme

2006-11-09 Thread Rick Bilonick
Using the weight argument with a variance function in lme (nlme), you
can allow for heteroscedasticity of the within-group error. Is there a
way to do this for the other variance components? For example, suppose
you had subjects, days nested within subjects, and visits nested within
days within subjects (a fully nested two-way design) and you had, say
men and women subjects. Could you allow for differences in the variance
components for men and women?

Rick B.
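[Editor's note: no reply appears in the archive. For the within-group errors the standard tool is varIdent(); whether the higher-level variance components can be stratified the same way is exactly the open question here. A hedged, untested sketch of the within-group part, with placeholder variable names:]

```r
# Untested sketch: a separate within-group error variance per sex via
# varIdent().  This stratifies only the residual variance; the random
# effects for subject and day keep a single variance each.
library(nlme)
fit <- lme(y ~ sex,
           random  = ~ 1 | subject/day,          # fully nested design
           weights = varIdent(form = ~ 1 | sex), # residual var by sex
           data    = dat)
```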



Re: [R] Variance Component/ICC Confidence Intervals via Bootstrap or Jackknife

2006-10-29 Thread Rick Bilonick
On Sun, 2006-10-29 at 11:06 -0800, Spencer Graves wrote:
>   I can think of two ways to get confidence intervals on intraclass 
> correlations (ICCs) and more accurate intervals for variance 
> components:  (1) modifying 'simulate.lme' to store the estimated 
> variance components as well as "logLik" and (2) using 'lmer' and 
> 'mcmcsamp' in library(lme4). 
> 
>   The difficulty with (1) is that you have to make a local copy of 
> 'simulate.lme', then figure out where and how to modify it.  I've just 
> looked at the code, and it looks like the required modifications should 
> be fairly straightforward.  The problem with (2) is that you would have 
> to learn the syntax for a nested model for 'lmer'.  It's different from 
> that for 'lme' but not difficult.  The 'lmer' function is newer and 
> better in many ways but is not as well documented and does not have as 
> many helper functions yet.  The best documentation available for it may 
> be the 'MlmSoftRev' vignette in the 'mlmRev' package plus the "R News 
> 5/1" article from May 2005.  If you are not familiar with vignettes, 
> RSiteSearch("graves vignette") produced 74 hits for me just now.  Find 
> 'vignette' in the first hit led me to an earlier description of vignettes. 
> 
>   If it were my problem, I'd probably try the second, though I might 
> try both and compare.  If you try them both, I'd be interested in the 
> comparison.  If the answers were substantially different, I'd worry. 
> 
>   Hope this helps. 
>   Spencer Graves
> 

I'm familiar with specifying nested models in lmer, but not with
mcmcsamp; I'll check it out. Thanks.

I used nlme/lme because it has the intervals function while (at least
the last time I checked), lme4/lmer did not.

The way I've done the bootstrapping (sampling at each level) sounds the
same as using a simulation. But articles and references I've found
indicate that only the highest level (a, if c is nested in b and b is
nested in a) should be sampled.

Rick B.



-- 
Richard A. Bilonick, Ph.D.
Assistant Professor
University of Pittsburgh School of Medicine
Department of Ophthalmology
412 648 9138
BST S 207



[R] Variance Component/ICC Confidence Intervals via Bootstrap or Jackknife

2006-10-24 Thread Rick Bilonick
I'm using the lme function in nmle to estimate the variance components
of a fully nested two-level model:

Y_ijk = mu + a_i + b_j(i) + e_k(j(i))

lme computes estimates of the variances for a, b, and e, call them v_a,
v_b, and v_e, and I can use the intervals function to get confidence
intervals. My understanding is that these intervals are probably not
that robust plus I need intervals on the intraclass correlation
coefficients:

v_a/(v_a + v_b + v_e)

and

(v_a + v_b)/(v_a + v_b + v_e).

I would like to use a bootstrap or a jackknife estimate of these
confidence intervals. Is there any package available?

I've tried to write the R code myself but I'm not sure if I understand
exactly how to do it. The way I've tried to do a bootstrap estimate is
to sample with replacement for i units, then sample with replacement the
j(i) units, and then finally sample with replacement the k(j(i)) units.

But the few references I've been able to track down (Arvesen, Biometrics,
1970 is one), seem to say that I should just sample with replacement the
i units. Plus they seem to indicate that a log transform is needed. The
Arvesen reference used something like using log(v_a/v_e) as an estimator
for sigma_a^2/sigma_e^2 and getting an interval and then transforming to
get to an interval for the ICC (although it's not clear to me how to get
the other ICC in a two-level nested design).

Any insights would be appreciated.

Rick B.
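[Editor's note: an untested sketch of the "resample only the highest-level units" scheme the references suggest: draw the i units with replacement, refit, and collect the two ICCs. Column names (i, j, y) and the use of try()/VarCorr() are illustrative assumptions, not a confirmed recipe.]

```r
# Untested sketch: cluster bootstrap resampling only the top-level units.
library(nlme)
boot_icc <- function(dat, B = 500) {
  ids <- unique(dat$i)
  out <- matrix(NA_real_, B, 2,
                dimnames = list(NULL, c("icc.a", "icc.ab")))
  for (b in seq_len(B)) {
    samp <- sample(ids, replace = TRUE)
    # rebuild the data, giving resampled subjects fresh ids so
    # duplicated draws stay distinct groups
    dd <- do.call(rbind, lapply(seq_along(samp), function(k) {
      block <- dat[dat$i == samp[k], ]
      block$i <- k
      block
    }))
    fit <- try(lme(y ~ 1, random = ~ 1 | i/j, data = dd), silent = TRUE)
    if (inherits(fit, "try-error")) next
    vc <- suppressWarnings(as.numeric(VarCorr(fit)[, "Variance"]))
    v  <- vc[!is.na(vc)]               # v_a, v_b, v_e in nesting order
    out[b, ] <- c(v[1], v[1] + v[2]) / sum(v)
  }
  out
}
# percentile intervals, e.g.:
# apply(boot_icc(dat), 2, quantile, c(0.025, 0.975), na.rm = TRUE)
```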



[R] Problem with geeglm

2006-08-25 Thread Rick Bilonick
event.nab.2 is 0/1 and I dichotomized va to get va.2 to see if I could
get geeglm to work. glm has no problem with the data but geeglm chokes.
Each subject (patient.id) has at most 2 observations and more than 3/4
of the subjects have 2 observations. I have even worse problems trying
to use glmmPQL from MASS and worse still trying to use lmer from lme4.
But I figured a marginal model would work. (geeglm seems to work OK with
most of the explanatory factors in the data set I have but a couple of
them show similar problems.)

> summary(glm(event.nab.2~va.2,family=binomial(link="logit"),data=test))

Call:
glm(formula = event.nab.2 ~ va.2, family = binomial(link = "logit"),
data = test)

Deviance Residuals:
Min   1Q   Median   3Q  Max
-0.3787  -0.3787  -0.2246  -0.2246   2.7177

Coefficients:
                Estimate Std. Error z value Pr(>|z|)
(Intercept)      -2.5993     0.1804  -14.41  < 2e-16 ***
va.2(84, Inf]    -1.0685     0.3435   -3.11  0.00187 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 363.19  on 958  degrees of freedom
Residual deviance: 352.28  on 957  degrees of freedom
AIC: 356.28

Number of Fisher Scoring iterations: 6


summary(geeglm(event.nab.2~va.2,family=binomial(link="logit"),id=patient.id,cor="exch",data=test))
Error in geese.fit(xx, yy, id, offset, soffset, w, waves = waves,
zsca,  :
nrow(zsca) and length(y) not match


> head(test)
  patient.id event.nab.2  va.2  
1  1   0 (-Inf,84] 
2  1   0 (-Inf,84] 
3  2   0 (84, Inf] 
4  2   0 (84, Inf] 
5  3   0 (84, Inf] 
6  3   0 (84, Inf] 

I'm using R 2.3.1 and the latest version of geepack. I get a similar
error message if I use va which is continuous.

I don't know what the error message from geeglm/geese means.

Rick B.
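[Editor's note: no diagnosis appears in the archive. A frequent cause of this geese error is rows with missing values (glm drops them silently, while geese ends up with mismatched lengths) or data not ordered by cluster id. The sketch below is an untested, hedged guess along those lines, using geepack's documented corstr argument:]

```r
# Untested sketch: complete cases only, ordered by cluster, before geeglm.
library(geepack)
test2 <- na.omit(test[order(test$patient.id),
                      c("event.nab.2", "va.2", "patient.id")])
fit <- geeglm(event.nab.2 ~ va.2,
              family = binomial(link = "logit"),
              id = patient.id, corstr = "exchangeable",
              data = test2)
```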



Re: [R] Marginal Predictions from nlme and lme4

2006-08-22 Thread Rick Bilonick
On Wed, 2006-08-23 at 06:43 +1000, Andrew Robinson wrote:
> Rick,
> 
> if by marginal prediction, you mean the prediction without random
> effects, then use the "level" argument.  See ?predict.lme or ?fitted.lme
> 
> If not then I don't know :)
> 
> Cheers
> 
> Andrew
Thanks. I'm familiar with level in predict and fitted for lme. These
allow you to select the fixed effects and/or the random effects. The
marginal prediction integrates out the random effects and is what a GEE
marginal model produces. From what I've read, the marginal effects seem
to be less desirable than the fixed effects from an lme or a generalized
lme. But I would still like to compute them for comparison.

Rick B.
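[Editor's note: for a linear mixed model the marginal mean is just the fixed-effects prediction (level = 0), but for a generalized mixed model the random effects must be integrated out. An untested sketch of a Monte Carlo approximation for a logistic model with a random intercept; eta and sd_b are placeholders for quantities taken from a fitted model:]

```r
# Untested sketch: marginal (population-averaged) probability for a
# logistic GLMM with a Gaussian random intercept, by Monte Carlo
# integration over the random-effect distribution.
marginal_p <- function(eta, sd_b, nsim = 1e4) {
  b <- rnorm(nsim, mean = 0, sd = sd_b)  # draws of the random intercept
  mean(plogis(eta + b))                  # average the subject-level curves
}
# e.g. marginal_p(eta = fixed-effect linear predictor,
#                 sd_b = estimated random-intercept SD)
```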



[R] Marginal Predictions from nlme and lme4

2006-08-22 Thread Rick Bilonick
Is there a way (simple or not) to get the marginal prediction from lme
(in nlme) and/or lmer (in lme4)?

Rick B.



Re: [R] Retrieving p-values and z values from lmer output

2006-08-21 Thread Rick Bilonick
On Mon, 2006-08-21 at 16:52 -0400, Rick Bilonick wrote:
> I can't find a way to retrieve z values and p-values from the output
> from lmer in the lme4 package. How is this done?
> 
> Rick B.
> 
> 

I was able to do it this way:

> slotNames(summary(fit.lmer))
 [1] "isG"       "methTitle" "logLik"    "ngrps"     "sigma"     "coefs"
 [7] "vcov"      "REmat"     "AICtab"    "flist"     "Zt"        "X"
[13] "y"         "wts"       "wrkres"    "method"    "useScale"  "family"
[19] "call"      "cnames"    "nc"        "Gp"        "XtX"       "ZtZ"
[25] "ZtX"       "Zty"       "Xty"       "Omega"     "L"         "RZX"
[31] "RXX"       "rZy"       "rXy"       "devComp"   "deviance"  "fixef"
[37] "ranef"     "RZXinv"    "bVar"      "gradComp"  "status"

> slot(summary(fit.lmer),"coefs")
               Estimate Std. Error    z value     Pr(>|z|)
(Intercept) -0.02626325 0.03114850 -0.8431628 3.991374e-01
timeAfter    0.42322569 0.02870844 14.7422023 3.453370e-49

> slot(summary(fit.lmer),"coefs")[,4]
 (Intercept)    timeAfter
3.991374e-01 3.453370e-49

Rick B.



[R] Retrieving p-values and z values from lmer output

2006-08-21 Thread Rick Bilonick
I can't find a way to retrieve z values and p-values from the output
from lmer in the lme4 package. How is this done?

Rick B.



Re: [R] Specifying Path Model in SEM for CFA

2006-08-17 Thread Rick Bilonick
> sem() handles only equality constraints among parameters, and this model
> requires linear inequality constraints. 
> 
> I'm aware of SEM software that handles inequality constraints, but I'm not
> aware of anything in R that will do it "out of the box." One possibility is
> to write out the likelihood (or "fitting function") for your model and
> perform a bounded optimization using optim(). It would probably be a fair
> amount of work setting up the problem.
> 
> Finally, there are tricks that permit the imposition of general constraints
> and inequality constraints using software, like sem(), that handles only
> equality constraints. It's probably possible to do what you want using such
> a trick, but it would be awkward. See the references given in Bollen,
> Structural Equations with Latent Variables (Wiley, 1989), pp. 401-403.
> 
> I'm sorry that I can't be of more direct help.
>  John


Thanks. I'll explore the options you mention. I would like to use R
because I need to couple this with block bootstrapping to handle time
dependencies.

Rick



Re: [R] Specifying Path Model in SEM for CFA

2006-08-17 Thread Rick Bilonick
On Wed, 2006-08-16 at 17:01 -0400, John Fox wrote:
> Dear Rick,
> 
> It's unclear to me what you mean by constraining "each column of the factor
> matrix to sum to one." If you intend to constrain the loadings on each
> factor to sum to one, sem() won't do that, since it supports only equality
> constraints, not general linear constraints on parameters of the model, but
> why such a constraint would be reasonable in the first place escapes me.
> More common in confirmatory factor analysis would be to constrain more of
> the loadings to zero. Of course, one would do this only if it made
> substantive sense in the context of the research.
> 
> Regards,
>  John
> 
> 
> John Fox
> Department of Sociology
> McMaster University
> Hamilton, Ontario
> Canada L8S 4M4
> 905-525-9140x23604
> http://socserv.mcmaster.ca/jfox 
>  

I'm trying to build a multivariate receptor model as described by
Christensen and Sain (Technometrics, vol 44 (4) pp. 328-337). The model
is

x_t = Af_t + e_t

where A is the matrix of nonnegative source compositions, x_t are the
observed pollutant concentrations at time t, and f_t are the unobserved
factors. The columns of A are supposed to sum to no more than 100%. They
say they are using a latent variable model. If sem can't handle this, do
you know of another R package that could?

Rick B.



Re: [R] Specifying Path Model in SEM for CFA

2006-08-16 Thread Rick Bilonick
On Wed, 2006-08-16 at 08:47 -0400, John Fox wrote:
> Dear Rick,
> 
> There are a couple of problems here:
> 
> (1) You've fixed the error variance parameters for each of the observed
> variables to 1 rather than defining each as a free parameter to estimate.
> For example, use 
> 
> X1 <-> X1, theta1, NA
> 
> Rather than 
> 
> X1 <-> X1, NA, 1
> 
> The general principle is that if you give a parameter a name, it's a free
> parameter to be estimated; if you give the name as NA, then the parameter is
> given a fixed value (here, 1). (There is some more information on this and
> on error-variance parameters in ?sem.)
> 
> (2) I believe that the model you're trying to specify -- in which all
> variables but X6 load on F1, and all variables but X1 load on F2 -- is
> underidentified.
> 
> In addition, you've set the metric of the factors by fixing one loading to
> 0.20 and another to 0.25. That should work but strikes me as unusual, and
> makes me wonder whether this was what you really intended. It would be more
> common in a CFA to fix the variance of each factor to 1, and let the factor
> loadings be free parameters. Then the factor covariance would be their
> correlation. 
> 
> You should not have to specify start values for free parameters (such as
> g11, g22, and g12 in your model), though it is not wrong to do so. I would
> not, however, specify start values that imply a singular covariance matrix
> among the factors, as you've done; I'm surprised that the program was able
> to get by the start values to produce a solution.
> 
> BTW, the Thurstone example in ?sem is for a confirmatory factor analysis
> (albeit a slightly more complicated one with a second-order factor). There's
> also an example of a one-factor CFA in the paper at
> , though this
> is for ordinal observed variables.
> 
> I hope this helps,
>  John
> 
> 
> John Fox
> Department of Sociology
> McMaster University
> Hamilton, Ontario
> Canada L8S 4M4
> 905-525-9140x23604
> http://socserv.mcmaster.ca/jfox 
>  

Thanks for the information. I think I understand how to handle the
residual variance after reading the sem help file more carefully. Now I
have to figure out how to constrain each column of the factor matrix to
sum to one. Maybe this will fix the problem with being under-identified.

Rick B.



[R] Specifying Path Model in SEM for CFA

2006-08-15 Thread Rick Bilonick
I'm using specify.model for the sem package. I can't figure out how to
represent the residual errors for the observed variables for a CFA
model. (Once I get this working I need to add some further constraints.)

Here is what I've tried:

model.sa <- specify.model()
  F1 -> X1,l11, NA
  F1 -> X2,l21, NA
  F1 -> X3,l31, NA
  F1 -> X4,l41, NA
  F1 -> X5, NA, 0.20
  F2 -> X1,l12, NA
  F2 -> X2,l22, NA
  F2 -> X3,l32, NA
  F2 -> X4,l42, NA
  F2 -> X6, NA, 0.25
  F1<-> F2,g12, 1
  F1<-> F1,g11, 1
  F2<-> F2,g22, 1
  X1<-> X1, NA, 1
  X2<-> X2, NA, 1
  X3<-> X3, NA, 1
  X4<-> X4, NA, 1
  X5<-> X5, NA, 1
  X6<-> X6, NA, 1

This at least converges:

> summary(fit.sem)

 Model Chisquare =  2147   Df =  10 Pr(>Chisq) = 0
 Chisquare (null model) =  2934   Df =  15
 Goodness-of-fit index =  0.4822
 Adjusted goodness-of-fit index =  -0.087387
 RMSEA index =  0.66107   90 % CI: (NA, NA)
 Bentler-Bonnett NFI =  0.26823
 Tucker-Lewis NNFI =  -0.098156
 Bentler CFI =  0.26790
 BIC =  2085.1

 Normalized Residuals
    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
  -5.990  -0.618   0.192   0.165   1.700   3.950

 Parameter Estimates
     Estimate   Std Error  z value   Pr(>|z|)
l11  -0.245981  0.21863   -1.12510  0.26054748  X1 <--- F1
l21  -0.308249  0.22573   -1.36555  0.17207875  X2 <--- F1
l31   0.202590  0.07910    2.56118  0.01043175  X3 <--- F1
l41  -0.235156  0.21980   -1.06985  0.28468885  X4 <--- F1
l12   0.839985  0.21962    3.82476  0.00013090  X1 <--- F2
l22   0.828460  0.22548    3.67418  0.00023862  X2 <--- F2
l32   0.066722  0.08369    0.79725  0.42530606  X3 <--- F2
l42   0.832037  0.21840    3.80963  0.00013917  X4 <--- F2
g12   0.936719  0.64331    1.45609  0.14536647  F2 <--> F1
g11   2.567669  1.25608    2.04418  0.04093528  F1 <--> F1
g22   1.208497  0.55040    2.19567  0.02811527  F2 <--> F2

 Iterations =  59

And it produces the following path diagram:

> path.diagram(fit.sem)
digraph "fit.sem" {
  rankdir=LR;
  size="8,8";
  node [fontname="Helvetica" fontsize=14 shape=box];
  edge [fontname="Helvetica" fontsize=10];
  center=1;
  "F2" [shape=ellipse]
  "F1" [shape=ellipse]
  "F1" -> "X1" [label="l11"];
  "F1" -> "X2" [label="l21"];
  "F1" -> "X3" [label="l31"];
  "F1" -> "X4" [label="l41"];
  "F1" -> "X5" [label=""];
  "F2" -> "X1" [label="l12"];
  "F2" -> "X2" [label="l22"];
  "F2" -> "X3" [label="l32"];
  "F2" -> "X4" [label="l42"];
  "F2" -> "X6" [label=""];
}

But I don't see the residual error terms that go into each of the
observed variables X1 - X6. I've tried:

model.sa <- specify.model()
  E1 -> X1, e1,  1
  E2 -> X2, e2,  1
  E3 -> X3, e3,  1
  E4 -> X4, e4,  1
  E5 -> X5, e5,  1
  E6 -> X6, e6,  1
  E1<-> E1, s1, NA
  E2<-> E2, s2, NA
  E3<-> E3, s3, NA
  E4<-> E4, s4, NA
  E5<-> E5, s5, NA
  E6<-> E6, s6, NA
  F1 -> X1,l11, NA
  F1 -> X2,l21, NA
  F1 -> X3,l31, NA
  F1 -> X4,l41, NA
  F1 -> X5, NA,  1
  F2 -> X1,l12, NA
  F2 -> X2,l22, NA
  F2 -> X3,l32, NA
  F2 -> X4,l42, NA
  F2 -> X6, NA,  1
  F1<-> F2, NA, 1
  F1<-> F1, NA, 1
  F2<-> F2,g22, NA
  X1<-> X1, NA, 1
  X2<-> X2, NA, 1
  X3<-> X3, NA, 1
  X4<-> X4, NA, 1
  X5<-> X5, NA, 1
  X6<-> X6, NA, 1

I'm trying to use E1 - E6 as the residual error terms. But I get warning
messages about no variances for X1-X6 and it doesn't converge. Also, the
associated path diagram:

digraph "fit.sem" {
  rankdir=LR;
  size="8,8";
  node [fontname="Helvetica" fontsize=14 shape=box];
  edge [fontname="Helvetica" fontsize=10];
  center=1;
  "E1" [shape=ellipse]
  "E2" [shape=ellipse]
  "E3" [shape=ellipse]
  "E4" [shape=ellipse]
  "E5" [shape=ellipse]
  "E6" [shape=ellipse]
  "F2" [shape=ellipse]
  "F1" [shape=ellipse]
  "E1" -> "X1" [label=""];
  "E2" -> "X2" [label=""];
  "E3" -> "X3" [label=""];
  "E4" -> "X4" [label=""];
  "E5" -> "X5" [label=""];
  "E6" -> "X6" [label=""];
  "F1" -> "X1" [label="l11"];
  "F1" -> "X2" [label="l21"];
  "F1" -> "X3" [label="l31"];
  "F1" -> "X4" [label="l41"];
  "F1" -> "X5" [label=""];
  "F2" -> "X1" [label="l12"];
  "F2" -> "X2" [label="l22"];
  "F2" -> "X3" [label="l32"];
  "F2" -> "X4" [label="l42"];
  "F2" -> "X6" [label=""];
}

has ellipses around E1-E6, which I believe indicates they are being
treated as latent factors rather than as residual errors.
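For what it's worth, the sem package normally encodes residual errors implicitly, as free double-headed variance arrows on the observed variables (X1 <-> X1) rather than as explicit E latents; any node that appears only on the left of single-headed arrows is drawn as a latent ellipse, which matches what path.diagram shows. A minimal CFA-style sketch of that convention follows; it is a guess at the intent, and the theta parameter names are illustrative, not from the original post:

```r
# Sketch only: residual variances as free X <-> X parameters
# (theta1..theta4 are illustrative names, not from the original model)
library(sem)
model.cfa <- specify.model()
  F1 -> X1,  l11,    NA
  F1 -> X2,  l21,    NA
  F1 -> X3,  l31,    NA
  F1 -> X4,  l41,    NA
  F1 <-> F1, NA,      1
  X1 <-> X1, theta1, NA
  X2 <-> X2, theta2, NA
  X3 <-> X3, theta3, NA
  X4 <-> X4, theta4, NA
```

With this style there are no E nodes to draw, so the diagram shows only the F latents as ellipses.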

If anyone could point me in the right direction, I would appreciate it.

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Linear Trend in Residuals From lme

2006-08-09 Thread Rick Bilonick
On Wed, 2006-08-09 at 15:04 -0500, Douglas Bates wrote:
> On 8/9/06, Rick Bilonick <[EMAIL PROTECTED]> wrote:
> > I'm fitting a mixed effects model:
> >
> > fit.1 <- lme(y~x,random=~1|id,data=df)
> >
> > There are two different observations for each id for both x and y. When
> > I use plot(fit.1), there is a strong increasing linear trend in the
> > residuals versus the fitted values (with no outliers). This also happens
> > if I use random=~x|id. Am I specifying something incorrectly?
> 
> Could you provide a reproducible example please?
> 
> I suspect that the problem comes from having only two observations per
> level of id.  When you have very few observations per group the roles
> of the random effect and the per-observation noise term in explaining
> the variation become confounded.  However, I can't check if this is
> the case without looking at some data and model fits.

I tried using geeglm from geepack to fit a marginal model. I understand
this is not the same as a mixed effects model but the residuals don't
have the linear trend. Should I avoid using lme in this case?

Rick B.



Re: [R] Linear Trend in Residuals From lme

2006-08-09 Thread Rick Bilonick
On Wed, 2006-08-09 at 15:04 -0500, Douglas Bates wrote:
> On 8/9/06, Rick Bilonick <[EMAIL PROTECTED]> wrote:
> > I'm fitting a mixed effects model:
> >
> > fit.1 <- lme(y~x,random=~1|id,data=df)
> >
> > There are two different observations for each id for both x and y. When
> > I use plot(fit.1), there is a strong increasing linear trend in the
> > residuals versus the fitted values (with no outliers). This also happens
> > if I use random=~x|id. Am I specifying something incorrectly?
> 
> Could you provide a reproducible example please?
> 
> I suspect that the problem comes from having only two observations per
> level of id.  When you have very few observations per group the roles
> of the random effect and the per-observation noise term in explaining
> the variation become confounded.  However, I can't check if this is
> the case without looking at some data and model fits.

Unfortunately, I can't send the actual data. I did make a simple
intercepts-only example with two observations per group but it does not
exhibit the linear trend.

library(nlme)

x <- rnorm(20,5,1)
id <- factor(rep(1:20,each=2))

y <- as.vector(sapply(x,rnorm,n=2,sd=0.2))

df <- data.frame(id,x=rep(x,each=2),y) # x must be in df for groupedData()
df.gd <- groupedData(y~x|id,data=df)

summary(lme.1 <- lme(y~1,random=~1|id,data=df.gd))
plot(lme.1)

If I fit an intercepts-only model to the actual data, I still see the
trend in the residuals.
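[A reproducible sketch of one way such a trend can arise even with a correctly specified model: with only two observations per id and within-group noise large relative to the between-group spread, the BLUPs are shrunk heavily, part of each group's deviation stays in the residual, and the residuals correlate positively with the fitted values. All values below are invented; the simulation simply reverses the variance ratio used in the example above.]

```r
library(nlme)
set.seed(42)
id <- factor(rep(1:30, each = 2))
mu <- rnorm(30, 5, 0.5)                       # modest between-group spread
y  <- mu[as.integer(id)] + rnorm(60, sd = 1)  # large within-group noise
fit <- lme(y ~ 1, random = ~1 | id, data = data.frame(id, y))
plot(fit)                     # residuals vs fitted show an increasing trend
cor(resid(fit), fitted(fit))  # positive
```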

What other analysis would you suggest?

Rick B.



[R] Linear Trend in Residuals From lme

2006-08-09 Thread Rick Bilonick
I'm fitting a mixed effects model:

fit.1 <- lme(y~x,random=~1|id,data=df)

There are two different observations for each id for both x and y. When
I use plot(fit.1), there is a strong increasing linear trend in the
residuals versus the fitted values (with no outliers). This also happens
if I use random=~x|id. Am I specifying something incorrectly?

Rick B.



Re: [R] oodraw command line usage

2006-08-07 Thread Rick Bilonick
Please ignore this - it was sent to the wrong list by mistake.

Rick B.

On Mon, 2006-08-07 at 23:23 -0400, Rick Bilonick wrote:
> I use R to create .eps graphics and then use oodraw to convert them
> to .emf versions. (One reason I do this is that OOo tends to
> re-size .eps files and I haven't found a way to stop it or change it
> once the graph is resized. .emf files are not distorted by OOo -
> fortunately.) I use a command like:
> 
> > oodraw filename.eps &
> 
> The gui opens and then I select .emf and do an export. I haven't found a
> way to eliminate the gui and do everything from the command line. Is
> this possible? I need to convert a large number of files and it would be
> convenient to do it all from the command line. (I also note that I
> sometimes get warning messages that OOo can't create .emf files - but
> there is never a problem creating the .emf files.) I'm using the latest
> version of OOo. I've looked at the arguments for oodraw and I don't see
> any way to specify that I want a .emf file as an output.
> 
> Rick B.
> 
-- 
Assistant Professor
University of Pittsburgh School of Medicine
Department of Ophthalmology
412 648 9138
BST S 207



[R] oodraw command line usage

2006-08-07 Thread Rick Bilonick
I use R to create .eps graphics and then use oodraw to convert them
to .emf versions. (One reason I do this is that OOo tends to
re-size .eps files and I haven't found a way to stop it or change it
once the graph is resized. .emf files are not distorted by OOo -
fortunately.) I use a command like:

> oodraw filename.eps &

The gui opens and then I select .emf and do an export. I haven't found a
way to eliminate the gui and do everything from the command line. Is
this possible? I need to convert a large number of files and it would be
convenient to do it all from the command line. (I also note that I
sometimes get warning messages that OOo can't create .emf files - but
there is never a problem creating the .emf files.) I'm using the latest
version of OOo. I've looked at the arguments for oodraw and I don't see
any way to specify that I want a .emf file as an output.

Rick B.



Re: [R] prettyR arrives

2006-08-04 Thread Rick Bilonick
On Fri, 2006-08-04 at 19:37 -0400, Jim Lemon wrote:
> Hi all,
> 
> I have finally gotten the prettyR package going (many thanks to Kurt 
> Hornik for his patience).
> 
> prettyR is a set of functions that allows the user to produce HTML 
> output from R scripts. Given an R script that runs properly, an HTML 
> listing complete with embedded graphics can be produced simply by 
> passing the script to the core function htmlize (Phillipe Grosjean has 
> not only offered great suggestions, but provided a fancier function 
> named R2html). It is even possible to have the output magically appear 
> in your friendly local HTML browser when the script has been processed.
> 
> The package includes some basic descriptive functions that display "the 
> usual suspects" in formats that should not agitate those accustomed to 
> the vanilla listings that abound in the real world.
> 
> prettyR is intended to assist the R beginner in producing basic stats 
> right from the word "go". No knowledge beyond that of writing an R 
> script is required, but there is quite a bit of room to learn and 
> innovate. Have fun and please let me know if you break it.
> 
> Jim
> 
> 

Thanks but I could not get R2html in prettyR to work:

> R2html(Rfile="/ophth/cornea/R/lme_4.R",
+   HTMLfile="/ophth/cornea/Reports/lme_4.html")
Error in CreateIndexFile(HTMLfile, basenavfile, baselistfile, title) :
unused argument(s) ( ...)

lme_4.R has the R script and lme_4.html is the html file I'd like to
create. The help file for R2html does not give an example.

> args(R2html)
function (Rfile, HTMLfile, echo = TRUE, split = FALSE, browse = TRUE,
title = "R listing", bgcolor = "#dd", ...)

What am I doing wrong? I can source the script file.

> args(CreateIndexFile)
function (HTMLbase, HTMLdir, title = "R listing")

Is there a problem in R2html's call to CreateIndexFile? The arguments
don't seem to match.

Rick B.



[R] lmer - Is this reasonable output?

2006-06-29 Thread Rick Bilonick
I'm estimating two models for data with n = 179 with four clusters (21,
70, 36, and 52) named siteid. I'm estimating a logistic regression model
with random intercept and another version with random intercept and
random slope for one of the independent variables.


fit.1 <- lmer(glaucoma ~ (1|siteid) + x1 + x2, family = binomial,
              data = set1, method = "ML",
              control = list(usePQL = FALSE, msVerbose = TRUE))

Generalized linear mixed model fit using PQL
Formula: glaucoma ~ (1 | siteid) + x1 + x2
  Data: set1
 Family: binomial(logit link)
      AIC      BIC    logLik deviance
 236.7448 249.4944 -114.3724 228.7448
Random effects:
 Groups NameVariance Std.Dev.
 siteid (Intercept) 0.05959  0.24411
number of obs: 179, groups: siteid, 4

Estimated scale (compare to 1)  0.464267

Fixed effects:
 Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.213779   0.688158 -3.2170 0.001296 **
x1   0.609028   0.293250  2.0768 0.037818 *
x2   0.025027   0.009683  2.5846 0.009749 **
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
   (Intr) x1
x1 -0.871
x2 -0.372 -0.024

> ranef(fit.1)
An object of class “ranef.lmer”
[[1]]
  (Intercept)
1 -0.05053772
2 -0.21592157
3  0.36643051
4 -0.04141520


fit.2 <- lmer(glaucoma ~ (x1|siteid) + x1 + x2, family = binomial,
              data = set1, method = "ML",
              control = list(usePQL = FALSE, msVerbose = TRUE))

Generalized linear mixed model fit using PQL
Formula: glaucoma ~ (x1 | siteid) + x1 + x2
  Data: set1
 Family: binomial(logit link)
      AIC      BIC    logLik deviance
 239.3785 258.5029 -113.6893 227.3785
Random effects:
 Groups NameVariance Std.Dev. Corr
 siteid (Intercept) 0.059590 0.24411
x1  0.013079 0.11436  0.000
number of obs: 179, groups: siteid, 4

Estimated scale (compare to 1)  0.4599856

Fixed effects:
  Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.2137787  0.6911360 -3.2031 0.001360 **
x1   0.6090279  0.2995553  2.0331 0.042042 *
x2   0.0250268  0.0097569  2.5650 0.010316 *
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
   (Intr) x1
x1 -0.854
x2 -0.372 -0.023

> ranef(fit.2)
An object of class “ranef.lmer”
[[1]]
  (Intercept)   x1
1 -0.05534800  0.009265667
2 -0.16991678 -0.052723584
3  0.27692026  0.137597965
4  0.01648737 -0.062232012

Note that the fixed coefficient estimates are identical for both models
and they are exactly equal to what glm gives ignoring the sites (but the
standard errors given by lmer are definitely larger - which would seem
reasonable). The se's for the fixed factors differ slightly between the
two models. Note also that the estimated random effect sd for siteid
intercept is identical for both models.

I ran both models using PROC NLMIXED in SAS. It gives me similar, but
not identical, estimates for the fixed effects and random effects.
The confidence intervals on the random effects for each site are very
large.

Am I getting these results because I don't really need to fit a random
effect for siteid? The estimated random effects for slope seem to say
that siteid matters. When I plot the data for each site with smoothing
splines it indicates reasonably different patterns between sites.

I don't think these models are that complicated and I have a reasonable
amount of data. I fit the fixed and random curves to the separate sites
(along with separate glm estimates for each site). As would be expected,
the random curves lie between the fixed effect curve and the glm curve.
But there seems to be a lot of shrinkage toward the fixed effect curve.
(The fixed effect curve fits the smoothing spline curve for all sites
combined very closely.)

If I specify method="ML" and use PQL, I get similar fixed effect
estimates (but not identical to glm). The random intercept sd about
doubles. I think SAS NLMIXED uses an approximation to the likelihood so
that may explain some of the differences.

One other thing that seems odd to me is the random intercept sd for site
id. It equals 0.24411 in both models. If I change x1 to, say x3 (an
entirely different variable), I still get 0.24411. However, the random
slope sd does change.

I just want to make sure I'm fitting a reasonable model and getting
reasonable estimates.

Rick B.


Re: [R] lme - Random Effects Struture

2006-06-28 Thread Rick Bilonick
On Wed, 2006-06-28 at 11:04 -0400, harry wills wrote:
> Thanks for the help Dimitris,
> 
> However I still have a question, this time I'll be more specific,
> 
> the following is my SAS code
> 
> 
> 
> proc mixed data=Reg;
> class ID;
> model y=Time Time*x1 Time*x2 Time*x3 /S;
> random intercept Time /S type=UN subject=ID G GCORR V;
> repeated /subject = ID R RCORR;
> run;**
> 
> (Type =UN for random effects)
> 
> 
> 
> The eqivalent lme statement I am using is :
> 
> reglme  <- lme(y ~ Time+Time*x1+Time*x2+Time*x3, data=Reg, random = ~ Time |
> ID)
> 
> 
> 
> When I compare the results, the values differ by considerable margin; I
> suppose this is due to the Random effects covariance structure. R output
> tells me that the structure is
> 
> 
> 
> "Structure: General positive-definite, Log-Cholesky parametrization"
> 
> 
> 
> Hence the problem for me is how to control this structure in R. Any help
> would appreciated
> 
> Thanks
> 
> Harry

From my understanding of SAS, a*b means the interaction of a and b. But
in R, a*b is shorthand for a + b + a:b where a:b is the interaction
term. The way you've written the lme formula, you have time showing up 4
times plus you have additional main effects x1, x2, and x3. Is this what
you want? Maybe I'm wrong but I don't think the SAS code and the R code
represent the same model.
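To make the distinction concrete, here is a hedged sketch (the data, dimensions, and effect sizes are invented, standing in for Harry's Reg) of an lme call whose fixed-effects formula matches the SAS MODEL statement term for term, using ':' for the pure interactions:

```r
library(nlme)
set.seed(1)
n.id <- 10
b0 <- rnorm(n.id, 0, 1)    # random intercepts
b1 <- rnorm(n.id, 0, 0.5)  # random slopes on Time
Reg <- data.frame(ID   = factor(rep(1:n.id, each = 4)),
                  Time = rep(0:3, n.id),
                  x1 = rnorm(4 * n.id),
                  x2 = rnorm(4 * n.id),
                  x3 = rnorm(4 * n.id))
i <- as.integer(Reg$ID)
Reg$y <- 1 + b0[i] + (0.5 + b1[i]) * Reg$Time + rnorm(4 * n.id, sd = 0.5)

# ':' gives only the interaction, matching SAS's Time*x1 in the MODEL
# statement; R's '*' would also add x1, x2, x3 as main effects
reglme <- lme(y ~ Time + Time:x1 + Time:x2 + Time:x3,
              data = Reg, random = ~ Time | ID)
```

With this formula, fixef(reglme) contains only the intercept, Time, and the three Time interactions, as in the SAS model.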

Rick B.



Re: [R] lmer and mixed effects logistic regression

2006-06-26 Thread Rick Bilonick
On Fri, 2006-06-23 at 21:38 -0700, Spencer Graves wrote:
> Permit me to try to repeat what I said earlier a little more clearly: 
>   When the outcomes are constant for each subject, either all 0's or all 
> 1's, the maximum likelihood estimate of the between-subject variance in 
> Inf.  Any software that returns a different answer is wrong. This is NOT 
> a criticism of 'lmer' or SAS NLMIXED:  This is a sufficiently rare, 
> extreme case that the software does not test for it and doesn't handle 
> it well when it occurs.  Adding other explanatory variables to the model 
> only makes this problem worse, because anything that will produce 
> complete separation for each subject will produce this kind of 
> instability.
> 
>Consider the following:
> 
> library(lme4)
> DF <- data.frame(y=c(0,0, 0,1, 1,1),
>   Subj=rep(letters[1:3], each=2),
>   x=rep(c(-1, 1), 3))
> fit1 <- lmer(y~1+(1|Subj), data=DF, family=binomial)
> 
> # 'lmer' works fine here, because the outcomes from
> # 1 of the 3 subjects is not constant.
> 
>  > fit.x <- lmer(y~x+(1|Subj), data=DF, family=binomial)
> Warning message:
> IRLS iterations for PQL did not converge
> 
> The addition of 'x' to the model now allows complete separation for 
> each subject.  We see this in the result:
> 
> Generalized linear mixed model fit using PQL
> 
> Random effects:
>   Groups NameVariance   Std.Dev.
>   Subj   (Intercept) 3.5357e+20 1.8803e+10
> number of obs: 6, groups: Subj, 3
> 
> Estimated scale (compare to 1)  9.9414e-09
> 
> Fixed effects:
>                Estimate  Std. Error     z value Pr(>|z|)
> (Intercept) -5.4172e-05  1.0856e+10   -4.99e-15        1
> x            8.6474e+01  2.7397e+07  3.1563e-06        1
> 
> Note that the subject variance is 3.5e20, the estimate for x is 86 
> with a standard error of 2.7e7.  All three of these numbers are reaching 
> for Inf;  lmer quit before it got there.
> 
> Does this make any sense, or are we still misunderstanding one 
> another?
> 
> Hope this helps.
> Spencer Graves
> 
Yes, thanks, it's clear. I had created a new data set that has each
subject with just one observation and randomly sampled one observation
from each subject with two observations (they are right and left eyes).
I'm not sure why lmer gives small estimated variances for the random
effects when it should be infinite. I ran NLMIXED on the original data
set with several explanatory factors and the variance component was in
the thousands.

I guess the moral is before you do any computations you have to make
sure the procedure makes sense for the data.

Rick B.



Re: [R] lmer and mixed effects logistic regression

2006-06-21 Thread Rick Bilonick
On Wed, 2006-06-21 at 08:35 -0700, Spencer Graves wrote:
> You could think of 'lmer(..., family=binomial)' as doing a separate 
> "glm" fit for each subject, with some shrinkage provided by the assumed 
> distribution of the random effect parameters for each subject.  Since 
> your data are constant within subject, the intercept in your model 
> without the subject's random effect distribution will be estimated at 
> +/-Inf.  Since this occurs for all subjects, the maximum likelihood 
> estimate of the subject variance is Inf, which is what I wrote in an 
> earlier contribution to this thread.
> 
> What kind of answer do you get from SAS NLMIXED?  If it does NOT tell 
> you that there is something strange about the estimation problem you've 
> given it, I would call that a serious infelicity in the code.  If it is 
> documented behavior, some might argue that it doesn't deserve the "B" 
> word ("Bug").  The warning messages issued by 'lmer' in this case are 
> something I think users would want, even if they are cryptic.
> 
> Hope this helps.
> Spencer Graves
> 
I did send in an example with a data set that duplicates the problem.
Changing the control parameters allowed lmer to produce what seem like
reasonable estimates. Even for the case with essentially duplicate
pairs, lmer and NLMIXED produce similar estimates (with finite
intercepts); as far as I can tell, lmer's coefficient estimates match
glm's, but the standard errors are larger.

The problem I really want estimates for is different from this one
explanatory factor example.  The model I estimate will have several
explanatory factors, including factors that differ within each subject
(although the responses within each subject are the same). BTW, as far
as I know, the responses could be different within a subject but it
seems to be very rare.


Possibly the example I thought I sent never made it to the list. The
example is below.

Rick B.

###
# Example of lmer error message


I made an example data set that exhibits the error. There is a dump of
the data frame at the end.

First, I updated all my packages:

> sessionInfo()
Version 2.3.1 (2006-06-01)
i686-redhat-linux-gnu

attached base packages:
[1] "methods"   "stats"     "graphics"  "grDevices" "utils"     "datasets"
[7] "base"

other attached packages:
    chron      lme4     Matrix   lattice
  "2.3-3" "0.995-2" "0.995-11"  "0.13-8"

But I still get the error.

For comparison, here is what glm gives:


> summary(glm(y~x,data=example.df,family=binomial))

Call:
glm(formula = y ~ x, family = binomial, data = example.df)

Deviance Residuals:
Min   1Q   Median   3Q  Max
-1.6747  -0.9087  -0.6125   1.1447   2.0017

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept)  -0.4786 0.1227  -3.901 9.59e-05 ***
x 0.7951 0.1311   6.067 1.31e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 436.63  on 324  degrees of freedom
Residual deviance: 394.15  on 323  degrees of freedom
AIC: 398.15

Number of Fisher Scoring iterations: 4


Running lmer without any tweaks:

> (lmer(y~(1|id)+x,data=example.df,family=binomial))
Error in lmer(y ~ (1 | id) + x, data = example.df, family = binomial) :
Leading minor of order 2 in downdated X'X is not positive
definite
In addition: Warning message:
nlminb returned message singular convergence (7)
 in: LMEopt(x = mer, value = cv)

Running lmer with list(msVerbose=TRUE):

> (lmer(y~(1|id)+x,data=example.df,family=binomial,control=list(msVerbose=TRUE)))
  0 -545.002:  44801.6
  1 -545.002:  44801.6
  2 -545.002:  44801.6
  3 -545.003:  44801.9
  4 -545.014:  44805.2
  5 -545.123:  44838.3
  6 -546.208:  45168.3
  7 -556.572:  48444.8
  8 -628.932:  78993.4
  9 -699.716:  127441.
 10 -771.102:  206437.
 11 -842.258:  333880.
 12 -913.501:  540319.
 13 -984.712:  874202.
 14 -1055.93: 1.41452e+06
 15 -1127.15: 2.28873e+06
 16 -1198.37: 3.70326e+06
 17 -1269.59: 5.99199e+06
 18 -1340.81: 9.69524e+06
 19 -1412.03: 1.56872e+07
 20 -1483.25: 2.53825e+07
 21 -1554.47: 4.10697e+07
 22 -1625.69: 6.64522e+07
 23 -1696.91: 1.07522e+08
 24 -1768.13: 1.73974e+08
 25 -1839.35: 2.81496e+08
 26 -1910.57: 4.55470e+08
 27 -1981.78: 7.36966e+08
 28 -2053.00: 1.19244e+09
 29 -2124.22: 1.92940e+09
 30 -2195.44: 3.12184e+09
 31 -2266.66: 5.05124e+09
 32 -2337.88: 8.17308e+09
 33 -2409.10: 1.32243e+10
 34 -2480.32: 2.13974e+10
 35 -2551.54: 3.46217e+10
 36 -2622.76: 5.60190e+10
 37 -2693.98: 9.06405e+10
 38 -2765.20: 1.46659e+11
 39 -2836.42: 2.37299e+11
 40 -2907.64: 3.83962e+11
 41 -2978.85: 6.21253e+11
 42 -3050.07: 1.00521e+12
 43 -31

Re: [R] lmer and mixed effects logistic regression

2006-06-21 Thread Rick Bilonick
On Tue, 2006-06-20 at 20:27 +0200, Göran Broström wrote:
> On 6/19/06, Rick Bilonick <[EMAIL PROTECTED]> wrote:
> > On Sun, 2006-06-18 at 13:58 +0200, Douglas Bates wrote:
> > If I understand correctly Rick is trying to fit a model with random
> > > effects on a binary response when there are either 1 or 2 observations
> > > per group.
> 
> If you look at Rick's examples, it's worse than that; each group
> contains identical observations (by design?).
> 
> May I suggest:
> 
> > glm(y ~ x, family = binomial, data = unique(example.df))
> 
> I think lmer gives a very sensible answer to this problem.
> 
> Göran
> 
The paired responses happen to be always the same in the data set that I
have. My understanding is that they could differ, but rarely do. For the
particular single independent variable, it will always be the same for
each observation for a given subject. So I essentially have 2n
observations where there are n unique results. However, I want to add
additional independent variables where the measurements differ within a
subject (even though the response within the subject is the same).

I ran glm on the n subjects and the 2n for lmer and get similar
estimates and se's but not identical. With just one independent variable
where the observations are identical in each cluster, lmer gives
slightly smaller se's using all 2n. When I include a second independent
variable that varies within each subject, lmer gives larger standard
errors, about 25% larger for the independent variable that doesn't vary
within subjects and just slightly larger for the one that does vary.

I could create a data set where I take all subjects with just one
observation per subject and then randomly select one observation from
each pair for subjects who have both observations. But I'd rather not
have to randomly remove observations.

I would expect that when the responses and independent variable are the
same within each subject for all subjects, the residual error must be
zero after you account for a random effect for subjects.
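[Göran's unique() point earlier in the thread can be checked directly: when every row is duplicated exactly, glm on the full and the de-duplicated data returns identical coefficients, and the duplicated fit merely understates the standard errors by a factor of sqrt(2). A sketch with made-up data; example.df here is simulated, not the poster's data set:]

```r
set.seed(7)
n <- 50
x <- rnorm(n)
y <- rbinom(n, 1, plogis(-0.5 + 0.8 * x))
# duplicate every row, as with two identical eyes per subject
example.df <- data.frame(id = rep(1:n, each = 2),
                         x  = rep(x, each = 2),
                         y  = rep(y, each = 2))
fit.all <- glm(y ~ x, family = binomial, data = example.df)
fit.uni <- glm(y ~ x, family = binomial,
               data = example.df[!duplicated(example.df$id), ])
# coefficients agree; fit.all's SEs are smaller by exactly sqrt(2),
# because duplicating each row doubles the Fisher information
```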

Rick B.


Re: [R] lmer and mixed effects logistic regression

2006-06-19 Thread Rick Bilonick
On Sun, 2006-06-18 at 13:58 +0200, Douglas Bates wrote:
> If I understand correctly Rick is trying to fit a model with random
> effects on a binary response when there are either 1 or 2 observations
> per group.  I think that is very optimistic because there is so little
> information available per random effect (exactly 1 or 2 bits of
> information per random effect). I'm not surprised that there are
> difficulties in fitting such a model.
> 
> Rick should try to monitor the iterations to see what is happening to
> the parameter estimates and perhaps should try the Laplace
> approximation to the log-likelihood.  This is done by adding method =
> "Laplace" and control = list(msVerbose = TRUE) to the call to lmer.
> This causes the PQL iterations to be followed by direct optimization
> of the Laplace approximation.  Because the problem is probably in the
> PQL iterations Rick may want to turn those off and do direct
> optimization of the Laplace approximation only.  In that case the
> control argument should be list(usePQL=FALSE, msVerbose = TRUE)
> 
> 
> On 6/17/06, Spencer Graves <[EMAIL PROTECTED]> wrote:
> >
> >
> > 'lmer' RETURNS AN ERROR WHEN SAS NLMIXED RETURNS AN ANSWER
> >
> >   Like you, I would expect lmer to return an answer when SAS
> > NLMIXED does, and I'm concerned that it returns an error message instead.
> >
> >   Your example is not self contained, and I've been unable to
> > get the result you got.  This could occur for several reasons.  First,
> > what do you get for "sessionInfo()"?  I got the following:
> >
> > sessionInfo()
> > Version 2.3.1 (2006-06-01)
> > i386-pc-mingw32
> >
> > attached base packages:
> > [1] "methods"   "stats" "graphics"  "grDevices" "utils" "datasets"
> > [7] "base"
> >
> > other attached packages:
> >lme4lattice Matrix
> >   "0.995-2"   "0.13-8" "0.995-10"
> >

I made an example data set that exhibits the error. There is a dump of
the data frame at the end.

First, I updated all my packages:

> sessionInfo()
Version 2.3.1 (2006-06-01)
i686-redhat-linux-gnu

attached base packages:
[1] "methods"   "stats"     "graphics"  "grDevices" "utils"     "datasets"
[7] "base"

other attached packages:
    chron      lme4     Matrix   lattice
  "2.3-3" "0.995-2" "0.995-11"  "0.13-8"

But I still get the error.

For comparison, here is what glm gives:


> summary(glm(y~x,data=example.df,family=binomial))

Call:
glm(formula = y ~ x, family = binomial, data = example.df)

Deviance Residuals:
Min   1Q   Median   3Q  Max
-1.6747  -0.9087  -0.6125   1.1447   2.0017

Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept)  -0.4786 0.1227  -3.901 9.59e-05 ***
x 0.7951 0.1311   6.067 1.31e-09 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for binomial family taken to be 1)

Null deviance: 436.63  on 324  degrees of freedom
Residual deviance: 394.15  on 323  degrees of freedom
AIC: 398.15

Number of Fisher Scoring iterations: 4


Running lmer without any tweaks:

> (lmer(y~(1|id)+x,data=example.df,family=binomial))
Error in lmer(y ~ (1 | id) + x, data = example.df, family = binomial) :
Leading minor of order 2 in downdated X'X is not positive
definite
In addition: Warning message:
nlminb returned message singular convergence (7)
 in: LMEopt(x = mer, value = cv)

Running lmer with list(msVerbose=TRUE):

> (lmer(y~(1|id)+x,data=example.df,family=binomial,control=list(msVerbose=TRUE)))
  0 -545.002:  44801.6
  1 -545.002:  44801.6
  2 -545.002:  44801.6
  3 -545.003:  44801.9
  4 -545.014:  44805.2
  5 -545.123:  44838.3
  6 -546.208:  45168.3
  7 -556.572:  48444.8
  8 -628.932:  78993.4
  9 -699.716:  127441.
 10 -771.102:  206437.
 11 -842.258:  333880.
 12 -913.501:  540319.
 13 -984.712:  874202.
 14 -1055.93: 1.41452e+06
 15 -1127.15: 2.28873e+06
 16 -1198.37: 3.70326e+06
 17 -1269.59: 5.99199e+06
 18 -1340.81: 9.69524e+06
 19 -1412.03: 1.56872e+07
 20 -1483.25: 2.53825e+07
 21 -1554.47: 4.10697e+07
 22 -1625.69: 6.64522e+07
 23 -1696.91: 1.07522e+08
 24 -1768.13: 1.73974e+08
 25 -1839.35: 2.81496e+08
 26 -1910.57: 4.55470e+08
 27 -1981.78: 7.36966e+08
 28 -2053.00: 1.19244e+09
 29 -2124.22: 1.92940e+09
 30 -2195.44: 3.12184e+09
 31 -2266.66: 5.05124e+09
 32 -2337.88: 8.17308e+09
 33 -2409.10: 1.32243e+10
 34 -2480.32: 2.13974e+10
 35 -2551.54: 3.46217e+10
 36 -2622.76: 5.60190e+10
 37 -2693.98: 9.06405e+10
 38 -2765.20: 1.46659e+11
 39 -2836.42: 2.37299e+11
 40 -2907.64: 3.83962e+11
 41 -2978.85: 6.21253e+11
 42 -3050.07: 1.00521e+12
 43 -3121.28: 1.62645e+12
 44 -3192.47: 2.63147e+12
 45 -3263.70: 4.25757e+12
 46 -3334.89: 6.88953e+12
 47 -3406.11: 1.11441e+13
 48 -3477.22: 1.80392e+13
 49 -3548.36: 2.91492e+13
 50 -3619.

Re: [R] lmer and mixed effects logistic regression

2006-06-19 Thread Rick Bilonick
On Sat, 2006-06-17 at 09:46 -0700, Spencer Graves wrote:
> 
> 'lmer' RETURNS AN ERROR WHEN SAS NLMIXED RETURNS AN ANSWER
> 
> Like you, I would expect lmer to return an answer when SAS
> NLMIXED does, and I'm concerned that it returns an error message instead.
> 
> Your example is not self contained, and I've been unable to
> get the result you got.  This could occur for several reasons.  First,
> what do you get for "sessionInfo()"?  I got the following:
> 
> sessionInfo()
> Version 2.3.1 (2006-06-01)
> i386-pc-mingw32
> 

I will try to create an example after updating the packages. I can't
send the actual data - the data I sent was made up.

Rick B.



[R] lmer and mixed effects logistic regression

2006-06-14 Thread Rick Bilonick
I'm using FC4 and R 2.3.1 to fit a mixed effects logistic regression.
The response is 0/1 and both the response and the age are the same for
each pair of observations for each subject (some observations are not
paired). For example:

id response age
 1        0  30
 1        0  30

 2        1  55
 2        1  55

 3        0  37

 4        1  52

 5        0  39
 5        0  39

etc.

I get the following error:

> (lmer(response~(1|id)+age,data=gdx,family=binomial))
Error in deviance(.Call("mer_coefGets", x, pars, 2, PACKAGE = "Matrix")) :
        Leading minor of order 2 in downdated X'X is not positive definite

Similar problem if I use quasibinomial.

If I use glm, of course it thinks I have roughly twice the number of
subjects so the standard errors would be smaller than they should be.

I used SAS's NLMIXED and it converged without problems giving me
parameter estimates close to what glm gives, but with larger standard
errors. glmmPQL(MASS) gives very different parameter estimates.

Is it reasonable to fit a mixed effects model in this situation?

Is there some way to give starting values for lmer and/or glmmPQL?

Rick B.
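
For anyone wanting to experiment with this situation, here is a sketch that simulates made-up data with the same shape as the example above (paired 0/1 responses and ages that are identical within subject, some subjects unpaired). All names and parameters are illustrative, not the real data:

```r
set.seed(42)
n.subj <- 60
n.obs  <- sample(1:2, n.subj, replace = TRUE)        # 1 or 2 observations per subject
age.s  <- round(rnorm(n.subj, mean = 45, sd = 10))   # one age per subject
resp.s <- rbinom(n.subj, 1, plogis((age.s - 45)/10)) # one 0/1 response per subject
id     <- rep(seq_len(n.subj), times = n.obs)        # duplicate rows for paired subjects
gdx    <- data.frame(id = id, response = resp.s[id], age = age.s[id])
head(gdx)
```

Because the duplicated rows carry no new information, the random-effect variance is only weakly identified, which may be related to the convergence problems described in the thread.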

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Statistical Power

2006-05-24 Thread Rick Bilonick
On Wed, 2006-05-24 at 09:22 -0700, Charles C. Berry wrote:
> On Tue, 23 May 2006, Christopher Brown wrote:
> 
> > How can I compute a power analysis on a multi-factor within-subjects
> > design?
> 
> If you are capable of installing source packages and if you know what a 
> general linear hypothesis test is, you can:
> 
> download 'hpower' to your computer
> 
>   http://gd.tuwien.ac.at/languages/R/src/contrib/hpower_0.1-0.tar.gz
> 
> and install it. Then
> 
> library( hpower )
> ?glhpwr
> 

Downloading, unzipping the package on to FC4 running R 2.3.0 gives an
error message about a bad description file when trying to install
hpower.

> R CMD INSTALL hpower

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Ordinal Independent Variables

2006-05-22 Thread Rick Bilonick
When I run "lrm" from the Design package, I get a warning about
contrasts when I include an ordinal variable:

Warning message:
Variable ordfac is an ordered factor.
 You should set
options(contrasts=c("contr.treatment","contr.treatment"))
or Design will not work properly. in: Design(eval(m, sys.parent()))

I don't get this message if I use glm with family=binomial. It produces
linear and quadratic contrasts.

If it's improper to do this for an ordinal variable, why does glm not
balk?

Rick B.
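
Following the warning's own instruction, the usual fix is to set treatment contrasts for both unordered and ordered factors before fitting. This is an illustrative session; y, ordfac, and d are placeholder names:

```r
# As the Design warning suggests: treatment contrasts for both kinds of factor.
options(contrasts = c("contr.treatment", "contr.treatment"))
library(Design)                      # the package providing lrm() at the time
# fit <- lrm(y ~ ordfac, data = d)   # hypothetical call with placeholder names
```

With this setting the ordered factor is coded with treatment (dummy) contrasts rather than the default polynomial contrasts, which is what Design expects.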

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Fix for augPred/gsummary problem (nlme library)

2006-05-17 Thread Rick Bilonick
On Wed, 2006-05-17 at 09:48 +, Mark Difford wrote:
> Dear R-users,
> 
> I am a newbie to this site and a relative new-comer to S/R, so please tread 
> lightly, for you tread...
> 
> There have been several postings relating to problems with augPred() from the 
> nlme library. Here is a "fix" for one of these problems which may lie at the 
> root of others.
> 
> In my case the problem with augPred() lay in gsummary(), which augPred() 
> uses, causing it to fail. [From mucking around &c using 
> getAnywhere("augPred.lme"), and setting: debug(gsummary).]
> 
> Further ferreting around showed that the data structures within gsummary() 
> are fine, but that any (numeric only?) variable that has a label attached to 
> it (in my case from using Harrell's Hmisc library) causes the following 
> sub-routine in gsummary() to fail:
> 
> debug: if (dClass == "numeric") {
> 
>   value[[nm]] <- as.vector(tapply(object[[nm]], groups, FUN[["numeric"]],
> ...)) 
> 
> } else {
> 
>   value[[nm]] <- as.vector(tapply(as.character(object[[nm]]),
> groups, FUN[[dClass]]))
>   if (inherits(object[, nm], "ordered")) {
> value[[nm]] <- ordered(value[, nm], levels = levels(object[,
>   nm]))[drop = TRUE] }
>   else {
> value[[nm]] <- factor(value[, nm], levels = levels(object[,
>   nm]))[drop = TRUE] }
> 
> }
> 
> Error Message:
> 
> Error in "[[<-.data.frame"(`tmp`, nm, value = c(1, 1, 1, 1, 1, 1, 1, : 
> replacement has 170 rows, data has 5
> 
> The immediate problem is that dClass comes through as "labeled" rather than 
> as "numeric", and the object is erroneously passed through to the else{} 
> group.
> 
> In fact, the problem is general: any variable that carries the class 
> "labeled" will cause the sub-routine to choke, as will any variable with a 
> class attribute other than ' ordered' , e.g. POSIXt. This is true even if the 
> variable carrying this 'other' class attribute isn't used in any lme() 
> formula &c.
> 
> Code-wise the fix for this should be straight-forward. Though I've never 
> coded in R/S, it's clear that the authors of the package should be using 
> different conditional tests, something along the lines of 
> is.numeric(obj)/is.factor(obj), if that's possible.
> 
> Until a fix is posted, here is a work-around for groupedData() objects (and 
> for raw data frames). You need to do this for all variables in the 
> groupedData() object, even if you are not using them in your lme() call:
> 
> 1) Use contents(obj) from the Hmisc package to look for variables with class 
> attributes and labels. [You can also use str(obj); then look (i) for names in 
> quotes immediately after the colon, e.g. DateTime: 'POSIXct'), or (ii) Class 
> 'labeled' after the colon.] Remove these, or change them, using, e.g.:
> 
> class(obj$DateTime) <- NULL
> class(obj$AnyVariable) <- 'numeric' ## leaves the actual labels/units 
> intact so that you can later restore them.
> 
> 2) Execute your lme() statement &c on the object, e.g.:
> 
> test.1 <- lme(Chla ~ PO4, random=~1|Site, data=obj)## or simply: lme(obj)
> augPred(test.1)
> plot(augPred(test.1))
> 
> (Note that if you are using a data.frame() as your data object you will need 
> to supply a 'primary' statement to augPred(), e.g. augPred(test.1, 
> primary=~PO4).)
> 
> Regards,
> 
> Mark Difford.
> 
> -
> Ph.D. candidate, Botany Department,
> Nelson Mandela Metropolitan University,
> Port Elizabeth, SA.

Is this related to the same problem? I fit an intercepts-only model
(both random and fixed):

> summary(fit.lme.1 <- lme(nflnas.diff~1,
+   random=~1|id,
+   data=nfl.diff.iopec.gd,method="ML"))
Linear mixed-effects model fit by maximum likelihood
 Data: nfl.diff.iopec.gd
      AIC      BIC   logLik
  561.682 567.8112 -277.841

Random effects:
 Formula: ~1 | id
        (Intercept) Residual
StdDev:    20.86548 28.10644

Fixed effects: nflnas.diff ~ 1
              Value Std.Error DF  t-value p-value
(Intercept) -26.384    7.8022 47 -3.38161  0.0015

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max
-2.15240420 -0.49224313  0.06435735  0.51602333  2.78229869

Number of Observations: 57
Number of Groups: 10

> plot(augPred(fit.lme.1,level=0:1),layout=c(5,2),aspect=1)
Error in terms.default(formula, data = data) :
no terms component


If I replace:

nflnas.diff~1

with something like:

nflnas.diff~group

where group is a dichotomous factor, augPred works as expected.

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Cannot load irr package

2006-05-16 Thread Rick Bilonick
On Tue, 2006-05-16 at 13:51 -0700, Phil Spector wrote:
> Rick -
>You didn't say what operating system you're using, but I suppose
> that it's UNIX-based, because you're using the tar.gz files.
>If this is the case, the problem can be resolved by setting the
> environmental variable LANG to an empty string.  If you're currently
> just typing "R" at the shell prompt, try
> 
> env LANG= R
> 
> (notice that you're setting LANG to an empty string, not to  "R").
> If you have access to the script that's invoked when you type R, you
> could set the environmental variable there to make the change permanent.
> 
>- Phil Spector

Thanks. That solves the problem but why is it necessary? (I'm using FC4
and R version 2.3.0.)

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Cannot load irr package

2006-05-16 Thread Rick Bilonick
The irr package seems to install correctly:

> install.packages("irr")
trying URL 'http://cran.us.r-project.org/src/contrib/irr_0.61.tar.gz'
Content type 'application/x-tar' length 13848 bytes
opened URL
==
downloaded 13Kb

* Installing *source* package 'irr' ...
** R
** data
** help
 >>> Building/Updating help pages for package 'irr'
 Formats: text html latex example
  agree          text    html    latex   example
  anxiety        text    html    latex   example
  diagnoses      text    html    latex   example
  finn           text    html    latex   example
  icc            text    html    latex   example
  iota           text    html    latex   example
  kappa2         text    html    latex   example
  kappam.fleiss  text    html    latex   example
  kappam.light   text    html    latex   example
  kendall        text    html    latex   example
  maxwell        text    html    latex   example
  meancor        text    html    latex   example
  meanrho        text    html    latex   example
  print.icclist  text    html    latex   example
  print.irrlist  text    html    latex   example
  robinson       text    html    latex   example
  video          text    html    latex   example
** building package indices ...
* DONE (irr)

The downloaded packages are in
/tmp/RtmpsZJJ1h/downloaded_packages



But it won't load. Any ideas?

> library(irr)
Error in parse(n = -1, file = file) : invalid multibyte character in
mbcs_get_next
Error: unable to load R code in package 'irr'


Thanks.

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] gstat no longer handles 1-D data

2006-05-05 Thread Rick Bilonick
Has anyone else noticed that gstat no longer handles 1-D data?

> library(gstat)
> coordinates(pm.df) <- ~x
Error in validObject(.Object) : invalid class "SpatialPoints" object:
spatial.dimension should be 2 or more
> variogram(pm~1,~x,data=pm.df)
Error in validObject(.Object) : invalid class "SpatialPoints" object:
spatial.dimension should be 2 or more

The variogram call used to work. It stopped working when I upgraded to
the latest version of gstat.

Rick B.
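
A workaround sometimes suggested for sp's 2-D requirement is to add a constant dummy second coordinate, so the inter-point distances reduce to distances along x. This is a sketch under that assumption, using the objects named in the post; it is not a confirmed fix:

```r
library(sp)
library(gstat)
pm.df$dummy <- 0                       # constant second coordinate
coordinates(pm.df) <- ~ x + dummy      # now a valid 2-D SpatialPointsDataFrame
v <- variogram(pm ~ 1, data = pm.df)   # distances are |x_i - x_j|
plot(v)
```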

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Adding elements in an array where I have missing data.

2006-05-01 Thread Rick Bilonick
On Mon, 2006-05-01 at 13:26 -0400, John Kane wrote:
> This is a simple question but I cannot seem to find
> the answer.
> I have two vectors but with missing data and I want to
> add them together with
> the NA's being ignored.
> 
> Clearly I need to get the NA ignored.  na.action?
> 
> I have done some searching and cannot get na.action to
> help.
> This must be a common enough issue that the answer is
> staring me in the face
> but I just don't see it.
> 
> Simple example
> a <- c(2, NA, 3)
> b <- c(3,4, 5)
> 
> What I want is
> c <- a + b where
> c is (5, 4, 8)
> 
> However I get
> c is (5, NA, 8)
> 
> What am I missing?  Or do I somehow need to recode the
> NA's as missing?  
> 
> Thanks

Your example shows that you are not ignoring the NA's but treating them
as zeroes. This may or may not make sense depending on the application.

Your comment "Or do I somehow need to recode the NA's as missing" is
puzzling. NA denotes a missing value.

If you insist on treating NA's as zeroes, then just replace the NA's
with zeroes before adding:

> a[is.na(a)] <- 0

Rick B.
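
If the goal really is to treat the NAs as zeroes when summing element-wise, rowSums() with na.rm = TRUE does it in one step, without modifying a:

```r
a <- c(2, NA, 3)
b <- c(3, 4, 5)
rowSums(cbind(a, b), na.rm = TRUE)   # 5 4 8
```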


> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html
-- 
Assistant Professor
University of Pittsburgh School of Medicine
Department of Ophthalmology
412 648 9138
BST S 207

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] logistic regression model with non-integer weights

2006-04-16 Thread Rick Bilonick
On Sun, 2006-04-16 at 19:10 +0100, Ramón Casero Cañas wrote:

> Thanks for your suggestions, Michael. It took me some time to figure out
> how to do this in R (as trivial as it may be for others). Some comments
> about what I've done follow, in case anyone is interested.
> 
> The problem is a) abnormality is rare (Prevalence=14%) and b) there is
> not much difference in the independent variable between abnormal and
> normal. So the logistic regression model predicts that P(abnormal) <=
> 0.4. I got confused with this, as I expected a cut-off point of P=0.5 to
> decide between normal/abnormal. But you are right, in that another
> cut-off point can be chosen.
> 
> For a cut-off of e.g. P(abnormal)=0.15, Sensitivity=65% and
> Specificity=52%. They are pretty bad, although for clinical purposes I
> would say that Positive/Negative Predictive Values are more interesting.
> But then PPV=19% and NPV=90%, which isn't great. As an overall test of
> how good the model is for classification I have computed the area under
> the ROC, from your suggestion of using Sensitivity and Specificity.
> 
> I couldn't find how to do this directly with R, so I implemented it
> myself (it's not difficult but I'm new here). I tried with package ROCR,
> but apparently it doesn't cover binary outcomes.
> 
> The area under the ROC is 0.64, so I would say that even though the
> model seems to fit the data, it just doesn't allow acceptable
> discrimination, no matter what the cut-off point.
> 
> 
> I have also studied the effect of low prevalence. For this, I used
> option ran.gen in the boot function (package boot) to define a function
> that resamples the data so that it balances abnormal and normal cases.
> 
> A logistic regression model is fitted to each replicate, to a parametric
> bootstrap, and thus compute the bias of the estimates of the model
> coefficients, beta0 and beta1. This shows very small bias for beta1, but
> a rather large bias for beta0.
> 
> So I would say that prevalence has an effect on beta0, but not beta1.
> This is good, because a common measure like the odds ratio depends only
> on beta1.
> 
> Cheers,
> 
> -- 
> Ramón Casero Cañas
> 
> http://www.robots.ox.ac.uk/~rcasero/wiki
> http://www.robots.ox.ac.uk/~rcasero/blog
> 

The Epi package has function ROC that draws the ROC curve and computes
the AUC among other things.

Rick B.
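
A minimal sketch of the Epi::ROC() call mentioned above; resp, x, and d are placeholder names, and the argument list should be checked against ?ROC:

```r
library(Epi)
# Draws the ROC curve and reports the area under it (AUC), among other
# summaries; 'form' takes a glm()-style formula for a binary outcome.
ROC(form = resp ~ x, data = d, plot = "ROC")
```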

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] Error Message from Variogram.lme Example

2006-03-13 Thread Rick Bilonick
When I try to run the example from Variogram with an lme object, I get
an error (although summary works):

R : Copyright 2005, The R Foundation for Statistical Computing
Version 2.2.1  (2005-12-20 r36812)
ISBN 3-900051-07-0
...

>  fm1 <- lme(weight ~ Time * Diet, BodyWeight, ~ Time | Rat)
Error: couldn't find function "lme"
>  Variogram(fm1, form = ~ Time | Rat, nint = 10, robust = TRUE)
Error: couldn't find function "Variogram"
> library(nlme)
>  fm1 <- lme(weight ~ Time * Diet, BodyWeight, ~ Time | Rat)
>  Variogram(fm1, form = ~ Time | Rat, nint = 10, robust = TRUE)
Error in "$<-.data.frame"(`*tmp*`, "n.pairs", value = c(160, 0, 160, 16,  :
        replacement has 10 rows, data has 9


> summary(fm1)
Linear mixed-effects model fit by REML
 Data: BodyWeight
       AIC      BIC    logLik
  1171.720 1203.078 -575.8599

Random effects:
 Formula: ~Time | Rat
 Structure: General positive-definite, Log-Cholesky parametrization
            StdDev     Corr
(Intercept) 36.9390723 (Intr)
Time         0.2484113 -0.149
Residual     4.4436052

Fixed effects: weight ~ Time * Diet
                Value Std.Error  DF   t-value p-value
(Intercept) 251.65165 13.094025 157 19.218816  0.0000
Time          0.35964  0.091140 157  3.946019  0.0001
Diet2       200.66549 22.679516  13  8.847873  0.0000
Diet3       252.07168 22.679516  13 11.114509  0.0000
Time:Diet2    0.60584  0.157859 157  3.837858  0.0002
Time:Diet3    0.29834  0.157859 157  1.889903  0.0606
 Correlation:
   (Intr) Time   Diet2  Diet3  Tm:Dt2
Time   -0.160
Diet2  -0.577  0.092
Diet3  -0.577  0.092  0.333
Time:Diet2  0.092 -0.577 -0.160 -0.053
Time:Diet3  0.092 -0.577 -0.053 -0.160  0.333

Standardized Within-Group Residuals:
Min  Q1 Med  Q3 Max
-3.25558796 -0.42196874  0.08229384  0.59933559  2.78994477

Number of Observations: 176
Number of Groups: 16


Information on package 'nlme'

Description:

Package:   nlme
Version:   3.1-68.1
Date:  2006-01-05


Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Error in fun(...) : couldn't find function "assignInNamespace"

2006-02-06 Thread Rick Bilonick
I started my laptop, opened a shell and get this error message:

R : Copyright 2005, The R Foundation for Statistical Computing
Version 2.2.1  (2005-12-20 r36812)
ISBN 3-900051-07-0

lines deleted

Error in fun(...) : couldn't find function "assignInNamespace"
Error: .onLoad failed in 'loadNamespace' for 'Matrix'
Fatal error: unable to restore saved data in .RData

Here is some directory information:

-rw-rw-r--   1 chippy chippy 1222666 Feb  2 07:20 .RData
-rw---   1 chippy chippy   21897 Feb  2 07:20 .Rhistory

Is .RData corrupted? I can run R in other directories.

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] factorial anova

2005-12-25 Thread Rick Bilonick
On Sun, 2005-12-25 at 23:01 -0300, Petra Wallem wrote:
> Hello everybody, I am trying to do a factorial ANOVA analysis
> following this model:
> 
> model<-anova(lm(responsevariable~factorA*factorB))
> model<-anova(lm(luz$dosel~luz$estado*luz$Bosque))
> 
>               Df Sum Sq Mean Sq F value    Pr(>F)
> estado         1 6931.1  6931.1 41.6455 7.974e-06 ***
> Bosque         1   36.6    36.6  0.2197    0.6456
> estado:Bosque  1   36.6    36.6  0.2197    0.6456
> Residuals     16 2662.9   166.4
> 
> Strange is that the sum of squares of the factor Bosque are identical to
> the SS of the interaction, and are non significant. But when I plot the
> data, the interaction surely is significant...
> 
> my data.frame looks as follows:
> 
> Bosque   estado  lux  dosel
> 1   deciduo pristino  703 88.56
> 2   deciduo pristino  800 90.64
> 3   deciduo pristino  150 95.84
> 4   deciduo pristino  245 87.52
> 5   deciduo pristino 1300 91.68
> 6   deciduo   activo 1900 26.16
> 7   deciduo   activo  840 59.44
> 8   deciduo   activo  323 69.84
> 9   deciduo   activo  112 75.04
> 10  deciduo   activo 1360 51.12
> 11 siemprev   activo  900 41.76
> 12 siemprev   activo  480 65.68
> 13 siemprev   activo  350 78.16
> 14 siemprev   activo  350 37.60
> 15 siemprev   activo  272 58.40
> 16 siemprev pristino  100 94.80
> 17 siemprev pristino   60 95.84
> 18 siemprev pristino   50 97.92
> 19 siemprev pristino  270 94.80
> 20 siemprev pristino  110 97.92
> 
> Does somebody understand what I am doing wrong? I have been
> searching the R site search, but didn't find many postings on
> factorial ANOVA.
> 
> Thanks a lot in advance for your comments
> Petra
> 
> __
> R-help@stat.math.ethz.ch mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

It would help if you would use the "dump" function and paste the output
into an e-mail:

> dump("luz","")

Also, it's much easier to use "data=luz" as an argument in the lm
function rather than appending the data frame name to each variable. I
don't think that "model" contains the lm model output. It looks like you
are saving the anova table.
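
A sketch of the suggested style, keeping the fitted model and the ANOVA table separate (assuming the luz data frame from the post):

```r
fit <- lm(dosel ~ estado * Bosque, data = luz)  # keep the lm fit itself
anova(fit)                                      # the ANOVA table
summary(fit)                                    # coefficients, R-squared, etc.
# To share the data on the list in a reproducible form:
dump("luz", "")
```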

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Strange Estimates from lmer and glmmPQL

2005-12-01 Thread Rick Bilonick
On Thu, 2005-12-01 at 10:13 +, Prof Brian Ripley wrote:
> On Thu, 1 Dec 2005, Martin Maechler wrote:
> 
> >>>>>> "Rick" == Rick Bilonick <[EMAIL PROTECTED]>
> >>>>>> on Wed, 30 Nov 2005 23:11:07 -0500 writes:
> >
> >Rick> I'm trying to fit a generalized mixed effects model to a data set 
> > where
> >Rick> each subject has paired categorical responses y (so I'm trying to 
> > use a
> >Rick> binomial logit link). There are about 183 observations and one
> >Rick> explanatory factor x. I'm trying to fit something like:
> >
> >Rick> (lmer(y~x+(1|subject)))
> >
> > If you want binomial you have to give ' family = binomial '  to lmer !
> 
> I noticed that, but he did say `something like'.  You need to specify the 
> family for gee and glmmPQL too.
> 
> I think the moral is to give the exact code you use.
> 
> > Further, always using  'data = ..' is generally recommended practice,
> > and I'd rather assign the result of lmer(.) than just print it,
> > i.e. (if you want to print too):
> >
> > (model1 <- lmer(y ~ x + (1|subject), data = , family = 
> > binomial))
> >
> > and in your case  y should be the 2-column matrix
> >   cbind(successes, failures)
> 
> Not necessarily.  If these are binary responses, you can just give y as 
> shown.
Sorry, here is the more complete information:

(lmer(y~x+(1|subject),data=mydata,family=binomial))

y consists of zeroes and ones. At the time I was able to post I was
working from memory unfortunately. I did use "family=binomial" for all
the models. I get the same results whether I assign the results or not.
I was just trying to give the basic syntax for the model. Sorry for any
confusion.

As I said, I think it has to do with the fact that the responses are so
highly correlated. The same data fails to converge when using SAS and
the glimmix macro (I don't yet have access to the new "proc glimmix").
I also made up some artificial data sets and whenever the paired
responses were identical the same problem appeared. Unfortunately I
can't share the data sets.

Do I need to specify the correlation structure explicitly? I thought my
data set was similar to others that used the same type of model and
functions successfully.

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] Strange Estimates from lmer and glmmPQL

2005-11-30 Thread Rick Bilonick
I'm trying to fit a generalized mixed effects model to a data set where
each subject has paired categorical responses y (so I'm trying to use a
binomial logit link). There are about 183 observations and one
explanatory factor x. I'm trying to fit something like:

(lmer(y~x+(1|subject)))

I also tried fitting the same type of model using glmmPQL from MASS. In
both cases, I get a t-statistic that is huge (in the thousands) and a
tiny p-value. (Just for comparison, if I use lrm ignoring the clustering
I get a t-statistic around 3 or so and what appears to be a reasonable
estimated coefficient which is very close to the estimated coefficient
using just one observation from each subject.)

Most of the subjects have two responses and in almost all cases the
responses are identical although the explantory factor values are not
always identical for each subject.

If I use geeglm from geepack, I get reasonable estimates close to the
naive model results.
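
For reference, a hedged sketch of the kind of geeglm() call described here; y, x, subject, and mydata are the generic names used in this thread, and geepack expects the data to be sorted by the id variable:

```r
library(geepack)
fit.gee <- geeglm(y ~ x, id = subject, data = mydata,
                  family = binomial, corstr = "exchangeable")
summary(fit.gee)   # reports robust (sandwich) standard errors
```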

I also tried using the SAS glimmix macro to fit a generalized mixed
model and the routine does not converge.

Why does geeglm appear to work but not lmer and glmmPQL? Is this likely
to be due to my particular data set?

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Using pakage foreign and to import SAS file

2005-11-14 Thread Rick Bilonick
On Mon, 2005-11-14 at 22:55 +, Walter R. Paczkowski wrote:
> Hi,
> 
> I'm struggling with foreign to import a SAS file.  The file, for lack of 
> imagination, is d.sas7bdat and is in my root directory (c:\) under Windows 
> XP.  When I type
> 
> read.ssd("c:\\", "d")
> 
> which I think I'm supposed to enter, I get
> 
> SAS failed.  SAS program at 
> C:\DOCUME~1\Owner\LOCALS~1\Temp\Rtmp32758\file19621.sas 
> The log file will be file19621.log in the current directory
> NULL
> Warning messages:
> 1: "sas" not found 
> 2: SAS return code was -1 in: read.ssd("c:\\", "d") 
> 
> I have SAS 9.1 running on my computer so SAS is there.  What am I doing wrong?
> 
> Thanks,
> 
> Walt
> 
> 
I've not used read.ssd but I've had good results with sas.get in Hmisc.

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Hmisc latex function

2005-10-12 Thread Rick Bilonick
Charles Dupont wrote:

> Marc Schwartz (via MN) wrote:
>
>> On Tue, 2005-10-11 at 10:01 -0400, Rick Bilonick wrote:
>>
>>> I'm using R 2.2.0 on an up-to-date version of Fedora Core 4 with the
>>> latest version of Hmisc. When I run an example from the latex 
>>> function I
>>> get the following:
>>>
>>>
>>>> x <- matrix(1:6, nrow=2, dimnames=list(c('a','b'),c('c','d','enLine
>>>
>>>
>>> 2')))
>>>
>>>> x
>>>
>>>
>>>   c d enLine 2
>>> a 1 3        5
>>> b 2 4        6
>>>
>>>> latex(x)   # creates x.tex in working directory
>>>
>>>
>>> sh: line 0: cd: “/tmp/Rtmpl10983”: No such file or directory
>>> This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4)
>>> entering extended mode
>>> ! I can't find file `“/tmp/Rtmpl10983/file643c9869”'.
>>> <*> “/tmp/Rtmpl10983/file643c9869”
>>>
>>> Please type another input file name: q
>>> (/usr/share/texmf/tex/latex/tools/q.tex
>>> LaTeX2e <2003/12/01>
>>> Babel and hyphenation patterns for american, french, german, ngerman,
>>> bahasa, basque, bulgarian, catalan, croatian, czech, danish, dutch,
>>> esperanto, estonian, finnish, greek, icelandic, irish, italian, latin,
>>> magyar, norsk, polish, portuges, romanian, russian, serbian, slovak,
>>> slovene, spanish, swedish, turkish, ukrainian, nohyphenation, loaded.
>>> File ignored
>>> xdvi-motif.bin: Fatal error: /tmp/Rtmpl10983/file643c9869.dvi: No such
>>> file.
>>>
>>>
>>> How can I fix this?
>>>
>>> Rick B.
>>
>>
>>
>> I get the same results, also on FC4 with R 2.2.0.
>>
>> I am cc:ing Frank here for his input, but a quick review of the code and
>> created files suggests that there may be conflict between the locations
>> of some of the resultant files during the latex system call. Some files
>> appear in a temporary R directory, while others appear in the current R
>> working directory.
>>
>> For example, if I enter the full filename:
>>  
>>   /tmp/RtmpC12100/file643c9869.tex
>>
>> at the latex prompt, I get:
>>
>>
>>> latex(x)
>>
>>
>> sh: line 0: cd: “/tmp/RtmpC12100”: No such file or directory
>> This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4)
>> entering extended mode
>> ! I can't find file `“/tmp/RtmpC12100/file643c9869”'.
>> <*> “/tmp/RtmpC12100/file643c9869”
>>
>> Please type another input file name: *** loading the extensions
>> datasource
>> /tmp/RtmpC12100/file643c9869.tex
>> (/tmp/RtmpC12100/file643c9869.tex
>> LaTeX2e <2003/12/01>
>> Babel and hyphenation patterns for american, french, german, ngerman,
>> bahasa, basque, bulgarian, catalan, croatian, czech, danish, dutch,
>> esperanto, estonian, finnish, greek, icelandic, irish, italian, latin,
>> magyar, norsk, polish, portuges, romanian, russian, serbian, slovak,
>> slovene, spanish, swedish, turkish, ukrainian, nohyphenation, loaded.
>> (/usr/share/texmf/tex/latex/base/report.cls
>> Document Class: report 2004/02/16 v1.4f Standard LaTeX document class
>> (/usr/share/texmf/tex/latex/base/size10.clo))
>> (/usr/share/texmf/tex/latex/geometry/geometry.sty
>> (/usr/share/texmf/tex/latex/graphics/keyval.sty)
>> (/usr/share/texmf/tex/latex/geometry/geometry.cfg))
>> No file file643c9869.aux.
>> [1] (./file643c9869.aux) )
>> Output written on file643c9869.dvi (1 page, 368 bytes).
>> Transcript written on file643c9869.log.
>> xdvi-motif.bin: Fatal error: /tmp/RtmpC12100/file643c9869.dvi
>
>
>
> Hmm, it works for me.  Interesting.
>
> It almost looks like the temp dir is not being created, but thats not 
> possible because R does that.  It might be a Unicode issue with you 
> system shell.  Can you run this statement in R
>
> sys(paste('cd',dQuote(tempdir()),";",
> "echo Hello BOB > test.test",
> ";","cat test.test"))
>
>
> What version of Hmisc are you using?  What local are you using?
>
> Charles
>
I'm using Hmisc 3.0-7 (2005-09-15). I did an update.packages right after 
installing R 2.2.0. Here is the output I get:

sh: line 0: cd: "/tmp/RtmpSp4207": No such file or directory
[1] "Hello BOB"

Thanks.

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

[R] Hmisc latex function

2005-10-11 Thread Rick Bilonick
I'm using R 2.2.0 on an up-to-date version of Fedora Core 4 with the
latest version of Hmisc. When I run an example from the latex function I
get the following:

> x <- matrix(1:6, nrow=2, dimnames=list(c('a','b'),c('c','d','enLine
2')))
> x
  c d enLine 2
a 1 3        5
b 2 4        6
> latex(x)   # creates x.tex in working directory
sh: line 0: cd: “/tmp/Rtmpl10983”: No such file or directory
This is pdfeTeX, Version 3.141592-1.21a-2.2 (Web2C 7.5.4)
entering extended mode
! I can't find file `“/tmp/Rtmpl10983/file643c9869”'.
<*> “/tmp/Rtmpl10983/file643c9869”

Please type another input file name: q
(/usr/share/texmf/tex/latex/tools/q.tex
LaTeX2e <2003/12/01>
Babel and hyphenation patterns for american, french, german, ngerman,
bahasa, basque, bulgarian, catalan, croatian, czech, danish, dutch,
esperanto, estonian, finnish, greek, icelandic, irish, italian, latin,
magyar, norsk, polish, portuges, romanian, russian, serbian, slovak,
slovene, spanish, swedish, turkish, ukrainian, nohyphenation, loaded.
File ignored
xdvi-motif.bin: Fatal error: /tmp/Rtmpl10983/file643c9869.dvi: No such
file.


How can I fix this?

Rick B.

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html

Re: [R] Box-Cox / data transformation question

2005-01-30 Thread Rick Bilonick
Landini Massimiliano wrote:
On Tue, 25 Jan 2005 15:42:45 +0100, you wrote:
|=[:o)  Dear R users,
|=[:o)  
|=[:o)  Is it reasonable to transform data (measurements of plant height) to the 
|=[:o)  power of 1/4? I've used boxcox(response~A*B) and lambda was close to 0.25.
|=[:o)  

IMHO (I'm far from being a statistician), no. I think the Box-Cox procedure should
be a help to people who have no experience in transforming data. In fact, data
transformation includes other methods that the Box-Cox procedure can't perform, such
as the rank transformation, the arcsine square root of a percentage, the inverse
hyperbolic sine, log-log, probit, normit, and logit.
Transformation is not simply the application of a formula to a mass of data. It is
preferable to decide on an appropriate transformation by understanding in depth how
and from where the data were collected.
|=[:o)  Regards,
|=[:o)  Christoph
|=[:o)  
|=[:o)  __
|=[:o)  R-help@stat.math.ethz.ch mailing list
|=[:o)  https://stat.ethz.ch/mailman/listinfo/r-help
|=[:o)  PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


-
Landini dr. Massimiliano
Tel. mob. (+39) 347 140 11 94
Tel./Fax. (+39) 051 762 196
e-mail: numero (dot) primo (at) tele2 (dot) it
-
Hanggi's Law: The more stupid your research is, the more it will be read and approved.
Corollary to Hanggi's Law: The more important your research is, the less it will be
understood.
 

Why are you using a double square root transformation? Is the 
transformation for the response variable? Transformation is one way to 
help ensure that the error distribution is at least approximately 
normal, so if this is the reason, it certainly could make sense. There 
is no unique scale for making measurements; we choose a scale that helps 
us analyze the data appropriately.
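A minimal sketch of the lambda-estimation step being discussed, using MASS::boxcox
on simulated data (the variable names and the data are illustrative, not from the
original post):

```r
## Estimate a Box-Cox lambda with MASS::boxcox and, if it lands near
## 0.25, analyze the fourth-root-transformed response.
library(MASS)

set.seed(1)
height <- rgamma(100, shape = 2, scale = 3)  # hypothetical plant heights
A      <- gl(2, 50)                          # hypothetical factor

fit <- lm(height ~ A)
bc  <- boxcox(fit, lambda = seq(-1, 1, by = 0.05), plotit = FALSE)
lambda.hat <- bc$x[which.max(bc$y)]          # profile-likelihood maximizer

## If lambda.hat is close to 0.25, a fourth-root transformation is defensible:
fit4 <- lm(height^(1/4) ~ A)
```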

Rick B.


Re: [R] X11/ graphics problem

2003-11-19 Thread Rick Bilonick
Stephen Henderson wrote:

Hello

I'm moving from using R 1.8 for Windows to using Linux (Red Hat version 9)
and I cannot get any graphics. However, everything else (maths, models,
equations) is fine, and much quicker. I built from source following the
commands in the install manual.
 



I've also had a look at the requirements under R/bin/linux/redhat/9/i386,
but I'm having some trouble mapping these to the GUI installation names,
though XFree86, which I presume contains libX11.so.6, is installed. This
document also mentions a preference for 'gnorpm'. What's that?
Has anyone else had this problem? Have an idea of what is likely missing, or
how to reconfigure R?
 

I have R 1.8 installed on 3 different systems running RH 9 using the 
precompiled binaries. R makes plots on all of them. One of the systems 
is a laptop (Toshiba Satellite 2535cds). For some reason on this system 
there is a readline problem and I have to use:

> R --no-readline

in order to run R but the graphics work fine. Before running 1.8 on this 
machine, I had compiled 1.7.1 (it took many, many hours). R would run BUT 
I still had to use the --no-readline option AND the "x11" function would 
not work. (As a work around, I just used "pdf" and used the "system" 
function with gv to view the plots.) Since then I've installed 1.8 using 
the precompiled rpm available from the CRAN mirrors and it works except 
for the readline problem.
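The pdf-plus-gv workaround mentioned above looks roughly like this (the file name is
illustrative, and it assumes gv is installed):

```r
## Write the plot to a PDF file instead of using the x11 device,
## then hand it to an external viewer via system().
pdf("myplot.pdf")
plot(rnorm(100))
dev.off()
system("gv myplot.pdf &")
```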

Maybe if you install the rpm your graphics will work. If not, "pdf" 
should work. (I'm assuming you have XFree86's Xwindows running.)

BTW, gnorpm is a graphical frontend to rpm, although to install the one 
rpm for R 1.8, all you need to do as root is:

> rpm -i R-1.8.0-1.i386.rpm

or

> rpm -U R-1.8.0-1.i386.rpm

if upgrading from an earlier version of R. I always like to do:

> rpm -U --test R-1.8.0-1.i386.rpm

first, just to see if there are any problems before actually upgrading. 
I'm not sure why anyone would go through the trouble and time needed for 
compiling when the appropriate rpm is readily available.

Rick B.

__
[EMAIL PROTECTED] mailing list
https://www.stat.math.ethz.ch/mailman/listinfo/r-help


[R] Transfer Function Estimation

2003-11-05 Thread Rick Bilonick
Suppose you estimate the ARIMA(p,d,q) parameters for an input x[t] using 
the arima function. Is there an easy way to apply these values to the 
output y[t] for transfer function modeling? For example, if I estimate a 
(2,1,1) ARIMA for x[t] with ar(1) = 0.5882, ar(2) = -0.01517, and ma(1) 
= -0.9688, how can I apply these to y[t] so I can then estimate the ccf 
between the two sets of pre-whitened values?
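One common recipe, sketched here on simulated series (the data and coefficients are
illustrative, not the poster's): fit the ARIMA to x[t], then refit the same model to
y[t] with the coefficients held fixed, so residuals() applies the identical filter to
both series.

```r
## Pre-whiten x[t] and y[t] with the model fitted to x, then look at
## the cross-correlation of the two whitened series.
set.seed(1)
x <- arima.sim(list(order = c(2, 1, 1), ar = c(0.59, -0.02), ma = -0.5),
               n = 200)
y <- 2 * x + rnorm(length(x))

fit.x <- arima(x, order = c(2, 1, 1))
wx <- residuals(fit.x)

## Same model applied to y: all coefficients fixed at the x-fit values
fit.y <- arima(y, order = c(2, 1, 1), fixed = coef(fit.x),
               transform.pars = FALSE)
wy <- residuals(fit.y)

ccf(wx, wy)   # ccf of the pre-whitened series
```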

Rick B.



[R] Still Cannot Install rimage in R-1.7.1 (RH 9.0) Even With fftw Installed

2003-10-07 Thread Rick Bilonick
I'm still having problems installing rimage - the installation can't 
find the fftw headers. As suggested, I installed the fftw rpm (for RH 9 
from freshrpms). It installed without any errors or warnings. Yet I get 
exactly the same error message - it can't find the fftw headers.

What do I have to do to get the headers?
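For what it's worth, on RPM-based systems the C headers usually ship in a separate
-devel package rather than the runtime rpm, so one thing worth checking (an
assumption on my part, not something confirmed in this thread) is whether fftw-devel
is installed alongside fftw:

```shell
# The runtime rpm carries the shared libraries; fftw.h normally comes
# from the -devel package. Exact file names depend on the rpm downloaded.
rpm -q fftw fftw-devel           # see which packages are present
rpm -U --test fftw-devel-*.rpm   # dry run first
rpm -U fftw-devel-*.rpm
```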

Rick B.



[R] Cannot Install rimage in R-1.7.1 (RH 9.0)

2003-10-06 Thread Rick Bilonick
The rimage install can't find the fftw header. Any idea why?

Rick B.

> install.packages("rimage")
trying URL `http://cran.r-project.org/src/contrib/PACKAGES'
Content type `text/plain; charset=iso-8859-1' length 130159 bytes
opened URL
.. .. .. .. ..
.. .. .. .. ..
.. .. ...
downloaded 127Kb
trying URL `http://cran.r-project.org/src/contrib/rimage_0.5-1.tar.gz'
Content type `application/x-tar' length 80044 bytes
opened URL
.. .. .. .. ..
.. .. 
downloaded 78Kb
* Installing *source* package 'rimage' ...
checking for g++... g++
checking for C++ compiler default output... a.out
checking whether the C++ compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking for gcc... gcc
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
checking how to run the C preprocessor... gcc -E
checking for egrep... grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking fftw.h usability... no
checking fftw.h presence... no
checking for fftw.h... no
configure: error: Sorry, can't find fftw header
ERROR: configuration failed for package 'rimage'
Delete downloaded files (y/N)?
The packages are in /tmp/Rtmp14443/Rinstdir327b23c6
Warning message:
Installation of package rimage had non-zero exit status in: 
install.packages("rimage")
