Re: [R] Abundance data ordination in R

2007-04-01 Thread José Rafael Ferrer Paris
There are many ways to do this, really. For example, if you use
constrained (~ canonical) correspondence analysis, the distance measure
between sites is the chi-square distance and absences are not informative to
the analysis. Or you can use an ecological distance measure (similarity
indices like Soerensen, Bray-Curtis, Jaccard, and others) and perform
principal coordinates analysis (= multidimensional scaling), etc. Read the
documentation and tutorials for the packages vegan, ade4 and labdsv.
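
A minimal sketch with vegan, using its built-in varespec data set as a
stand-in for your own site-by-species abundance matrix (treat this only as
an illustration of the two approaches mentioned above):

library(vegan)
data(varespec)                                 # sites as rows, species as columns
ord.ca  <- cca(varespec)                       # correspondence analysis (chi-square distance)
d.bray  <- vegdist(varespec, method = "bray")  # Bray-Curtis dissimilarity
ord.pco <- cmdscale(d.bray, k = 2, eig = TRUE) # principal coordinates analysis
plot(ord.ca)
plot(ord.pco$points, xlab = "PCoA 1", ylab = "PCoA 2")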

You might start your search at the page of Jari Oksanen:
http://cc.oulu.fi/~jarioksa/softhelp/vegan.html
or the one from Dave Roberts:
http://ecology.msu.montana.edu/labdsv/R/
The vegan tutorial was useful for me when learning to use vegan:
http://cc.oulu.fi/~jarioksa/opetus/metodi/vegantutor.pdf
If you need more in-depth mathematical details, you should take a look at
Daniel Chessel's site:
http://pbil.univ-lyon1.fr/R/perso/pagechessel.html
There are plenty of PDFs available for download (however, some are
suited for beginners, while others require more background knowledge). Be
warned: there is a large variety of techniques for multivariate analysis,
with different properties and weaknesses, and the most popular
analyses are not always the most appropriate. Be sure of what you want and what
you are doing before you perform the analysis; the interpretation will
depend on the techniques applied.

I personally find that ade4 implements many different techniques but is
poorly documented and some of its functionality is somewhat hidden, while
vegan provides more information about its functions and is perfect for
getting started. I haven't used labdsv yet.
 
Hope this helps,

JR

On Sun, 01-04-2007 at 09:20 -0700, Milton Cezar Ribeiro wrote:
 Dear R-gurus
 
 I have a data.frame with abundance data for species and sites which looks
 like:
 mydf <- data.frame(
  sp1=sample(0:10,5,replace=T),
  sp2=sample(0:20,5,replace=T),
  sp3=sample(0:4,5,replace=T),
  sp4=sample(0:2,5,replace=T))
 rownames(mydf) <- paste("sites", 1:5, sep="")
 
 I would like to make an ordination analysis of these data and my worry is
 about the zeros (absence of species) in the matrix. From what I have read (Gotelli
 - A Primer of Ecological Statistics, 2004), when I have abundance data I can't
 compute Euclidean distances because the zeros mean absence of
 the species rather than counts of zero. Gotelli suggests one use principal
 coordinates analysis. I would like to hear from you what you think about this and
 which are the best packages and functions to compute my distance matrices and
 do my ordination analysis. Can I consider zero as NA in my data.frame? Is
 there a good PDF book about multivariate analysis for abundance
 data available on the web?
 
 Kind regards
 
 Miltinho
 Brazil
 



Re: [R] cex in xlab, ylab and zlab of persp

2007-03-15 Thread José Rafael Ferrer Paris
I had similar problems; it is actually very difficult to customize persp
graphics. You should try wireframe in the lattice package instead.
This reference might help to customize the wireframe plots: 
http://www.polisci.ohio-state.edu/faculty/lkeele/3dinR.pdf
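
A rough sketch of how label sizes can be set in wireframe (the data here are
made up purely to show where the cex settings go; lattice accepts a list with
a cex component for each axis label):

library(lattice)
g <- expand.grid(x = seq(-2, 2, 0.2), y = seq(-2, 2, 0.2))
g$z <- with(g, exp(-(x^2 + y^2)))
wireframe(z ~ x * y, data = g,
          xlab = list("x axis", cex = 1.5),
          ylab = list("y axis", cex = 1.5),
          zlab = list("z axis", cex = 1.5))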

JR

On Wed, 14-03-2007 at 20:40 -0700, Joseph Retzer wrote:
 I've had no success using what I think is the correct code to change the 
 default size of the x, y and z labels in a persp graph (i.e. cex.lab).   Is 
 there a particular specification to accomplish this?
 Many thanks,
 Joe Retzer
 



[R] segfault with correlation structures in nlme

2007-03-13 Thread José Rafael Ferrer Paris
Hi out there,

I am trying to fit a species accumulation curve (increase in number of
species known vs. sampling effort) for multiple regions and several
bootstrap samples. The bootstrap samples represent different
arrangements of the actual sample sequence.

I fitted a series of nlme models and everything seems OK, but since the
observations are correlated I tried to include some correlation
structure. Since the ARMA classes were not successful in reducing this
correlation, I tried spatial correlation functions with sampling effort
(measured in time units) as a distance measure. As a result I got
several segfault errors (which I don't know exactly what they mean).

I was wondering whether it was an effect of the model or the data I used, but
I was able to reproduce the error messages using the Ovary data set and
the example in the Pinheiro & Bates book:

library(nlme)
data(Ovary)
fm10var.lme <- lme(follicles ~ sin(2 * pi * Time) +
    cos(2 * pi * Time), data = Ovary,
    random = pdDiag(~ sin(2 * pi * Time)))
fm50var.lme <- update(fm10var.lme, correlation = corARMA(p = 1, q = 1))
fm10var.nlme <- nlme(follicles ~ A + B * sin(2 * pi * w * Time) +
      C * cos(2 * pi * w * Time), data = Ovary,
      fixed = A + B + C + w ~ 1,
      random = pdDiag(A + B + w ~ 1),
      start = c(fixef(fm50var.lme), 1))
plot(ACF(fm10var.nlme, maxLag = 10), alpha = .05)
fm20var.nlme <- update(fm10var.nlme, corr = corAR1(0.311))
fm30var.nlme <- update(fm10var.nlme, corr = corARMA(p = 0, q = 2))

fm60var.nlme <- update(fm10var.nlme, corr = corGaus(form = ~ Time))

 *** caught segfault ***
address 0x1075e501, cause 'memory not mapped'

Traceback:
 1: eval(expr, envir, enclos)
 2: eval(modelExpression, env)
 3: assign(modelValue, eval(modelExpression, env), envir = thisEnv)
 4: function (newPars) {if (!missing(newPars)) {for (i in
names(ind)) {assign(i, clearNames(newPars[ind[[i]]]), envir
= env)}  assign(modelValue, eval(modelExpression, env),
envir = thisEnv)}modelValue}(c(-4.11936939372863,
0.157676781855328, -0.071492289279548, -3.89562017836122,
3.06079106423361, -0.0260217128029253, -2.83650117790746,
1.62924792896056, 0.0607736472981399, -0.834700672739711,
0.07926271837803, 0.0403415470454326, -0.698687811226831,
-0.121079563050668, 0.032722843419347, 0.0404804003710554,
0.379937934267249, 0.087562808768161, 3.08430247504295,
2.08510442577037, 0.103812382483994, 1.1306260898,
-1.70151546937888, -0.043262231409, 2.37971674786747,
-1.04307471138899, 0.0217885202778396, 2.60583067635322,
-0.638495324795519, -0.137018044847307, 1.70105176178795,
-3.07372462709182, -0.0445309584408364, 12.3518546784787,
-3.22126648052618, -1.69049623029482, 0.907261039321137))
 5: .C(fit_nlme, thetaPNLS = as.double(c(as.vector(unlist(sran)),
sfix)), pdFactor = as.double(pdFactor(nlmeSt$reStruct)),
as.integer(unlist(rev(grpShrunk))), as.integer(unlist(Dims)),
as.integer(attr(nlmeSt$reStruct, settings))[-(1:3)], as.double(cF),
as.double(vW), as.integer(unlist(cD)), settings =
as.double(pnlsSettings), additional = double(NReal * (pLen + 1)),
as.integer(!is.null(correlation)), as.integer(!is.null(weights)),
nlModel, NAOK = TRUE)
 6: nlme.formula(model = follicles ~ A + B * sin(2 * pi * w * Time) +
C * cos(2 * pi * w * Time), data = Ovary, fixed = A + B + C + w ~ 1,
random = pdDiag(A + B + w ~ 1), start = c(fixef(fm50var.lme), 1),
corr = corGaus(form = ~Time))
 7: eval(expr, envir, enclos)
 8: eval(call, parent.frame())
 9: update.nlme(fm10var.nlme, corr = corGaus(form = ~Time))
10: update(fm10var.nlme, corr = corGaus(form = ~Time))

The above was under:

R version 2.4.1 (2006-12-18)
nlme Version:   3.1-79
Linux 2.6.15-27-386 (Ubuntu 6.06)

Then I tried the same on another computer and got:
 *** caught segfault ***
address 0x82e4902a, cause 'memory not mapped'
Violación de segmento (segmentation fault)

R Version 2.3.1 (2006-06-01)
nlme Version:   3.1-74
Linux 2.6.12-10-386  (Ubuntu 5.10)

I worked out the example of rats' body weights, also in the book, and it
gave no error, so the problem seems to lie in using the corLin and
corGaus functions with an nlme fit. Apparently it is neither my version of R
and nlme, nor the dataset. Can someone try to reproduce this? Or does
someone know the cause/explanation/fix?

Thanks very much 

JR Ferrer-Paris

-- 
Dipl.-Biol. JR Ferrer Paris
~~~
Laboratorio de Biología de Organismos --- Centro de Ecología
Instituto Venezolano de Investigaciones Científicas (IVIC) 
Apdo. 21827, Caracas 1020-A 
República Bolivariana de Venezuela

Tel: (+58-212) 504-1452
Fax: (+58-212) 504-1088

email: [EMAIL PROTECTED]
clave-gpg: 2C260A95



Re: [R] How to open more windows to make more graphs at once!

2007-03-07 Thread José Rafael Ferrer Paris
Dear Monireh,
try using lattice:


library(lattice)
set.seed(1234)
dat <- data.frame(months = rep(1:10, 80), upper = rnorm(800) + 1,
                  lower = rnorm(800) - 1,
                  observed = rnorm(800), best.sim = rnorm(800),
                  stname = factor(gl(80, 10)))

jpeg(filename = "Rplot%03d.jpeg")
xyplot(best.sim + observed + lower + upper ~ months | stname, dat,
       layout = c(3, 4), type = "b", auto.key = TRUE)
dev.off()

It should produce almost exactly what you want. Lattice is a very
powerful tool for creating multiple graphics. You can customize the
individual plots within the lattice using panel and prepanel functions
(see the sketch below); take a look at the documentation of the package and of
xyplot and panel.xyplot. Lattice is a little more complex than
normal plots in R, so you will have to spend more time learning
how to use its functionality, but it is worth trying.
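
For instance, a hypothetical sketch of a custom panel function, reusing the
dat object defined above (the reference line is purely illustrative):

xyplot(observed ~ months | stname, dat, layout = c(3, 4),
       panel = function(x, y, ...) {
           panel.xyplot(x, y, type = "b", ...)
           panel.abline(h = 0, lty = 2)  # dashed reference line in every panel
       })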

have a lot of fun

JR


On Wed, 07-03-2007 at 09:39 +0100, Faramarzi Monireh wrote:
 Dear R users,
 I have a data frame (test) including five columns: upper (numeric), lower
 (numeric), observed (numeric), best_sim (numeric) and stname (factor with 80
 levels, each level with a different length). Now I would like to write a short
 program to draw one graph as follows for each level of stname, but I would also
 like to draw 12 graphs at a time, for 12 levels of stname, in the same
 graphics window and save it as a jpeg file. This means at the end I will
 have 7 (80 levels/12 = 7) graphics windows and 7 jpeg files, each one with 12
 graphs (the last one with 8 graphs) for 12 levels of stname. I already
 wrote the following script to do this each time for 12 levels of stname, but I
 have to change the script each time for the next 12 levels [line 3 in the
 script, for example: for( i in levels(test$stname)[13:24] )] and I do not know
 how I can save the obtained graphs (seven graphics windows) as jpeg files
 (e.g. plot1.jpeg, plot2.jpeg and so on). As I have 45 datasets like this, it
 would be great if somebody could help me complete this script to do all of
 this for a dataset using a single script.
 Thank you very much in advance for your cooperation,
 Monireh
 
 
   
 windows(9,9)
 par(mfrow = c(3,4))
 for( i in levels(test$stname)[1:12])
 {
 data <- test[test$stname==i,]
 xx <- c(1:length(data$upper), length(data$upper):1)
 yy <- c(data$upper, rev(data$lower))
 zz <- data$observed
 tt <- data$Best_Sim
 par(lab = c(10,15,2))
 plot.jpeg <- plot(xx, yy, type="n", xlim=c(min(xx), max(xx)),
   ylim=c(min(zz,yy,tt), max(yy,zz,tt)*1.4),
   main=i, xlab="Month (1990-2002)", ylab="Discharge (m3/s)", font.axis=6)
 polygon(xx, yy, col="green", border = NA)
 lines(zz, col="blue", lwd=1.5)
 lines(tt, col="red", lwd=1.5)
 legend(length(zz)-60, max(yy,zz,tt)*1.45,
   c("Upper Limit", "Lower Limit", "Observed", "Best estimation"),
   lwd=c(10, 1, 1.7, 1.7), bty="n", col=c("green", "white", "blue", "red"))
 }
 



Re: [R] anova applied to a lme object

2007-03-07 Thread José Rafael Ferrer Paris
The variances of the random effects and the residual variance are given
by the summary function. Maybe VarCorr (in nlme) or varcomp (in ape) gives you
the answer you are looking for:

library(nlme)
library(ape)
?VarCorr
?ape
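
As a small illustration of the kind of output VarCorr gives (using the
Orthodont data shipped with nlme rather than your own model, just as a
sketch):

fm <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1 | Subject)
VarCorr(fm)   # variance of the random intercept and the residual variance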

JR
On Wed, 07-03-2007 at 13:09 +0100, Berta wrote:
 Hi R-users,
 
 when carrying out a multiple regression, say lm(y~x1+x2), we can use an 
 anova of the regression with summary.aov(lm(y~x1+x2)), and afterwards 
 evaluate the relative contribution of each variable using the global Sum of 
 Sq of the regression and the Sum of Sq of the simple regression y~x1.
 
 Now I would like to incorporate a random effect in the model, as some data
 correspond to the same region and others not:  mylme <- lme(y~x1+x2, random=
 ~1|as.factor(region)). I would like to know, if possible, what the
 contribution of each variable to the global variability is. Using anova(mylme)
 produces an anova table (without the Sum of Sq column), but I am not sure how
 I can derive the contribution of each variable from it, or even whether it
 is nonsense to try, nor can I derive a measure of how much variability is
 left unexplained.
 
 Sorry for the type of question, but I did not find a simple solution and 
 some researchers I work with love to have relative contributions to global 
 variability.
 
 Thanks a lot in advance,
 
 Berta
 
 
 
 
 
-- 
Dipl.-Biol. JR Ferrer Paris
~~~
Laboratorio de Biología de Organismos --- Centro de Ecología
Instituto Venezolano de Investigaciones Científicas (IVIC) 
Apdo. 21827, Caracas 1020-A 
República Bolivariana de Venezuela

Tel: (+58-212) 504-1452
Fax: (+58-212) 504-1088

email: [EMAIL PROTECTED]
clave-gpg: 2C260A95



Re: [R] No years() function?

2007-03-07 Thread José Rafael Ferrer Paris
From the help of weekdays:

Note:

 Other components such as the day of the month or the year are very
 easy to compute: just use 'as.POSIXlt' and extract the relevant
 component.
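
For example, something along these lines (LM and data are the objects from
your own call, so this is only a sketch):

year <- as.POSIXlt(LM)$year + 1900   # calendar year
hour <- as.POSIXlt(LM)$hour          # hour of the day
aggregate(data, by = list(year, months(LM)), FUN = length)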

Yet another option:

help(package=chron)

JR

On Wed, 07-03-2007 at 15:35, Sérgio Nunes wrote:
 Hi,
 
 I'm trying to aggregate date values using the aggregate function. For example:
 
 aggregate(data,by=list(weekdays(LM),months(LM)),FUN=length)
 
 I would also like to aggregate by year but there seems to be no
 years() function.
 Should there be one? Is there any alternative choice?
 
 Also, an hours() function would be great. Any tip on this?
 
 Thanks in advance!
 Sérgio Nunes
 
-- 
Dipl.-Biol. JR Ferrer Paris
~~~
Laboratorio de Biología de Organismos --- Centro de Ecología
Instituto Venezolano de Investigaciones Científicas (IVIC) 
Apdo. 21827, Caracas 1020-A 
República Bolivariana de Venezuela

Tel: (+58-212) 504-1452
Fax: (+58-212) 504-1088

email: [EMAIL PROTECTED]
clave-gpg: 2C260A95



Re: [R] Is there a quick way to count the number of times each element in a vector appears?

2007-03-06 Thread José Rafael Ferrer Paris
On Mon, 05-03-2007 at 22:16 -0800, Dylan Arena wrote:

 So here is my question in a nutshell:
 Does anyone have ideas for how I might efficiently process a matrix
 like that returned by a call to combinations(n, r, rep=TRUE) to
 determine the number of repetitions of each element in each row of the
 matrix?  If so, I'd love to hear them!
 
 

here is an answer in a nutshell:

library(gtools)   # combinations() comes from the gtools package
my.table <- combinations(3, 3, rep = TRUE)

## one possibility is
apply(my.table, 1, table)

## or better, in plain
table(my.table, row(my.table))

look at the help pages of 
?table
?apply
?row
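
If one row per combination is easier to read, a transposed version of the
second call gives the same counts (just a usage note):

t(table(my.table, row(my.table)))   # rows = combinations, columns = counts of each element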

 Thanks very much for your time,
 Dylan Arena
 (Statistics M.S. student)
 

that took probably one minute of my time... so never mind


-- 
Dipl.-Biol. JR Ferrer Paris
~~~
Laboratorio de Biología de Organismos --- Centro de Ecología
Instituto Venezolano de Investigaciones Científicas (IVIC) 
Apdo. 21827, Caracas 1020-A 
República Bolivariana de Venezuela

Tel: (+58-212) 504-1452
Fax: (+58-212) 504-1088

email: [EMAIL PROTECTED]
clave-gpg: 2C260A95



Re: [R] Identifying points in a plot that have duplicate values

2007-03-06 Thread José Rafael Ferrer Paris
A very simple solution is given in:

help(sunflowerplot,package=graphics)

##If you want to see it in action:
example(sunflowerplot)
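
With the x and y values from the original post, a quick sketch would be:

x <- c(0, 0, 0, 0, 0, 1, 2, 3, 4)
y <- c(1, 1, 1, 2, 2, 1, 3, 4, 5)
sunflowerplot(x, y)   # petals show how many points coincide at each location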

On Tue, 06-03-2007 at 19:55 +1100, Jim Lemon wrote:
 David Lloyd wrote:
  I have code like this: - 
  
  #--------------------------------------------------------------------
  
  x=scan()
  0 0 0 0 0 1 2 3 4
  
  y=scan()
  1 1 1 2 2 1 3 4 5
  
  plot(x,y)
  
  identify(0,1,3) #Allows me to select manually to identify co-ordinate
  (0,1) as being duplicated 3 times
  identify(0,2,2) #Allows me to select manually to identify co-ordinate
  (0,2) as being duplicated 2 times
  #--------------------------------------------------------------------
  
  Is there not a way I can automatically display if points are duplicated
  and by how many times?
  
  I thought if I 'jittered' the points ever so slightly I could get an
  idea of how many duplicates there are but with 100 points the graph
  looks very messy.
  
 Hi David.
 In the plotrix package there are a few functions that might be helpful.
 
 cluster.overplot - moves overlying points into a small cluster of up to 9
 count.overplot - displays the number of overlying points
 sizeplot - displays symbols with size relative to the number of points
 
 Jim

-- 
Dipl.-Biol. JR Ferrer Paris
~~~
Laboratorio de Biología de Organismos --- Centro de Ecología
Instituto Venezolano de Investigaciones Científicas (IVIC) 
Apdo. 21827, Caracas 1020-A 
República Bolivariana de Venezuela

Tel: (+58-212) 504-1452
Fax: (+58-212) 504-1088

email: [EMAIL PROTECTED]
clave-gpg: 2C260A95



Re: [R] The plot of qqmath

2007-03-06 Thread José Rafael Ferrer Paris
qqmath is in the lattice package, which is not compatible with the
conventional graphics parameters. Why not use qqnorm and qqline
instead?
If you want to combine qqmath with other lattice plots, you should look
at the documentation of the lattice and grid packages. If you read
carefully in

help(Lattice,package=lattice)


 Lattice plots are highly customizable via user-modifiable
 settings. However, these are completely unrelated to base graphics
 settings; in particular, changing 'par()' settings usually have no
 effect on lattice plots.

 help(Grid,package=grid)

Grid graphics provides an alternative to the standard R graphics.
 The user is able to define arbitrary rectangular regions (called
 _viewports_) on the graphics device and define a number of
 coordinate systems for each region.  Drawing can be specified to
 occur in any viewport using any of the available coordinate
 systems.

 Grid graphics and standard R graphics do not mix!

 Type 'library(help = grid)' to see a list of (public) Grid
 graphics functions.
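
If the goal is simply to place two lattice plots side by side (instead of
par(mfrow = c(1, 2))), one possibility is the split/more arguments of
print.trellis; a rough sketch with made-up data:

library(lattice)
p1 <- qqmath(~ rnorm(100))
p2 <- qqmath(~ rexp(100))
print(p1, split = c(1, 1, 2, 1), more = TRUE)   # left half of the device
print(p2, split = c(2, 1, 2, 1), more = FALSE)  # right half of the device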


On Tue, 06-03-2007 at 13:48 +0100, Serguei Kaniovski wrote:
 
 Hello,
 
 I would like to include the Q-Q plot produced by qqmath in a panel with other
 plots, say, using par(mfrow=c(1,2)). How can this be done, given that
 qqmath refreshes the plotting window and there seems to be no series
 coming out of it?
 
 Thanks
 Serguei


hope that helps.
JR
-- 
Dipl.-Biol. JR Ferrer Paris
~~~
Laboratorio de Biología de Organismos --- Centro de Ecología
Instituto Venezolano de Investigaciones Científicas (IVIC) 
Apdo. 21827, Caracas 1020-A 
República Bolivariana de Venezuela

Tel: (+58-212) 504-1452
Fax: (+58-212) 504-1088

email: [EMAIL PROTECTED]
clave-gpg: 2C260A95



[R] different random effects for each level of a factor in lme

2007-03-05 Thread José Rafael Ferrer Paris
I have an interesting lme - problem. The data is part of the Master
Thesis of my friend, and she had some problems analysing this data,
until one of her Jurors proposed to use linear mixed-effect models. I'm
trying to help her since she has no experience with R. I'm very used to
R but have very few experience with lme.

The group calls of one species of parrot were recorded at many
localities on the mainland and on islands. Within the localities the parrots
move in groups; several calls were recorded for each group, but the calls
cannot be separated by individuals.

We use the variable s1 to measure one property of the calls (the length
of the first part of the call). We are interested in explaining the
variability of the calls; one hypothesis is that the variability of calls
tends to be greater on islands compared with the mainland.

So we have:
s1   : the measure of interest
loc  : a grouping variable (locality)
grp  : a grouping variable (group) nested in loc
isla : a factor that identifies which localities are on islands
       and which are on the mainland (it is outer to loc)

I began with a simple model with fixed effects for isla (since there are
some differences in the length of s1) and nested random effects:

f00 <- lme(s1 ~ isla, data = s1.ap,
           random = ~ 1 | loc/grp)

My final model should have fixed effects for isla, different nested
random effects for both levels of isla, and a different error variance per
stratum, something like:

f11 <-
  lme(s1 ~ isla, data = s1.ap,
      random = ~ isla | loc/grp,
      weights = varIdent(form = ~ 1 | isla))

or perhaps:

f11b <-
  lme(s1 ~ isla, s1.ap,
      random = ~ isla - 1 | loc/grp,
      weights = varIdent(form = ~ 1 | isla))

Is this a valid formulation? I have seen that ~x|g1/g2 is usually used
for modelling random effects in (the intercept and slope of) covariates,
and that ~1|g1/f1 is used to model interactions between grouping factors
and treatment factors.

I fitted the above models (and a few other variants between the simpler
and the more complex ones) and found f11 to be the best model; f11 and f11b
are identical in terms of AIC or LR tests since they are the same
model with different parametrizations (I guess...).

Now, supposing I did everything right, I want to compare the
variance decomposition on islands and the mainland, so I use

> VarCorr(f11)

            Variance  StdDev   Corr
loc =       pdLogChol(isla)
(Intercept) 1643.5904 40.54122 (Intr)
islaT        962.2991 31.02095 -0.969
grp =       pdLogChol(isla)
(Intercept)  501.7315 22.39936 (Intr)
islaT        622.5393 24.95074 -0.818
Residual     547.0888 23.38993

> VarCorr(f11b)

         Variance  StdDev   Corr
loc =    pdLogChol(isla - 1)
islaI    1643.4821 40.53988 islaI
islaT     168.6514 12.98659 0
grp =    pdLogChol(isla - 1)
islaI     501.7357 22.39946 islaI
islaT     209.8698 14.48688 0
Residual  547.0871 23.38989

The variance for islands (islaI) is always greater than the one for the
mainland (islaT), as expected, and the estimates for the Intercept in f11 are
nearly equal to the estimates for islaI in f11b. However, the estimates
for islaT are completely different. It seems to me that the estimates in
f11b are the correct ones and the ones in f11 are obtained by a
reparametrization of the variance-covariance matrix. Am I right?

I want to say what percentage of the variance is explained by each
level, something like this:

> tmp <- data.frame(I = c(1643.48, 501.74, 547.09),
+                   T = c(168.65, 209.86, 0.7823097 * 547.09))
> rownames(tmp) <- c("loc", "grp", "res")
> t(t(tmp) / colSums(tmp))
            I         T
loc 0.6104349 0.2091125
grp 0.1863604 0.2602096
res 0.2032047 0.5306780

(0.7823097 was the result of varIdent for islaT)

If I compare the sum of variances in f11b for each level of isla with
the variances of the data frame I get similar results:

> colSums(tmp)
        I         T
2692.3100  806.5038
> tapply(s1.ap$s1, s1.ap$isla, var)
        I         T
2417.1361  731.8165

So I guess this is the right way to interpret the variances in the
fitted model. Or, is it not?

thanks,

JR

-- 
Dipl.-Biol. JR Ferrer Paris
~~~
Laboratorio de Biología de Organismos --- Centro de Ecología
Instituto Venezolano de Investigaciones Científicas (IVIC) 
Apdo. 21827, Caracas 1020-A 
República Bolivariana de Venezuela

Tel: (+58-212) 504-1452
Fax: (+58-212) 504-1088

email: [EMAIL PROTECTED]
clave-gpg: 2C260A95
