Re: [MORPHMET] help with "classify" in RRPP

2019-05-07 Thread Mike Collyer
Nicole,

I assume that your intention is to summarize a species probability from the 
several probabilities of specimens, if data sets are combined?  (I think you 
might have used “species” twice but meant “specimen” once, below.)

If so, there are two ways you could do this.  One would be, as you suggested, to 
summarize the distribution of posterior probabilities for a species (mean, 
median, quartiles, etc.).  The other would be to calculate something akin to 
species means and use these as test data, based on the coefficients calculated 
from training data.  It might require some thought as to what the training data 
should be, as leave one out cross-validation would not make much sense.  
Calculating the posterior probability for a species mean after using individual 
specimens to estimate the mean also does not make sense.  However, a resampling 
procedure that arbitrarily divides the specimens into training and testing 
groups, using the first to estimate coefficients and the second to obtain a 
mean, could be used to generate a confidence interval for the posterior 
classification probabilities of a particular species to its own and other 
species’ groups.

The second approach would involve some scripting.  The first approach can be 
done quickly with the by() and summary() functions, e.g.,

by(my.posterior.probs, species, summary)
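
The second (resampling) approach might be roughed out along the lines below.  This is only a sketch: it uses MASS::lda as a stand-in classifier (the interface of RRPP’s classify differs), and `Y` (an n x p matrix of shape variables), `species` (a factor of length n), and the within-species 50/50 split are all assumptions for illustration.

```r
# Sketch only: resampled training/testing splits for classifying species means.
# Assumes `Y` (n x p data matrix) and `species` (factor, length n) exist.
library(MASS)

n.iter <- 999
post <- vector("list", n.iter)
for (i in 1:n.iter) {
  # randomly assign roughly half the specimens of each species to a test set;
  # species with a single specimen stay entirely in the training set here
  test <- unlist(tapply(seq_along(species), species,
                        function(idx) sample(idx, floor(length(idx) / 2))))
  train <- setdiff(seq_along(species), test)
  # species means from the test half act as the "new" observations
  sp.t <- droplevels(species[test])
  test.means <- rowsum(Y[test, , drop = FALSE], sp.t) /
    as.vector(table(sp.t))
  fit <- lda(Y[train, , drop = FALSE], grouping = species[train])
  post[[i]] <- predict(fit, newdata = test.means)$posterior
}
# Quantiles of the values collected in `post` across iterations give
# confidence intervals for the posterior classification probabilities.
```

Summarizing the stored posteriors (e.g., with quantile()) then yields the confidence intervals described above.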

Hope that helps!
Mike

> On May 7, 2019, at 9:13 AM, Nicole Ibagón  wrote:
> 
> Dear all
> I'm working with five different datasets (lateral and dorsal view of the 
> skull and jaw) of a neotropical bat genus. My research question is whether one 
> species (described from a single sample) is a synonym of one of the other 
> species of the genus. I used the classify function of RRPP for this purpose, and 
> it answered my question, generating one posterior probability for each species 
> in each dataset. However, I would like to know if there is a way to generate a 
> single posterior probability for each species.
> Should I join all the datasets before doing the classification analysis? Or 
> should I average the posterior probabilities of all the datasets? Is there a 
> better way to do it?
> Thanks
> 
> -- 
> Nicole Estefanía Ibagón Escobar
> PhD Candidate in Ecology  - UFV (Brazil) 
> BSc Marine Biologist  - Utadeo (Colombia)
> ResearchGate 
> Curriculo CVLAC 
> 
> Curriculo lattes 
> 
> http://evolutionlbe.wix.com/lbeufv 
> 
> 
> -- 
> MORPHMET may be accessed via its webpage at http://www.morphometrics.org 
> 
> --- 
> You received this message because you are subscribed to the Google Groups 
> "MORPHMET" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to morphmet+unsubscr...@morphometrics.org 
> .



Re: [MORPHMET] Re: semilandmarks in biology

2018-11-06 Thread Mike Collyer
Andrea,

I am intrigued by your initial comment about adding covariance that was 
apparently absent.  I tend to think of the problem from the other perspective 
of not accounting for covariance that should be present.  As a thought 
experiment (that could probably be simulated, and maybe I am not correct in my 
thinking), I like to think of two landmark configurations that are the same in 
all regards except for one curve, where two groups have distinctly different 
curves but maybe would not be obviously different if an 
insufficient number of semi-landmarks (or none) were used to characterize the 
curve.  If one were to (maybe simulate this example and) use one sparse 
representation of landmarks and one dense representation, perform a 
cross-validation classification analysis, and calculate posterior 
classification probabilities (let’s assume equal sample sizes and, therefore, 
equal prior probabilities), I would expect that the posterior probabilities of 
the dense landmark configuration would better assign specimens to the 
appropriate process that generated them (i.e., their correct groups).  The 
posterior probabilities would be closer to 0 and 1 because of the “added 
covariance”, as reflected by the squared generalized Mahalanobis distances, 
based on the pooled within-group covariance.  The added covariance would be 
essential for the posterior probabilities, if the sparse configurations 
produced similar generalized distances to group means, and therefore, similar 
posterior probabilities for classification.

I’m not sure adding covariance is an issue.  To me it simply changes the 
hypothetical (null) covariance structure, which Philipp mentioned should 
probably not be assumed to be independent (isotropic).  I think your example 
might best highlight that a different multivariate normal distribution of 
residuals is to be expected with a different configuration.

Cheers!
Mike


> On Nov 6, 2018, at 12:34 PM, alcardini  wrote:
> 
> Yes, but doesn't that also add more covariance that wasn't there in
> the first place?
> Neither least squares nor minimum bending energy, which we minimize for
> sliding, is a biological model: they will reduce variance but will do
> so in ways that are totally biologically arbitrary.
> 
> In the examples I showed, sliding led to the appearance of patterns
> from totally random data and that effect was much stronger than
> without sliding.
> I advocate neither sliding nor not sliding. Semilandmarks are different
> from landmarks and more is not necessarily better. There are
> definitely some applications where I find them very useful but many
> more where they seem to be there just to make cool pictures.
> 
> As Mike said, we've already had this discussion. Besides different
> views on what to measure and why, at that time I hadn't appreciated
> the problem with p/n and the potential strength of the patterns
> introduced by the covariance created by the superimposition (plus
> sliding!).
> 
> Cheers
> 
> Andrea
> 
> On 06/11/2018, F. James Rohlf  > wrote:
>> I agree with Philipp but I would like to add that the way I think about the
>> justification for the sliding of semilandmarks is that if one were smart
>> enough to know exactly where the most meaningful locations are along some
>> curve then one should just place the points along the curve and
>> computationally treat them as fixed landmarks. However, if their exact
>> positions are to some extent arbitrary (usually the case), although still
>> along a defined curve, then sliding makes sense to me, as it minimizes the
>> apparent differences among specimens (the sliding minimizes your measure of
>> how much specimens differ from each other or, usually, from the mean shape).
>> 
>> 
>> 
>> _ _ _ _ _ _ _ _ _
>> 
>> F. James Rohlf, Distinguished Prof. Emeritus
>> 
>> 
>> 
>> Depts. of Anthropology and of Ecology & Evolution
>> 
>> 
>> 
>> 
>> 
>> From: mitte...@univie.ac.at 
>> Sent: Tuesday, November 6, 2018 9:09 AM
>> To: MORPHMET 
>> Subject: [MORPHMET] Re: semilandmarks in biology
>> 
>> 
>> 
>> I agree only in part.
>> 
>> 
>> 
>> Whether or not semilandmarks "really are needed" may be hard to say
>> beforehand. If the signal is known well enough before the study, even a
>> single linear distance or distance ratio may suffice. In fact, most
>> geometric morphometric studies are characterized by an oversampling of
>> (anatomical) landmarks as an exploratory strategy: it allows for unexpected
>> findings (and nice visualizations).
>> 
>> 
>> 
>> Furthermore, there is a fundamental difference between sliding semilandmarks
>> and other outline methods, including EFA. When establishing correspondence
>> of semilandmarks across individuals, the minBE sliding algorithm takes the
>> anatomical landmarks (and their stronger biological homology) into account,
>> while standard EFA and related techniques cannot easily combine point
>> homology with curve or surface homology. 

Re: [MORPHMET] Re: Are more semi landmarks better??

2018-11-06 Thread Mike Collyer
Philipp’s message below felt a little like a déjà vu moment.  I checked the 
Morphmet archives and sure enough, we had a similar thread back in late 
May/early June 2017.  Diego, you might want to check that thread, as a lot of 
what was discussed is relevant to your current questions.

Cheers!
Mike

> On Nov 6, 2018, at 5:33 AM, mitte...@univie.ac.at wrote:
> 
> I'd like to respond to your question because it comes up so often.
> 
> As noted by Carmelo in the other posting, a large number of variables 
> relative to the number of cases can lead to statistical problems. But often 
> it does not.
> 
> In all analyses that treat each variable separately - including the 
> computation of mean shapes and shape regressions - the number of variables 
> does NOT matter! Also in principal component analysis (PCA) and between-group 
> PCA there is NO restriction on the number of variables. However, the 
> distribution of landmarks across the organism can influence the results. 
> E.g., if one part - say the face - is covered only by a few anatomical 
> landmarks, and another part - e.g., the neurocranium - by many semilandmarks, 
> the latter one will dominate PCA results. But this holds true for all kinds 
> of landmarks and variables, not only for semilandmarks.
> 
> Analyses that involve the inversion of a covariance matrix - such as multiple 
> regression, CVA, relative eigenanalysis, reduced rank regression, and 
> parametric multivariate tests - require a clear excess of cases over 
> variables. In any truly multivariate setting (such as geometric 
> morphometrics), these analyses - if unavoidable - should ALWAYS be preceded 
> by some sort of variable reduction and/or factor analysis. Again, this is not 
> specific to semilandmarks.
> 
> Partial least squares (PLS) is somewhat in between these two groups. As shown 
> in Bookstein's 2016 paper, the singular values (maximal covariances) in PLS 
> can be strongly inflated if the number of variables is large compared to the 
> number of cases. The singular vectors, however, are more stable.
> 
> Essentially, the number of semilandmarks should be determined based on the 
> anatomical details to be captured. More semilandmarks are not "harmful," 
> perhaps just a waste of time.
> 
> Best,
> 
> Philipp Mitteroecker
> 
> 
> 
> 
>  
> 
> 
> On Monday, November 5, 2018 at 18:52:57 UTC+1, Diego Ardón wrote:
> Good day everybody, I actually have two questions here regarding semi-landmarks:
> 
> So, I was advised to use semi-landmarks. I placed them with MakeFan8, saved 
> the files as images, and then used TpsDig to place all landmarks; however, I 
> didn't make any distinction between landmarks and semi-landmarks. What 
> unsettles me is (1) that I've recently come across the term "sliding 
> semi-landmarks", which leads me to believe semi-landmarks should behave in a 
> particular way. 
> 
> The second thing that unsettles me is whether "more semi-landmarks" means a 
> better analysis. I can understand that most people wouldn't use 65 
> 

[MORPHMET] Re: Conceptual clarification of plotting shape deformation grids in geomorph

2018-11-05 Thread Mike Collyer
Igor,

Yes, you can do that, and the help page for that function has examples 
showing how.

Mike

> On Nov 5, 2018, at 5:26 AM, Igor Talijančić  wrote:
> 
> Hello everyone,
> 
> Just a question regarding the plotting of deformation grids of the 
> trajectory analysis (e.g. pupfish or plethodon data). Can shape.predictor 
> function be used for visualizing TA$pc.means since TA$pc.data corresponds to 
> PC scores obtained for Y.gpa$coords?
> 
> 
> 
> Thank you for your given time and consideration.
> 
> 
> 
> Sincerely,
> 
> Igor
> 
> 
> On Wednesday, July 25, 2018 at 14:42:41 UTC+2, javiersantos3 wrote:
> Hello Carmelo and Mike,
> 
> Thanks for the quick response! I see things more clearly now, especially with 
> the examples you have both provided. Sometimes one gets disoriented in the 
> abstractness of shape space and coding ;-P  Thanks again!
> 
> 
> Best wishes,
> Javier
> 
> 
> From: Mike Collyer >
> Sent: Wednesday, July 25, 2018 2:29:38 PM
> To: Javier Santos
> Cc: Morphomet Mailing List
> Subject: Re: Conceptual clarification of plotting shape deformation grids in 
> geomorph
>  
> Javier,
> 
> First your plotting question.  The plot.trajectory.analysis function is an S3 
> generic plot function, which means you can modify the plot as you like.  You 
> do this easiest with the points function.  Here is an example, using the help 
> page example, which hopefully makes sense for you:
> 
> data(plethodon) 
> Y.gpa <- gpagen(plethodon$land)   
> gdf <- geomorph.data.frame(Y.gpa, species = plethodon$species, site = 
> plethodon$site)
> 
> TA <- trajectory.analysis(coords ~ species*site, data=gdf)
> summary(TA, angle.type = "deg")
> plot(TA)
> # Augment plot with the following code
> points(TA$pc.data, pch=19, col = "blue") # turn all points blue
> points(TA$pc.data, pch=19, col = TA$groups) # change points to different 
> colors, by group
> 
> One can modify plots as desired, but you might need to learn how to use 
> graphical parameters in order to do it.  See the help for the function, par, 
> to know how to do that.
> 
> Second, since PC scores are Procrustes residuals (coordinates) projected onto 
> PC axes, there is a direct correspondence between an observation’s set of 
> coordinates and its PC scores.  If you perform trajectory analysis, the 
> $means object has the coordinates for the means (trajectory points).  You 
> simply have to rearrange the values with arrayspecs to generate deformation 
> grids.  The $pc.data is a matrix of PC scores whose rows correspond to the 
> coordinates in the gpagen object.  For example, TA$pc.data[5,] is a set of PC 
> scores for Y.gpa$coords[,,5].
> 
> Finally, for your last question, the function shape.predictor does exactly 
> what you seek.  The help page has examples that should help you (one 
> specifically for allometry).
> 
> Cheers!
> Mike
> 
>> On Jul 25, 2018, at 7:17 AM, Javier Santos > wrote:
>> 
>> Hello Morphometricians,
>> 
>> I was hoping someone could clarify the concept of plotting shape deformation 
>> grids from the geomorph output. I am confused at the moment because the 
>> output of most functions (eg. trajectory.analysis()) gives PC values or 
>> regression scores, while most of the plotting functions I know (eg. 
>> plotRefToTarget(), plotTangentSpace(), plotAllSpecimens()) require LM 
>> coordinates. I am sure that the conceptual framework to plot the shape 
>> deformation grids corresponding from the PC/regression values of the 
>> functions' output should not be too complicated, but I am currently lost how 
>> to do so with the coding and do not have a working example. 
>> 
>> I will use my current analysis as an example from which to work. I have 
>> run a trajectory.analysis() on a three-species sample:
>> 
>> ontogeny <- trajectory.analysis(M2d ~ 
>> species*age,f2=NULL,iter=999,seed=NULL,data=gdf)
>> 
>> and plot the results:
>> 
>> x11(); 
>> plot(ontogeny,group.cols=c("red","blue","green"),pt.scale=1.5,pt.seq.pattern=c("black","gray","white"))
>> 
>> The following code plots the trajectory in the corresponding PC1-PC2 
>> morphospace with each species' trajectory in a different color, however, 
>> although the lines are different colors, the points corresponding to each 
>> individual are grey for all species. How can I color these points by species 
>> group without exporting the data?
>> I would also like to plot the shape deformation that corresponds to each PC 
>> axis like the function p

Re: [MORPHMET] % of overlap of convex hulls in 2D scatterplots

2018-08-21 Thread Mike Collyer
Andrea,

There are two ways to do this, one precise and one less precise.  The less 
precise method is to calculate three convex hull areas: the two hulls of 
interest and one comprising the points of both. Subtract the area of the 
comprehensive hull from the sum of the two group hulls, then divide by the area 
of the comprehensive hull. This ratio tells you how much of one hull, split in 
two, will overlap compared to the area of one single hull. 

The more precise method is to determine the points shared in common by two 
hulls and calculate the convex hull area of this set, which can then be divided 
by the area found as the sum of the two separate hulls minus this overlap area. 

The difficulty would be detecting points that occur within bounds 
automatically. An algorithm can be used to ask if a point, when added to hull 
points, changes the hull area. If yes, it exists outside the hull; if no, it 
must occur within the hull boundaries. Points that change neither hull area 
are found within the boundaries of both, and thus in the region of overlap. Running 
such an algorithm on every point in a data set will determine the shared 
points. 
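
In base R, the less precise method and the point-in-hull test could be sketched roughly as follows; `g1` and `g2` are assumed two-column matrices of (x, y) scatterplot coordinates, and the shoelace formula supplies the polygon area:

```r
# Sketch, assuming `g1` and `g2` are 2-column matrices of 2D scores.
poly.area <- function(xy) {
  h <- xy[chull(xy), , drop = FALSE]   # convex hull vertices, in order
  x <- h[, 1]; y <- h[, 2]
  abs(sum(x * c(y[-1], y[1]) - c(x[-1], x[1]) * y)) / 2   # shoelace formula
}

a1 <- poly.area(g1)
a2 <- poly.area(g2)
a.both <- poly.area(rbind(g1, g2))     # comprehensive hull of both groups
overlap.ratio <- (a1 + a2 - a.both) / a.both

# Point-in-hull test from the algorithm described above: adding a point that
# lies within (or on) the hull leaves the hull area unchanged.
in.hull <- function(p, xy) {
  isTRUE(all.equal(poly.area(rbind(xy, p)), poly.area(xy)))
}
```

Applying in.hull() to every point of each group against the other group’s hull then identifies the shared points for the more precise method.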

Have fun!
Mike



Sent from my iPhone

> On Aug 21, 2018, at 6:29 PM, alcardini  wrote:
> 
> Dear All,
> please, does anyone know if there's an R script or package to compute
> the exact % of overlap of convex hulls in 2D scatterplots?
> I found something but was developed for high dimensional spaces and
> seems to compute that using an approximation. In 2D it should not be
> too difficult to get the exact %, I guess.
> 
> Thanks a lot in advance.
> Cheers
> 
> Andrea
> 
> 
> 
> -- 
> 
> Dr. Andrea Cardini
> Researcher, Dipartimento di Scienze Chimiche e Geologiche, Università
> di Modena e Reggio Emilia, Via Campi, 103 - 41125 Modena - Italy
> tel. 0039 059 2058472
> 
> Adjunct Associate Professor, School of Anatomy, Physiology and Human
> Biology, The University of Western Australia, 35 Stirling Highway,
> Crawley WA 6009, Australia
> 
> E-mail address: alcard...@gmail.com, andrea.card...@unimore.it
> WEBPAGE: https://sites.google.com/site/alcardini/home/main
> 
> FREE Yellow BOOK on Geometric Morphometrics:
> http://www.italian-journal-of-mammalogy.it/public/journals/3/issue_241_complete_100.pdf
> 
> ESTIMATE YOUR GLOBAL FOOTPRINT:
> http://www.footprintnetwork.org/en/index.php/GFN/page/calculators/
> 
> 




Re: [MORPHMET] help us in multifactor Manova

2018-05-18 Thread Mike Collyer
Mahen,

I think the issue is that you might not have all levels of Location matched 
with all levels of Stages.  It appears you have 5 stages and 2 locations.  For 
the model that worked, you will notice 5 rows in your ANOVA table.  In the one 
that didn’t, it was attempting to create a table with 9 rows, which would 
include the 4 interactions created by your new model.  (Sorry I did not catch 
this before.)  The error implies that the number of SS items and df items do 
not match, which suggests to me that two of the interactions do not work, 
based on your sample.

You might consider the following,

model.matrix( ~ Csize * ind$Stages * ind$Location)

This will be a large matrix.  Check the matrix for columns of all 0s.  If you 
find that, it suggests that you are missing some factor interactions.  If this 
is not the issue, then maybe you could send me your data and R script and I’ll 
try to figure out if there is a different problem.
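
Both checks could be run along these lines (a hedged sketch; the object names Csize, ind$Stages, and ind$Location are taken from the thread and assumed to exist):

```r
# Any zero cell here marks a missing Stages x Location combination, which
# would make the corresponding interaction terms inestimable.
table(ind$Stages, ind$Location)

# The design-matrix check suggested above: columns of all 0s flag the problem.
mm <- model.matrix(~ Csize * ind$Stages * ind$Location)
which(colSums(abs(mm)) == 0)   # indices of all-zero columns, if any
```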

Best,
Mike

> On May 18, 2018, at 2:56 AM, mahendiran mylswamy <mahenr...@gmail.com> wrote:
> 
> My sample size is 147, not either 9 or 7 in any case. I agree that the 
> computer displays these numbers; I don't know why!
> 
> Nevertheless, in this analysis the number of specimens (dependent shape 
> variable) is 147, obviously with an equal number of observations for each 
> independent variable (here, we used two factors, ind$Stages and ind$Location).
> 
>  It is interesting to note that I am getting good results using single-factor 
> MANOVA of the dependent shape variable over either ind$Stages or ind$Location 
> at a given time, with or without interactions. It even works fine 
> with more factors if I model without any interaction effect, and the 
> results are given below.  
> 
> > Paint<-procD.lm(coords ~ Csize + ind$Stages + ind$Location, data = 
> > Paintgeo147, iter = , RRPP = FALSE, print.progress = FALSE, int.first = 
> > FALSE)
> > Paint3factor
>  
> Call:
> procD.lm(f1 = coords ~ Csize + ind$Stages + ind$Location, iter = ,  
> RRPP = FALSE, int.first = FALSE, data = Paintgeo147, print.progress = 
> FALSE) 
>  
>  
>  
> Type I (Sequential) Sums of Squares and Cross-products
> Randomization of Raw Values used
> 1 Permutations
>  
>   Df  SS   MS  Rsq   F  Z Pr(>F)
> Csize  1 0.04688 0.046880 0.064257 13.4431 7.9867  1e-04 ***
> ind$Stages 4 0.17575 0.043938 0.240899 12.5994 8.3726  1e-04 ***
> ind$Location   1 0.01871 0.018713 0.025650  5.3662 3.2145 0.0042 ** 
> Residuals140 0.48822 0.003487   
> Total146 0.72956
> ---
> Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
>  
>  
> The main issue here is that I don’t know the proper 
> command to quantify interaction effects, for example ind$Stages:ind$Location, 
> Csize:ind$Stages, Csize:ind$Location, etc. 
> 
> Thanks for spending your time and effort!
> 
> Looking forward
> 
> Truly
> 
> Mahen
> 
> 
> 
> On Fri, May 18, 2018 at 12:05 AM, Mike Collyer <mlcoll...@gmail.com> wrote:
> OK, this error is indicating you have variables of different size - one with 
> 9 and one with 7.  This does not appear to be because of the int.first issue, 
> but rather that you introduced a new variable with a different number of 
> observations.  Also, I’m not sure what kind of result you expect with such a 
> complex model and so few observations.  It might not work, anyway, because 
> the number of independent variables exceeds the number of observations.  Just 
> a warning.
> 
> Mike
> 
> 
>> On May 17, 2018, at 2:19 PM, mahendiran mylswamy <mahenr...@gmail.com> wrote:
>> 
>> Hi Mike
>>  
>> Thanks indeed for the quick response.
>> error messages are given below 
>>  
>> > PSfact<-procD.lm(coords ~ Csize * ind$Stages * ind$Location, data = rough, 
>> > iter = 99, RRPP = FALSE, int.first = TRUE, print.progress = FALSE)
>> Error in data.frame(df, SS, MS, Rsq = R2, F = Fs) :
>>   arguments imply differing number of rows: 9, 7
>> In addition: Warning message:
>> In SS/df : longer object length is not a multiple of shorter object length
>> 
>> 
>>  
>> 
>> 
>> 
>> On Thu, May 17, 2018 at 6:22 PM, Mike Collyer <mlcoll...@gmail.com> wrote:
>> Mahen,
>> 
>> Can you be specific with the error statement?  It might not be related to 
>> the int.first use but more so how you set up 

Re: [MORPHMET] help us in multifactor Manova

2018-05-17 Thread Mike Collyer
Mahen,

Can you be specific with the error statement?  It might not be related to the 
int.first use but more so to how you set up your formula.  If ind$Stages and 
ind$Location are in the data frame, you are asking R to create a model matrix 
from data in an environment within another environment.  This might cause some 
problems.
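
One hedged way around the nested-environment issue is to place every variable in the data frame passed to procD.lm and refer to them by bare names.  A sketch, with the object names (coords, Csize, ind) assumed from the thread:

```r
# Sketch: keep all model variables inside one geomorph data frame so the
# formula does not reach into ind$ from a different environment.
gdf <- geomorph.data.frame(coords = coords, Csize = Csize,
                           Stages = ind$Stages, Location = ind$Location)
fit <- procD.lm(coords ~ Csize * Stages * Location, data = gdf,
                iter = 99, RRPP = FALSE, print.progress = FALSE)
summary(fit)
```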

Please let us know what the error says, and we can see if it is obvious.

Cheers!
Mike

> On May 17, 2018, at 8:22 AM, mahendiran mylswamy  wrote:
> 
> Dear All.
> 
> I did single- and multi-factor MANOVAs through the geomorph package in R, using 
> the commands given below, and they work fine!
> 
> Paint1factor <-procD.lm (coords ~ Csize*indep1$Stages, data = Paintgeo147, 
> iter = , RRPP = FALSE, print.progress = FALSE)
> 
> Paint3factor<-procD.lm(coords ~ Csize + ind$Stages + ind$Location, data = 
> Paintgeo3fac, iter = 99, RRPP = FALSE, print.progress = FALSE, int.first = 
> FALSE)   
> 
> HOWEVER, when I did the multifactor MANOVA to check the interaction 
> effects using the command below, though I inserted int.first = FALSE, somehow 
> it’s not working and often shows an error:
> 
> Paint3factor<-procD.lm(coords ~ Csize * ind$Stages * ind$Location, data = 
> Paintgeo3fac, iter = 99, RRPP = FALSE, print.progress = FALSE, int.first = 
> FALSE)
> 
> 
> 
> Apart from changing + to * and inserting int.first, is there any other stuff to be added?
> 
> Please help me :(
> 
> Thanks in Advance!
> 
> Truly
> 
> Mahen
> 
> 
> 
> -- 
> ***
> M Mahendiran, Ph D
> Scientist - Division of Wetland Ecology
> Salim Ali Centre for Ornithology and Natural History (SACON)
> Anaikatti (PO), Coimbatore - 641108, TamilNadu, India
> Tel: 0422-2203100 (Ext. 122), 2203122 (Direct), Mob: 09787320901
> Fax: 0422-2657088
> http://www.sacon.in/staff/dr-m-mahendiran/ 
> 
> 
> P Please consider the environment before printing this email
> 



Re: [MORPHMET] 2B PLS question

2018-01-10 Thread Mike Collyer
Angela,

The question I believe you are asking is whether there is a way to have 2B-PLS 
work for n-dimensional data (n rows in your data matrix) when you have an 
s-dimensional phylogenetic covariance matrix (s x s dimensions for s species 
covariances) and s < n.  The phylo.integration function will work if you find a 
way to generalize your s x s species covariance matrix to an n x n specimen 
covariance matrix.  The n x n matrix would certainly be singular (if that is a 
concern) but would not pose any computational hindrance to the 
phylo.integration function.

Doing this is not trivial.  Every cell of the n x n matrix would have to 
correspond to the species-species match in the s x s matrix.
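
If the s x s matrix carries species names as dimnames, that correspondence can be set up by indexing.  A hedged sketch, where `C.sp` and `sp` are assumed objects:

```r
# Sketch: expand an s x s species covariance matrix to n x n specimens.
# `C.sp` is the s x s matrix with species names as dimnames; `sp` is the
# length-n character vector of species assignments, one per specimen.
C.ind <- C.sp[sp, sp]   # each cell matches its species-species counterpart
```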

There might also be a solution that involves centering specimen vectors, based 
on fitted values for a PGLS model with just a mean (i.e., phylo-mean-centering) 
and using standard 2B-PLS (without a covariance matrix input).

I offer these possible solutions as something one CAN do but do not wish to 
suggest they are something one SHOULD do.  I think this is sailing into 
uncharted waters with respect to statistical properties.

Cheers!
Mike

> On Jan 9, 2018, at 3:19 PM, Pena, Angela  wrote:
> 
> I am interested in using integration methods such as 2B-PLS on 3DGM data from 
> 4 different taxa with intraspecific variation. I was able to successfully run 
> 2B-PLS in geomorph considering all data points as independent. But I am 
> wondering if there is a way to consider that my taxonomic groups might be 
> following different patterns of integration. I know you can use the 
> phylo.integration function in geomorph for this, but this summarizes each 
> group to a species mean, and I am interested in the intraspecific variation. 
> Is there another way to do this without summarizing each group to a mean? Any 
> help would be greatly appreciated!
> 



Re: [MORPHMET] Issue with sliders file data

2017-12-07 Thread Mike Collyer
Katherine,

It appears your issue is that the curves argument in gpagen expects a matrix 
but is receiving something in a different format (or a matrix with fewer than 3 
columns).  You mentioned rebuilding a TPS file and creating sliders, which 
sounds like you were using TPS software.  I’m not sure of the steps you took to 
go from that to usable data for gpagen.  However, if you have an object called 

class(sliders)

If the response is not “matrix” or perhaps “data.frame” (which can be coerced 
into a matrix), the gpagen function is rendered inoperable. 
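
If the diagnostic reveals a data frame, a hedged sketch of a fix (the landmark array name is a placeholder):

```r
# Hypothetical fix if `sliders` turns out to be a data frame:
sliders <- as.matrix(sliders)
dim(sliders)   # a sliders matrix for gpagen's curves argument needs 3 columns
gpagen(my.landmarks, curves = sliders)   # `my.landmarks` is a placeholder name
```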

Hopefully this helps you diagnose the issue.

Cheers!
Mike

> On Dec 6, 2017, at 8:16 PM, Katherine Morucci  wrote:
> 
> Hi There,
> 
> I recently decided to reuse some data from an old 2D GM project on swine 
> tooth morphology. When I went to rerun some of my data using my project's 
> original code, I was unable to perform a generalized Procrustes analysis using 
> gpagen without receiving the error message: Error in x[s[, 3], ]: subscript 
> out of bounds. I continued to receive this message after re-running all of my 
> old data. I'm not sure why there would be an issue with the dimensionality of 
> my data now, that did not exist before. I decided to rebuild a TPS file for 
> superimposition and creation of a new sliders file. This also did not work. 
> 
> 
> I'm not very experienced with R or the TPS suites, so it's likely that I'm 
> overlooking something very simple, but after spinning my wheels on this for a 
> few days I wanted to reach out for advice. Any help would be greatly 
> appreciated!
> 




Re: [MORPHMET] pairwise matrix of vector angles in R

2017-11-06 Thread Mike Collyer
> […] will be so whether a vector is 
> being compared to itself (i = j, hence deposited along the diagonal of the 
> matrix of angles), or to an identical vector in another row (i != j, hence 
> deposited in an off-diagonal cell). 
> 
> I think the line diag(vc) = 0 in Mike's vec.angle.matrix code is meant to do 
> the same thing for his diagonal values. 
> 
> Again, sorry for any confusion I caused. Thanks again to Mike.
> 
> Best, David
> 
> On Fri, Nov 3, 2017 at 10:46 AM, Mike Collyer <mlcoll...@gmail.com> wrote:
> [Mike's 2017-11-03 reply and the original code, quoted here in full, are 
> clipped; the complete message appears below.]

Re: [MORPHMET] pairwise matrix of vector angles in R

2017-11-03 Thread Mike Collyer
Dear David, and others,

Be careful with the code you just introduced here.  There are a couple of 
mistakes.  First, vectors need to be unit length, and your code does not 
transform the vectors to unit length.  Second, it is the arccosine of the 
vector inner product, not the cosine, that finds the angle.  Third, though 
less an error than a precision issue, there is no need to round 180/pi to 
57.2958, as you have done.  57.2958 is what R returns if you ask it to divide 
180 by pi, but the four decimal places are only for display; using 180/pi 
directly gives more precise results.  This last point is a minor issue, but 
the first two are big problems.

Here is a demo using two x,y vectors, (1,0) and (1,1), which we know have an 
angle of 45 degrees between them in a plane.  I calculate the angles with your 
method and with unit-length vectors and the arccosine of their inner products, 
which is what we use in geomorph.

> v1 <- c(1,0)
> v2 <- c(1,1)
> V <- rbind(v1, v2)
> 
> # Katz method
> vec.mat <- V
> angle.mat <- matrix(NA,nrow=nrow(vec.mat),ncol=nrow(vec.mat))
> vec.angle <- function(v1,v2){cos(t(v1)%*%v2)*57.2958}
> for(i in 1:nrow(vec.mat)) {
+   for(j in 1:nrow(vec.mat)) {
+ angle.mat[i, j] <- vec.angle(vec.mat[i,], vec.mat[j,])
+ }
+ }
> 
> angle.mat
 [,1]  [,2]
[1,] 30.95705  30.95705
[2,] 30.95705 -23.84347
> 
> geomorph:::vec.ang.matrix(V, type = "deg")
   v1 v2
v1  0 45
v2 45  0

As you can see, using the cosine without transforming the vectors to unit 
length suggests a non-zero angle between a vector and itself, in addition to 
an incorrect angle between the vectors.

You indicated that you were patching together code, so it might be an 
oversight, but it is an important distinction for others who might use the 
code.  Here are the “guts” of the geomorph functions in case somebody wants to 
reconcile the points I just made with the set-up in your code, which is similar.

> geomorph:::vec.cor.matrix
function(M) {
  M <- as.matrix(M)
  w <- 1/sqrt(diag(tcrossprod(M))) # weights used to make vectors unit length
  vc = tcrossprod(M*w)# outer-product matrix finds inner-products between 
vectors for all elements
  options(warn = -1) # turn off warnings for diagonal elements
  vc # vector correlations returned
}

> geomorph:::vec.ang.matrix
function(M, type = c("rad", "deg", "r")){
  M <- as.matrix(M)
  type <- match.arg(type)
  if(type == "r") {
vc <- vec.cor.matrix(M) # as above
  } else {
vc <- vec.cor.matrix(M)
vc <- acos(vc) # finds angles for vector correlations
diag(vc) = 0 # Make sure computational 0s are true 0s
  }
  if(type == "deg") vc <- vc*180/pi # turns radians into degrees
  vc
}

These functions have some extras that would not pertain to the general solution 
but are meant to trap warnings and round 0s for users, but they should not get 
in the way of understanding.
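A minimal standalone repair of the looped approach, using base R only (this is a sketch, not geomorph's vectorized implementation): scale each vector to unit length, take the arccosine of the inner product, and clamp the inner product to [-1, 1] to guard against rounding error.

```r
# Angle in degrees between two vectors, after unit-length scaling.
vec.angle <- function(v1, v2) {
  u1 <- v1 / sqrt(sum(v1^2))           # unit-length vectors
  u2 <- v2 / sqrt(sum(v2^2))
  ip <- min(max(sum(u1 * u2), -1), 1)  # clamp inner product for rounding error
  acos(ip) * 180 / pi                  # arccosine, then radians to degrees
}
V <- rbind(c(1, 0), c(1, 1))
angle.mat <- matrix(NA, nrow(V), nrow(V))
for (i in 1:nrow(V)) for (j in 1:nrow(V))
  angle.mat[i, j] <- vec.angle(V[i, ], V[j, ])
round(angle.mat)   # 0 on the diagonal, 45 off the diagonal
```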

Cheers!
Mike

> On Nov 3, 2017, at 11:33 AM, David Katz  wrote:
> 
> I think this does it (but please check; I quickly stuck together two 
> different pieces of code)...
> 
> # matrix where each vector is a row
> vec.mat <- ...
> #Compute group*loci matrix of mean microsatellite lengths
> angle.mat <- matrix(NA,nrow=nrow(vec.mat),ncol=nrow(vec.mat))
> # angle function (radians converted to degrees)
> vec.angle <- function(v1,v2){cos(t(v1)%*%v2)*57.2958}
> # angles-to-matrix loop
> for(i in 1:nrow(vec.mat))
> {
>   for(j in 1:nrow(vec.mat))
>   {
> # angle bw vec1 and vec2
> angle.mat[i, j] <- vec.angle(vec.mat[i,], vec.mat[j,])}
> }
> return(angle.mat)
> }
> 
> On Fri, Nov 3, 2017 at 2:38 AM, andrea cardini wrote:
> Dear All,
> please, does anyone know if there's an R package that, using a matrix with 
> several vectors (e.g., coefficients for allometric regressions in different 
> taxa), will compute the pairwise (all possible pairs of taxa) matrix of 
> vector angles?
> 
> Thanks in advance for any suggestion.
> Cheers
> 
> Andrea
> 
> 
> -- 
> 
> Dr. Andrea Cardini
> Researcher, Dipartimento di Scienze Chimiche e Geologiche, Università di 
> Modena e Reggio Emilia, Via Campi, 103 - 41125 Modena - Italy 
> 
> tel. 0039 059 2058472
> 
> Adjunct Associate Professor, School of Anatomy, Physiology and Human Biology, 
> The University of Western Australia, 35 Stirling Highway, Crawley WA 6009, 
> Australia
> 
> E-mail address: alcard...@gmail.com, andrea.card...@unimore.it
> WEBPAGE: https://sites.google.com/site/alcardini/home/main
> 
> FREE Yellow BOOK on Geometric Morphometrics: 
> http://www.italian-journal-of-mammalogy.it/public/journals/3/issue_241_complete_100.pdf
> 

Re: [MORPHMET] pairwise matrix of vector angles in R

2017-11-03 Thread Mike Collyer
Andrea,

If you already have the matrix of coefficients, geomorph has an internal 
function that can do what you seek to do.  You can try this

geomorph:::vec.ang.matrix(myMatrix, type = "r")   # for vector correlations
geomorph:::vec.ang.matrix(myMatrix, type = "rad") # for vector angles in radians
geomorph:::vec.ang.matrix(myMatrix, type = "deg") # for vector angles in degrees

This function is internal to the advanced.procD.lm function, where one could 
test the angular differences among the allometric vectors of taxa.  It assumes 
the matrix is arranged such that the rows are taxa vectors.  Please note that 
if one uses advanced.procD.lm to perform this test, the matrix of pairwise 
angles is available as part of the results.  We have an example for a 
homogeneity of slopes test in the help file that should be analogous to your 
example.  There are two ways to obtain the results:

summary(HOS) # HOS is the test name; the pairwise angle matrix is part of the 
summary
HOS$obs.slopes.angles

Cheers!
Mike


> On Nov 3, 2017, at 4:38 AM, andrea cardini  wrote:
> 
> Dear All,
> please, does anyone know if there's an R package that, using a matrix with 
> several vectors (e.g., coefficients for allometric regressions in different 
> taxa), will compute the pairwise (all possible pairs of taxa) matrix of 
> vector angles?
> 
> Thanks in advance for any suggestion.
> Cheers
> 
> Andrea
> 
> 
> -- 
> 
> Dr. Andrea Cardini
> Researcher, Dipartimento di Scienze Chimiche e Geologiche, Università di 
> Modena e Reggio Emilia, Via Campi, 103 - 41125 Modena - Italy
> tel. 0039 059 2058472
> 
> Adjunct Associate Professor, School of Anatomy, Physiology and Human Biology, 
> The University of Western Australia, 35 Stirling Highway, Crawley WA 6009, 
> Australia
> 
> E-mail address: alcard...@gmail.com, andrea.card...@unimore.it
> WEBPAGE: https://sites.google.com/site/alcardini/home/main
> 
> FREE Yellow BOOK on Geometric Morphometrics: 
> http://www.italian-journal-of-mammalogy.it/public/journals/3/issue_241_complete_100.pdf
> 
> ESTIMATE YOUR GLOBAL FOOTPRINT: 
> http://www.footprintnetwork.org/en/index.php/GFN/page/calculators/
> 




Re: [MORPHMET] Procrustes fit

2017-10-31 Thread Mike Collyer
Andrea,

I think it is worth it to do a pedantic review of your exercise for the benefit 
of the community.

First, the differences are not data dependent - they are method dependent.  
TPSRelw uses partial Procrustes; MorphoJ uses full Procrustes superimposition.  
PCA would have the exact same variance explained by dimensions (rounding 
notwithstanding) if the two programs used the same superimposition method.  

The results are similar because the methods are similar.  Maybe what you meant 
by “data dependent” is that in another case, the different methods might lead 
to more disparate results, for which I agree.  Again, for the benefit of 
others, I think this distinction is important.

Second, I think the special characters had very little to do with the results 
from the analysis but might indeed cause problems for one program compared to 
another.  This would have more to do with each program’s programming to 
identify and deal with such things.  

Cheers!
Mike


> On Oct 31, 2017, at 12:05 PM, andrea cardini  wrote:
> 
> Dear All,
> yes, there are differences and they're data dependent but in Andrey's case 
> (as it's my experience all the times I checked with my own data) they're very 
> small.
> I gave a very quick look and it's better to check more carefully. However, in 
> the screenshot one can see the % of variance explained computed in PAST 2.17, 
> MorphoJ and TPSRelw: they're almost identical and PC1 vs PC2 in the three 
> programs (not shown) look the same except for flipping one or the other axis.
> 
> The issue may have something to do with special characters in the TPS file: I 
> could run it in TPSRelw only after converting to NTS, which removed the 
> special characters in the image names.
> 
> Cheers
> 
> Andrea
> 
> 
> 
> On 31/10/17 16:35, Adams, Dean [EEOBS] wrote:
>> Andrey,
>> To repeat: there is no reason to expect the numbers to match identically 
>> across software packages, particularly column by column (if that is what you 
>> are examining). Even if two packages perform things identically in terms of 
>> the algebra (e.g,. GPA using TpsRelw and geomorph), the numbers may differ 
>> slightly for other reasons (post-rotation of the alignment to the principal 
>> axes of the consensus, etc.).
>> What is important for downstream statistical analyses is not the individual 
>> columns of numbers found from the GPA alignment, but rather the 
>> relationships of specimens in the resultant shape space. That is, how 
>> different are shapes from one another? In the case I mentioned above, if you 
>> took the aligned specimens from TpsRelw and obtained the Procrustes 
>> (Euclidean) distance matrix from them, and did the same with the aligned 
>> specimens from geomorph, and then performed a matrix correlation, the 
>> correlation would be precisely 1.0.  This means the information is identical 
>> in the two superimpositions, even if they differ slightly in how the entire 
>> set is oriented relative to the X-Y axis.  Incidentally, in the above case 
>> one would also find a perfect correlation between distances from the 
>> GPA-aligned specimens, those shapes rotated to their principal axes, or 
>> differences in shape found from the thin-plate spline and uniform shape 
>> components taken together. For an early discussion of these issues see Rohlf 
>> 1999.
>> However, performing the procedure above where one set of GPA-aligned 
>> coordinates is from MorphoJ will not produce a perfect correlation of 1.0, 
>> as MorphoJ uses Full Procrustes superimposition. That means the perceived 
>> relationships between shapes is not being represented in the same manner: 
>> which of course is a known difference between full and partial Procrustes 
>> fitting. How much of a difference one finds between a full and partial 
>> Procrustes alignment is dataset dependent.
>> Dean
>> Dr. Dean C. Adams
>> Professor
>> Department of Ecology, Evolution, and Organismal Biology
>>Department of Statistics
>> Iowa State University
>> www.public.iastate.edu/~dcadams
>> phone: 515-294-3834
>> *From:* Andrey Lissovsky [mailto:andlis...@gmail.com]
>> *Sent:* Tuesday, October 31, 2017 10:21 AM
>> *To:* MORPHMET
>> *Cc:* andlis...@gmail.com; volk...@yandex.ru
>> *Subject:* Re: [MORPHMET] Procrustes fit
>> Thank you Dean,
>> Of course, numbers should differ. But in my case, there is no correlation 
>> between two sets. I guess that in theory the two sets should have r at least 
>> around 0.9?
>> On Tuesday, October 31, 2017 at 5:31:51 PM UTC+3, dcadams wrote:
>>Andrey,
>>It is unreasonable to expect the numbers will match perfectly
>>between these two software packages, as the way 

[MORPHMET] geomorph updates

2017-08-29 Thread Mike Collyer
Dear Colleagues,

I wish to alert you to a couple of geomorph updates, which are available with 
an installation of geomorph version 3.0.5, from Github (not yet available on 
CRAN).  These updates include:

- A new argument in advanced.procD.lm to choose among different distributions 
(SS, Rsq, or F) for estimating empirical effect sizes (Z-scores) and P-values
- An additional line of summary in ANOVA-producing functions 
(advanced.procD.lm, procD.lm, procD.pgls, and procD.allometry) to indicate from 
which distributions effect sizes and P-values were estimated
- Some small bug fixes and typo fixes

I also wish to alert you that a bug was pointed out in the previous version of 
geomorph (3.0.4) for calculation of sum of squares between models in the 
advanced.procD.lm function.  We have identified the error but have not fully 
identified the source and extent (whether under certain conditions or all 
conditions).  This error might have also been present in version 3.0.3.  
Nonetheless, the updated coding in geomorph version 3.0.5 eliminated this 
error.  (We have tested the consistency in sums of squares, effect size, and 
P-value calculations across all functions that perform them, for various sum of 
squares types and effect sizes from various statistics.)  If you rely on ANOVA 
results from advanced.procD.lm from versions 3.0.3 or 3.0.4, you might want to 
rerun analyses with version 3.0.5 to confirm results.

Thank you to Avi Koplovich for bringing this issue to our attention!

As a reminder, you can download geomorph directly from Github with the 
following single line of code:

devtools::install_github("geomorphR/geomorph", ref = "Stable")

Warm regards from Mike, and the geomorph team!




[MORPHMET] Re: number of landmarks and sample size

2017-06-03 Thread Mike Collyer
Ilker,

Philipp already defined well why - I think - this rationale is incorrect, if 
not dangerous, especially along the lines of statistical power.  As he 
indicated, using Procrustes residuals as data means a covariance matrix will 
never be full rank, owing to the invariance in size, orientation, and position 
of landmark configurations following GPA.  At most, the dimensions of the data 
space can be kp - g, where k is the number of landmark dimensions (2 or 3), p 
is the number of landmarks, and g is the number of invariant dimensions due to 
GPA (with or without sliding landmarks) or n - 1 if (kp - g) > n - 1.  As he 
also pointed out, increasing landmarks can increase the spatial resolution, 
meaning that if n - 1 is the limited number of dimensions, the distances 
between specimens can increase in the n - 1 dimensional space that results from 
increasing p.  If by “statistical power” one means an increased probability to 
reject a null hypothesis that population centroids (mean configurations) are 
the same, then increasing resolution should enhance one’s ability to reject a 
null hypothesis.

I think tying the dimensionality of the space where the hypothesis is tested to 
the number of landmarks precludes appreciating Philipp’s comment about spatial 
resolution.  

I do not wish to necessarily advocate using a limited number of PCs as shape 
data, as a rule, but one can appreciate that given a choice between two 
configurations - one with seven fixed 2D landmarks (10 PCs after GPA) and one 
with the first 10 PCs obtained from configurations with hundreds of landmarks - 
the separation of groups in the latter case might be more prominent than in the 
former, hence increasing statistical power.
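The dimension bookkeeping in this thread can be written out as a small helper (a sketch; shape.dims is a hypothetical name, and the defaults assume fixed landmarks only: g = 4 invariant dimensions for 2D GPA, i.e. 2 translation + 1 rotation + 1 scaling, and g = 7 for 3D):

```r
# Maximum dimensions of the shape data space: kp - g, capped at n - 1.
shape.dims <- function(p, k, n, g = if (k == 2) 4 else 7) {
  min(k * p - g, n - 1)
}
shape.dims(p = 7, k = 2, n = 100)  # 10: the "10 PCs after GPA" mentioned above
shape.dims(p = 7, k = 2, n = 8)    # 7: capped by n - 1 instead
```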

Whether hundreds of landmarks are needed, or 50, or 20, or 10, or even only 7, 
or whether increasing statistical power is important, is a question that must 
be answered case by case with empirical results.  However, placing an a priori 
limit on the number of landmarks one can define because of the size of samples 
one can collect is certain way to limit statistical power, especially when 
small samples are all that’s available.

Cheers!
Mike



> On Jun 3, 2017, at 5:31 AM, Ilker ERCAN  wrote:
> 
> When we perform multivariate analysis, it must be n > p; otherwise the 
> determinant of the generalized variance equals zero. Therefore it must be 
> 2*l < n (2D) or 3*l < n (3D), where l is the number of landmarks.
> Best wishes
> Ilker ERCAN
> 
> 
> From: Norman MacLeod
> Sent: Saturday, 3 June 2017, 11:18
> To: MORPHMET
> Subject: Re: [MORPHMET] Re: number of landmarks and sample size
> 
> In discussions like these it would be helpful if the writer could clarify 
> whether they are referring to the concepts of biological homology, 
> topological homology or "semantic homology". These aren't the same things and 
> the whole issue of “homology” in geometric morphometrics has always seemed, 
> at least to me, to be very confused. For example, refer to the definitions of 
> “homology” and “landmark” in the Glossary on the SB Morphometrics web site. 
> Because it means different things to different specialists homology isn't a 
> term to be thrown around as lightly as morphometricians seem prone to do. 
> Imprecise and/or ambiguous usage renders the meaning of sentences difficult 
> or impossible to understand for me and I suspect confuses others as well.
> 
> Norm MacLeod
> 
> 
> > On 3 Jun 2017, at 08:53, alcardini  wrote:
> > 
> > Hi Philipp,
> > I am not worried about the number of variables (although I am not sure
> > one needs thousands of highly correlated points on a relatively simple
> > structure and seem to remember that Gunz and you suggest to start with
> > many and then reduce as appropriate).
> > 
> > Regardless of whether point homology makes sense, I am worried that
> > many users believe that semilandmarks (maybe after sliding according
> > to purely mathematical principles) are the same as "traditional
> > landmarks" with a clear one-to-one correspondence. Even saying that
> > what's "homologous" is the curve or surface is tricky, because at the
> > end of the day that curve/surface is discretized using points, shape
> > distances are based on those points and there are many ways of placing
> > points with no clear "homology" (figure 7 of Oxnard & O'Higgins,
> > 2009); indeed, in a ontogenetic study of the cranial vault, for
> > instance, where sutures may become invisible in adults and therefore
> > cannot be used as a "boundary", semilandmarks close to the sutures may
> > end up on different bones in different stages/individuals.
> > 
> > Semilandmarks are a fantastic tool, which I am happy to use when
> > needed, but they have their own limitations, which one should be aware
> > of.
> > Cheers
> > 
> > Andrea
> > 
> > 
> > 
> > On 03/06/2017, mitte...@univie.ac.at  wrote:
> >> I think a few topics get mixed up here.
> >> 
> >> Of course, 

Re: [MORPHMET] number of landmarks and sample size

2017-05-31 Thread Mike Collyer
Dear Lea,

I see others have responded to your inquiry, already.  I thought I would add an 
additional perspective.

Your question about statistical significance requires asking a follow-up 
question.  What statistical methods would you intend to use to evaluate 
“significance”?  If you are worried about the number of landmarks, your concern 
suggests you might be using parametric test statistics frequently associated 
with MANOVA, like Wilks lambda or Pilai trace.  Indeed, when using these 
statistics and converting them to approximate F values, one must have many more 
specimens than landmarks (more error degrees of freedom than shape variables, 
to be more precise), if “significance” is to be inferred from probabilities 
associated with F-distributions.  Therefore, limiting the number of landmarks 
might be a goal.

When using resampling procedures to conduct ANOVA, using fewer landmarks can 
paradoxically decrease effect sizes, as an overly simplified definition of 
shape becomes implied.  We demonstrated this in our paper: Collyer, M.L., D.J. 
Sekora, and D.C. Adams. 2015. A method for analysis of phenotypic change for 
phenotypes described by high-dimensional data. Heredity. 115: 357-365.  This is 
consistent with Andrea’s comment about quality over quantity with the caveat 
that limited quantity precludes quality.  In other words, too few landmarks 
translates to limited ability to discern shape differences, because the shape 
compared is basic.  In the paper, we used two separate landmark configurations: 
one with few landmarks and the other with the same landmarks plus sliding 
semilandmarks between fixed points, on different populations of fish.  We found 
that adding the semilandmarks increased the effect size for population 
differences and sexual dimorphism.  But if we constrained our analyses to 
parametric MANOVA for our small samples, we would have to use the simpler 
landmark configurations and live with the results.

I do not wish to suggest that adding more landmarks is better.  Overkill is 
certainly a concern.  I would suggest though that statistical power would be 
for me less of a concern than a proper characterization of the shape I wish to 
compare among samples.  If I suspect curvature is important but am afraid to 
use (semi)landmarks that would allow me to assess the curvature differences 
among groups, opting instead to use just the endpoints of a structure because I 
am worried about statistical power, then I just allowed a statistical procedure 
to take me away from the biologically relevant question I sought to address.  
Andrea is correct that quality is better than quantity, but quantity can be a 
burden in either direction (too few or too many).  Additionally, statistical 
power will vary among statistical methods.  Reconsidering methods might be as 
important as reconsidering landmarks configurations.

Regards!
Mike



> On May 4, 2017, at 5:19 AM, Lea Wolter  wrote:
> 
> Hello everyone,
> 
> I am new in the field of geometric morphometrics and have a question for my 
> bachelor thesis.
> 
> I am not sure how many landmarks I should use at most in regard to the sample 
> size. I have a sample of about 22 individuals per population or maybe a bit 
> less (using sternum and epigyne of spiders) with 5 populations. 
> I have read a paper in which they use 18 landmarks with an even lower sample 
> size (3 populations with 20 individuals, 1 with 10). But I have also heard 
> that I should use twice as many individuals per population as landmarks... 
> 
> Maybe there is some mathematical formula for it to know if it would be 
> statistically significant? Could you recommend some paper?
> 
> Because of the symmetry of the epigyne I am now thinking of using just one 
> half of it for setting landmarks (so I get 5 instead of 9 landmarks). For the 
> sternum I thought about 7 or 9 landmarks, so at most I would also get 18 
> landmarks like in the paper. 
> 
> I would also like to use two type specimens in the analysis, but I have just 
> this one individual per population... would it be total nonsense from a 
> statistical point of view?
> 
> Thanks very much for your help!
> 
> Best regards
> Lea
> 




Re: [MORPHMET] geomorph: trajectory analysis error

2017-04-11 Thread Mike Collyer
Lawrence,

I recommend first making a geomorph data frame and using it in your analyses.  
If the error persists, let us know.  These types of errors are often caused by 
R trying to figure out how to use objects in the global environment, rather 
that getting the data specifically from a list.

Hope that helps!
Mike


> On Apr 11, 2017, at 4:11 PM, Lawrence Fatica  
> wrote:
> 
> Hi all,
> I am trying to use the trajectory.analysis function in geomorph and have come 
> across an error I'm not sure how to fix. The command I am using is as follows:
> 
> trajectory.analysis(proc.coords ~ classifier$age + classifier$species + 
> classifier$age:classifier$species)
> 
> in which "proc.coords" is the output from gpagen, and "classifier" is a data 
> frame with factors denoting age classes and species.
> 
> The error I get is as follows:
> 
> Error in procD.fit(f1, data = data, pca = FALSE) : 
>   Different numbers of specimens in dependent and independent variables
> 
> I've double checked that I have Procrustes coordinates for 90 specimens, age 
> categories for 90 specimens, and species designations for 90 specimens. I've 
> gotten this to work in an earlier version of geomorph. Am I overlooking 
> something here?
> 
> Many thanks,
> Lawrence
> 



Re: brief comment on non-significance Re: [MORPHMET] procD.allometry with group inclusion

2016-12-12 Thread Mike Collyer
[…] The researcher knows 
better than anyone else whether this is sampling error or a biological 
phenomenon.  How to proceed should not rest solely on an outcome from a 
statistical test.  For example, if the specimens are adult organisms and 
represent large individuals within populations, one might want to discuss shape 
differences without adjusting for allometry, as well as discuss size 
differences.  A discussion of allometries in this case might obscure what is 
really most important: that the two populations may have evolved their size and 
shape differences for some ecologically meaningful reason.  

So I agree with you, and more.  “No significance” or “significance” is only 
part of the evaluation.  Effect sizes and assessment of sampling errors, 
biases, or limitations should also be considered.  And no matter what, 
published articles need careful communication that reveals the researcher’s 
logic.

Just my opinion,
Mike 

> On Dec 12, 2016, at 2:40 AM, andrea cardini <alcard...@gmail.com> wrote:
> 
> Dear All,
> 
> if I can, I'd add a brief comment on the interpretation of non-significant 
> results. I'd appreciate this to be checked by those with a proper 
> understanding and background on stats (which I haven't!).
> 
> I use Mike's sentence on non-significant slopes as an example but the issue 
> is a general one, although I find it particularly tricky in the context of 
> comparing trajectories (allometries or other) across groups. Mike wisely said 
> "approximately" ("If not significant, then the slope vectors are APPROXIMATELY 
> parallel"). With permutations, one might be able to perform tests even when 
> sample sizes are small (and maybe, which is even more problematic, 
> heterogeneous across groups): then, non-significance could simply mean that 
> the samples are not large enough to reject the null hypothesis with confidence 
> (i.e., statistical power is low). Especially with short trajectories 
> (allometries or other), one might find non-significant slopes with very large 
> angles between the vectors, a case where it is probably hard to conclude that 
> the allometries really are parallel. 
> Small sample size is a curse of many studies in taxonomy and evolution. 
> We've done a couple of exploratory (not-very-rigorous!) empirical analyses of 
> the effect of reducing sample sizes on means, variances, vector angles, etc., 
> in geometric morphometrics (Cardini & Elton, 2007, Zoomorphol.; Cardini et 
> al., 2015, Zoomorphol.) and some, probably most, of these literally blow up 
> when N goes down. That happened even when differences were relatively large 
> (species separated by several million years of independent evolution, or 
> samples including domestic breeds hugely different from their wild 
> counterparts).
> 
> Unless one has done power analyses and/or has very large samples, I'd be 
> careful with the interpretations. There's plenty on this in the difficult 
> (for me) statistical literature. Surely one can do sophisticated power 
> analyses in R and, although probably and unfortunately not used by many, one 
> of the programs of the TPS series (TPSPower) was written by Jim exactly for 
> this aim (possibly not for power analyses in the case of MANCOVAs/vector 
> angles but certainly in the simpler case of comparisons of means).
> 
> Cheers
> 
> 
> Andrea
> 
> On 11/12/16 19:17, Mike Collyer wrote:
>> Dear Tsung,
>> 
>> The geomorph function, advanced.procD.lm, allows one to extract group slopes 
>> and model coefficients.  In fact, procD.allometry is a specialized function 
>> that uses advanced.procD.lm to perform the HOS test and then uses procD.lm 
>> to produce an ANOVA table, depending on the results of the HOS test.  It 
>> also uses the coefficients and fitted values from procD.lm to generate the 
>> various types of regression scores.  In essence, procD.allometry is a 
>> function that carries out several analyses with geomorph base functions, 
>> procD.lm and advanced.procD.lm, in a specified way.  By comparison, the 
>> output is more limited, but one can use the base functions to get much more 
>> output.
>> 
>> In advanced.procD.lm, if one specifies groups and a slope, one of the 
>> outputs is a matrix of slope vectors.  Also, one can perform pairwise tests 
>> to compare either the correlation or angle between slope vectors.
>> 
>> Regarding the operation of the HOS test, it is a permutational test that 
>> does the following: calculate the sum of squared residuals for a “full” 
>> model, shape ~ size + group + size:group and the same for a “reduced” model, 
>> shape ~ size + group.  (The sum of squared residuals is the trace of the 

Re: [MORPHMET] procD.allometry with group inclusion

2016-12-11 Thread Mike Collyer
Dear Tsung,

The geomorph function, advanced.procD.lm, allows one to extract group slopes 
and model coefficients.  In fact, procD.allometry is a specialized function 
that uses advanced.procD.lm to perform the HOS test and then uses procD.lm to 
produce an ANOVA table, depending on the results of the HOS test.  It also uses 
the coefficients and fitted values from procD.lm to generate the various types 
of regression scores.  In essence, procD.allometry is a function that carries 
out several analyses with geomorph base functions, procD.lm and 
advanced.procD.lm, in a specified way.  By comparison, the output is more 
limited, but one can use the base functions to get much more output.

In advanced.procD.lm, if one specifies groups and a slope, one of the outputs 
is a matrix of slope vectors.  Also, one can perform pairwise tests to compare 
either the correlation or angle between slope vectors.

Regarding the operation of the HOS test, it is a permutational test that does 
the following: calculate the sum of squared residuals for a “full” model, shape 
~ size + group + size:group and the same for a “reduced” model, shape ~ size + 
group.  (The sum of squared residuals is the trace of the error SSCP matrix, 
which is the same as the sum of the summed squared residuals over all shape 
variables.)  The difference between these two values is the sum of squares for 
the size:group effect.  If this is significantly large (i.e., found with low 
probability among many random permutations), one can conclude that the 
coefficients for this effect are collectively large enough to justify retaining 
the effect, as the slope vectors are (at least in part) not parallel.  If not 
significant, then the slope vectors are approximately parallel, and the effect 
can be removed from the model.  A randomized residual permutation procedure is 
used, which randomizes the residual vectors of the reduced model in each random 
permutation to obtain random pseudo-values, repeating the sum of squares 
calculation each time.
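
[Editorial note: the randomized residual permutation procedure described above
can be illustrated with a toy, univariate sketch in Python (illustrative only;
the real test in geomorph is multivariate and written in R, and all names and
data below are hypothetical).  Here the reduced model is intercept-only, the
full model adds a group effect, and shuffling the reduced-model residuals
generates the null distribution of the effect sum of squares:]

```python
import random

def rss_intercept(y):
    """Residual sum of squares for the intercept-only (reduced) model."""
    m = sum(y) / len(y)
    return sum((v - m) ** 2 for v in y)

def rss_groups(y, g):
    """Residual sum of squares for the model with group means (full model)."""
    rss = 0.0
    for lev in set(g):
        vals = [v for v, gi in zip(y, g) if gi == lev]
        m = sum(vals) / len(vals)
        rss += sum((v - m) ** 2 for v in vals)
    return rss

def rrpp_pvalue(y, g, iters=999, seed=1):
    """P-value for the group effect via randomized residual permutation."""
    rng = random.Random(seed)
    # Observed effect SS: the drop in residual SS when 'group' enters the model.
    ss_obs = rss_intercept(y) - rss_groups(y, g)
    grand = sum(y) / len(y)
    resid = [v - grand for v in y]   # reduced-model residuals
    count = 1                        # the observed case counts as one permutation
    for _ in range(iters):
        rng.shuffle(resid)                    # randomize the residuals
        y_star = [grand + r for r in resid]   # random pseudo-values
        ss_star = rss_intercept(y_star) - rss_groups(y_star, g)
        if ss_star >= ss_obs:
            count += 1
    return count / (iters + 1)

# Two clearly separated groups: the observed effect SS should be extreme
# relative to the permutation distribution, giving a small p-value.
y = [1.0, 1.2, 0.8, 1.1, 0.9, 1.05, 5.0, 5.2, 4.8, 5.1, 4.9, 5.05]
g = ["a"] * 6 + ["b"] * 6
print(rrpp_pvalue(y, g))
```

In the multivariate case the same logic applies, with the trace of the error
SSCP matrix replacing the univariate residual sum of squares.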

Regarding your final question, yes, you are correct.  In a case like this, one 
might conclude that logCS is not a significant source of shape variation, and 
proceed with other analyses that do not include it as a covariate.  In either 
case - whether it is retained as a covariate or excluded - advanced.procD.lm 
will allow one to perform pairwise comparison tests among groups.

Cheers!
Mike

> On Dec 11, 2016, at 10:56 AM, Tsung Fei Khang <tfkh...@um.edu.my> wrote:
> 
> Dear Mike,
> 
> Many thanks for the reply!
> 
> When the procD.allometry function performs HOS test with multiple group 
> labels given, does it compute the regression vectors for each group, and then 
> tests whether the coefficients of these vectors were equal, using some 
> multivariate statistical test? If so, is there an option that outputs the 
> regression vectors? Given the high frequency of the latter being discussed in 
> the primary GM literature, it seems important to be able to extract this 
> result from the function.
> 
> Finally, on the interpretation side - If group variation is significant, but 
> not logCS, then under the model shape~size+group, does this imply that shape 
> variation is mainly explained by variation in species, and allometry is 
> absent?
> 
> Regards,
> 
> T.F.
> 
> On Thursday, December 8, 2016 at 6:08:17 PM UTC+8, Mike Collyer wrote:
> Dear Tsung,
> 
> The procD.allometry function performs two basic processes when groups are 
> provided.  First, it does a homogeneity of slopes (HOS) test.  This test 
> ascertains whether two or more groups have parallel or unique slopes (the 
> latter meaning at least one group’s slope is different from the others).  
> The HOS test constructs two linear models: shape ~ size + group and shape ~ 
> size + group + size:group, and performs an analysis of variance to determine 
> if the size:group interaction significantly reduces the residual error 
> produced.  (Note: log(size) is a possible and default choice in this 
> analysis.)
> 
> After this test, procD.allometry then provides an analysis of variance on 
> each term in the resulting model from the HOS test.
> 
> Regarding your question, if the HOS test reveals there is significant 
> heterogeneity in slopes, the coefficients returned allow one to find the 
> unique linear equations, by group, which would be found from separate runs on 
> procD.allometry, one group at a time.  If the HOS test reveals that there is 
> not significant heterogeneity in slopes, the coefficients constrain the 
> slopes for different groups to be the same (parallel).  
> 
> Finally, and I think more to your point, the projected regression scores are 
> found by using, for a (in the Xa calculation you note), the coefficients that 
> represent a common or individual slope from t

Re: [MORPHMET] procD.allometry with group inclusion

2016-12-08 Thread Mike Collyer
Dear Tsung,

The procD.allometry function performs two basic processes when groups are 
provided.  First, it does a homogeneity of slopes (HOS) test.  This test 
ascertains whether two or more groups have parallel or unique slopes (the 
latter meaning at least one group’s slope is different from the others).  The 
HOS test constructs two linear models: shape ~ size + group and shape ~ size + 
group + size:group, and performs an analysis of variance to determine if the 
size:group interaction significantly reduces the residual error produced.  
(Note: log(size) is a possible and default choice in this analysis.)

After this test, procD.allometry then provides an analysis of variance on each 
term in the resulting model from the HOS test.

Regarding your question, if the HOS test reveals there is significant 
heterogeneity in slopes, the coefficients returned allow one to find the unique 
linear equations, by group, which would be found from separate runs on 
procD.allometry, one group at a time.  If the HOS test reveals that there is 
not significant heterogeneity in slopes, the coefficients constrain the slopes 
for different groups to be the same (parallel).  

Finally, and I think more to your point, the projected regression scores are 
found by using, for a (in the Xa calculation you note), the coefficients that 
represent a common or individual slope from the linear model produced.  The 
matrix of coefficients, B, is arranged as first row = intercept, second row = 
common slope, next rows (if applicable) are coefficients for the group factor 
(essentially change the intercept, by group), and finally, the last rows are 
the coefficients for the size:group interaction (if applicable), which change 
the common slope to match each group’s unique slope.  Irrespective of the 
complexity of this B matrix, a is found as the second row.  If you run 
procD.allometry group by group, it is the same as (1) asserting that group 
slopes are unique and (2) changing a to match not the common slope, but the 
summation of the common slope and the group-specific slope adjustment.  One 
could do that, but would lose the ability to compare the groups in the same 
plot, as each group would be projected on a different axis.  
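
[Editorial note: a concrete sketch of the projection, in Python purely for
illustration (the real calculation is done internally in R by
procD.allometry; the data and the choice of a below are hypothetical).  Take
a as the appropriate row of the coefficient matrix B, normalize it to unit
length, and compute s = Xa for every specimen, so that all groups are
projected on the same axis:]

```python
import math

def regression_scores(X, a):
    """Project each row (specimen) of X onto the unit-length slope vector a."""
    norm = math.sqrt(sum(v * v for v in a))
    a_unit = [v / norm for v in a]
    return [sum(x * w for x, w in zip(row, a_unit)) for row in X]

# Toy data: 3 "specimens" with 2 shape variables; a stands in for the
# common-slope (second) row of a hypothetical coefficient matrix B.
X = [[1.0, 0.0],
     [0.0, 1.0],
     [3.0, 4.0]]
a = [3.0, 4.0]
print(regression_scores(X, a))
```

Because every specimen is projected on the same unit vector, scores from
different groups remain directly comparable in one plot.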

Hope that helps.

Mike


> On Dec 8, 2016, at 3:37 AM, Tsung Fei Khang  wrote:
> 
> Hi all,
> 
> I would like to use procD.allometry to study allometry in two species. 
> 
> I understand that the function returns the regression score for each specimen 
> as Reg.proj, and that the calculation is obtained as:
> s = Xa, where X is the nxp matrix of Procrustes shape variables, and a is the 
> px1 vector of regression coefficients normalized to 1. I am able to verify 
> this computation from first principles when all samples are presumed to come 
> from the same species. 
> 
> However, what happens when we are interested in more than 1 species (say 2)? 
> I could run procD.allometry by including the species labels via f2=~gps, 
> where gps gives the species labels. Is there just 1 regression vector (which 
> feels weird, since this should be species-specific), or 2? If so, how can I 
> recover both vectors? What is the difference of including f2=~gps using all 
> data, compared to if we make two separate runs of procD.allometry, one for 
> samples from species 1, and another for samples from species 2?
> 
> Thanks for any help.
> 
> Rgds,
> 
> TF
> 
> 
> 
> 
> 
> 
> DISCLAIMER: This e-mail and any files transmitted with it ("Message") is 
> intended only for the use of the recipient(s) named above and may contain 
> confidential information. You are hereby notified that the taking of any 
> action in reliance upon, or any review, retransmission, dissemination, 
> distribution, printing or copying of this Message or any part thereof by 
> anyone other than the intended recipient(s) is strictly prohibited. If you 
> have received this Message in error, you should delete this Message 
> immediately and advise the sender by return e-mail. Opinions, conclusions and 
> other information 

[MORPHMET] Re: [geomorph-r-package] Re: geomorph updates

2016-10-28 Thread Mike Collyer
Milena,

You must provide a branch reference:

devtools::install_github("geomorphR/geomorph", ref = "Stable")

should work, but

devtools::install_github("geomorphR/geomorph")

will not work.

Mike


> On Oct 28, 2016, at 9:57 AM, Milena Stefanovic  wrote:
> 
> 
> It didn't work for me:
> 
> Downloading GitHub repo geomorphR/geomorph@master from URL 
> https://api.github.com/repos/geomorphR/geomorph/zipball/master
> Error in stop(github_error(request)) : 404: Not Found
> 
> I am using R Studio Version 0.99.903 for Windows.
> 
> Best regards,
> Milena
> 



[MORPHMET] geomorph updates

2016-10-25 Thread Mike Collyer
Colleagues,

I wish to alert you to a small but important update to geomorph.  A coding 
error was found that limited the number of random permutations to 1-100 
permutations, irrespective of choice, for one particular analysis type in 
advanced.procD.lm.  This also had implications for the homogeneity of slopes 
test in procD.allometry.  This error has been fixed and an updated version of 
geomorph is available on GitHub.  We recommend, as always, that you install 
geomorph from GitHub, to ensure you have the most recent updates.

devtools::install_github("geomorphR/geomorph", ref = "Stable")

Warm regards!

Mike, and the geomorph team
