The R 'save' format (as used for the saved workspace .RData) is
described in the 'R Internals' manual (section 1.8). It is intended
for R objects, and you would first have to create one[*] of those in
your other application. That seems a lot of work.
The normal way to transfer numeric data
Hi,
Is there any R-generic, OS-agnostic way to figure out what end-of-line
character is being used in a file to be processed by readLines?
Thanks, Joh
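Not a built-in, but one workable approach (a sketch, not a tested utility; `detect_eol` and its byte-sniffing heuristic are my own invention) is to peek at the file's raw bytes and report which terminator shows up first:

```r
# Sketch: read the first chunk of the file as raw bytes and report which
# end-of-line convention ("\r\n" Windows, "\r" old Mac, "\n" Unix) appears.
detect_eol <- function(path, n = 10000) {
  txt <- rawToChar(readBin(path, what = "raw", n = n))
  if (grepl("\r\n", txt, fixed = TRUE)) "\r\n"
  else if (grepl("\r", txt, fixed = TRUE)) "\r"
  else "\n"
}
```

readLines() itself copes with any of these, but the sniffed value is handy if you need to write the file back out with its original line endings.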
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
On Sun, 2009-11-08 at 11:12 -0800, mat7770 wrote:
I have two related variables, each with 16 points (x and Y). I am given
variance and the y-intercept. I know how to create a regression line and
find the residuals, but here is my problem. I have to make a loop that uses
the seq() function, so
At the end, although the xspline function suggested by Greg did what I
needed, it suffers from the fact that you cannot specify the number of
points of the output curve.
I then wrote my own function for generic nth-degree Bézier curve interpolation.
You can find the code here if ever you're
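For reference, a minimal sketch of the idea (my own code, not the function linked above): an nth-degree Bézier curve evaluated from its Bernstein form, with the number of output points as a parameter, which is exactly the knob xspline() lacks.

```r
# Evaluate an nth-degree Bezier curve defined by control points (px, py)
# at n.out evenly spaced parameter values in [0, 1].
bezier_points <- function(px, py, n.out = 100) {
  n <- length(px) - 1                       # curve degree
  t <- seq(0, 1, length.out = n.out)
  # Bernstein basis matrix: n.out rows, n + 1 columns
  B <- sapply(0:n, function(k) choose(n, k) * t^k * (1 - t)^(n - k))
  list(x = as.vector(B %*% px), y = as.vector(B %*% py))
}
```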
Le dimanche 08 novembre 2009 à 19:05 -0600, Frank E Harrell Jr a écrit :
Emmanuel Charpentier wrote:
Le dimanche 08 novembre 2009 à 17:07 -0200, Iuri Gavronski a écrit :
Hi,
I would like to fit Logit models for ordered data, such as those
suggested by Greene (2003), p. 736.
Does
Laurin Müller wrote:
I installed:
libreadline-dev
libcnf-dev
but configure with no readline brings the same error.
regards,
laurin
Paul Hiemstra p.hiems...@geo.uu.nl 05.11.2009 12:38
sudo apt-get install libreadline-dev
Try running with readline turned on (the default). In regard to
Hello,
I am trying to help someone who has carried out an experiment, and I'm
finding it quite difficult to work out the appropriate model to use and how to
code it.
The response is a measurement - the amount of DNA extracted during the
experiment. There were 2 factors to be tested - one is the
You can try read.csv.sql in the sqldf package. It reads the file into an
SQLite database, which it creates for you using RSQLite/sqlite, so
effectively the work is done outside of R. It then extracts the portion you
specify using an SQL statement and destroys the database. Omit the
sql statement if you
Sounds like a homework question ...
if y = a + bx + e, where e ~ N(0, sigma^2),
the log likelihood of the slope and intercept parameters, a and b,
and variance sigma^2, given n data points y and covariates x, is
f(a, b, sigma; y, x) = -0.5*n*log(2*pi) - n*log(sigma) - 0.5/sigma^2 * sum((y - a - b*x)^2)
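That formula transcribes directly into R (a sketch; checking it against dnorm() is a good exercise for the homework in question):

```r
# Gaussian log likelihood for y = a + b*x + e, e ~ N(0, sigma^2)
loglik <- function(a, b, sigma, y, x) {
  n <- length(y)
  -0.5 * n * log(2 * pi) - n * log(sigma) -
    0.5 / sigma^2 * sum((y - a - b * x)^2)
}
```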
Hello
Thanks to Frank Harrell for the great Design package. I noted from Pencina's
Article (Statistics in Medicine Vol 27, pp157-172) that the result obtained for
the Net Reclassification Index depends upon categorical cut-offs for risk (eg
in the paper 6%, 6-20% and 20%). I wondered how the
Hi all,
I suspect the answer to this query will be the tongue-in-cheek "use a
quantum computer", but I thought my understanding might be
sufficiently limited that I'm missing a simpler option.
I'm looking for a way to cycle through all possible combinations of 2
groups of data. For 10 data points,
Dear R users,
An update to version 1.4-1 of the TraMineR package is available on
CRAN. The package is intended for mining, describing and visualizing
sequences of states or events and more generally discrete sequential data.
This update fixes minor bugs and contains several code
Dear R community,
ff version 2.1.1 is available on CRAN. It now supports large data.frames,
csv import/export, packed atomic datatypes, and bit filtering from package
'bit', on which it now depends.
Some performance results in seconds from test data with 78 million rows and 7
columns on a 3 GB
hi,
I'm trying to use the pvclust function (pvclust package) to perform a
cluster analysis and find p-values for groups; however, I need to
use squared Euclidean distance instead of Euclidean
distance for the distance matrix, but it seems that the package only allows
the latter... any ideas how
Here is an example of how I do it using the 'col.regions' parameter:
# create color palette
col.l <- colorRampPalette(c('blue', 'green', 'purple', 'yellow', 'red'))(30)
levelplot(aisle ~ store * pog, storePOG, col.regions=col.l,
cuts=diff(range(as.numeric(as.character(storePOG$aisle)),
Jennifer Mollon wrote:
The response is a measurement - the amount of DNA extracted during the
experiment. There were 2 factors to be tested - one is the condition
under which the experiment took place and the other is the type of DNA
to be extracted. Each set of factors was
Emmanuel Charpentier wrote:
Le dimanche 08 novembre 2009 à 19:05 -0600, Frank E Harrell Jr a écrit :
Emmanuel Charpentier wrote:
Le dimanche 08 novembre 2009 à 17:07 -0200, Iuri Gavronski a écrit :
Hi,
I would like to fit Logit models for ordered data, such as those
suggested by Greene
Hi Johann,
Excellent, that is what I really want. A small problem is why c.n
does not exist. Shouldn't c.n be in memory? Sometimes I also want to
see c.n directly in R, besides exporting it. Could I see c.n with some
function in the loops?
a <- c(1:10)
b <- c(rep(1,3), rep(2,3), rep(3,4))
Guido wrote
However, using a transformation matrix one can transform a model assuming
unequal variances into an equivalent model assuming equal variances. On
such a transformed model the F test or T test can be applied.
This is indeed news to me. I thought such transformations for unequal
McAllister, David wrote:
Hello
Thanks to Frank Harrell for the great Design package. I noted from Pencina's Article
(Statistics in Medicine Vol 27, pp157-172) that the result obtained for the Net
Reclassification Index depends upon categorical cut-offs for risk (eg in the paper
6%, 6-20% and
Hello all,
I am trying to fit a truncated mixture model and I wrote a driver for
flexmix following the example in the vignette, but it doesn't work for
me: it assigns all data points to one component only, e.g.:
source('bugged.R')
Call:
flexmix(formula = x ~ 1, k = 2, model =
You only have the object 'c.n' in the loop. c.1, c.2 c.3 are not
created. You have some file names by a similar name. If you want to
keep the values from the loop, use lapply:
result <- lapply(seq(num), function(.num){
    c.n <- c[c$b == .num, , drop=FALSE]  # use 'drop' in case there is only
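A self-contained version of that advice, with made-up data standing in for the thread's a and b (and avoiding c as a data-frame name, since it masks the c() function):

```r
dat <- data.frame(a = 1:10, b = c(rep(1, 3), rep(2, 3), rep(3, 4)))
# keep each subset in a named list instead of creating c.1, c.2, ... objects
result <- lapply(unique(dat$b), function(.num) {
  dat[dat$b == .num, , drop = FALSE]
})
names(result) <- paste0("c.", unique(dat$b))
result[["c.2"]]   # any subset can be inspected directly, inside or outside the loop
```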
On Nov 9, 2009, at 8:45 AM, rusers.sh wrote:
Hi Johann,
Excellent. That is what i really want. A little problem is why the c.n
does not exist. Should the c.n in the memory? Sometimes, i also hope to
see c.n directly in R besides exporting. Could i see the c.n with some
function in the
Hi everyone,
I created a four-dimensional array (dim(128,128,1,8)). The third
dimension is necessary for another function elsewhere. Now I'd like to
perform a t-test on every vector of length 8 in my array along the fourth
dimension.
I'd like to obtain a new array of three dimensions with
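One way to do this, sketched on synthetic data: apply() over the first three margins, running t.test() on each length-8 vector and keeping the p-value (swap in any other component of the htest object as needed):

```r
set.seed(1)
arr <- array(rnorm(128 * 128 * 1 * 8), dim = c(128, 128, 1, 8))
# one-sample t-test along the 4th dimension; result has dims 128 x 128 x 1
pvals <- apply(arr, c(1, 2, 3), function(v) t.test(v)$p.value)
dim(pvals)
```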
In your model driver truncatedmodel() the fit function looks like:
z@fit <- function(x, y, w) {
    para <- list(mean = mean(x), sd = sd(x), lower = lower, upper = upper)
    para$df <- 4
    with(para, eval(z@defineComponent))
}
w are the a-posteriori probabilities and denote the weights with which
Hi !
I'd like to create
a vector
that has this kind of numeration
001
002
003
.
.
.
099
I have looked at format help page but couldn't get
any hint on how to do it.
Thanks
Anna
Anna Freni Sterrantino
Ph.D Student
Department of Statistics
University of Bologna, Italy
via Belle Arti
On Nov 9, 2009, at 9:34 AM, anna freni sterrantino wrote:
Hi !
I'd like to create
a vector
that has this kind of numeration
001
002
003
.
.
.
099
I have looked at format help page but couldn't get
any hint on how to do it.
Thanks
Anna
See ?sprintf
sprintf("%03d", 1:99)
[1] "001" "002" "003" ...
On Mon, 9 Nov 2009, J. wrote:
Hi everyone,
I created a four dimensional vector (dim (128,128,1,8)). This third
dimension is necessary for another function somewhere. Now I'd like to
perform a t-test on every vector of length 8 in my array on the fourth
dimension.
Vectorize the whole
Thanks Jim - it's not elegant, but it works. Instead of using space
as a delimiter, I used \u001E - it's the unicode record delimiter
character, and I figure there's less chance of a clash with a
character in the match.
Hadley
On Sun, Nov 8, 2009 at 1:40 PM, jim holtman jholt...@gmail.com
Hi all,
I am new to R. I made this script that plots the
deviation of a skellam-distributed random variable
with respect to the skellam distribution.
I would expect to get random errors, but the plot
systematically shows a non-random pattern (first a
peak and then a low). I don't know how to
David Stoffer describes some challenges with R's output when fitting
ARIMA models for different orders (see Issue 2 at
http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm). R doesn't fit an
intercept in the model if there is any differencing. David describes a
workaround using the xreg parameter to
The Bhattacharyya distance is different from the Mahalanobis distance.
See:
http://en.wikipedia.org/wiki/Bhattacharyya_distance
There is also the Hellinger Distance and the Rao distance. For the Rao
distance, see:
http://www.scholarpedia.org/article/Fisher-Rao_metric
Jude
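For two discrete distributions, the definitions in that article reduce to one-liners (a sketch with made-up probabilities):

```r
p <- c(0.2, 0.5, 0.3)
q <- c(0.3, 0.4, 0.3)
BC <- sum(sqrt(p * q))   # Bhattacharyya coefficient, always <= 1
DB <- -log(BC)           # Bhattacharyya distance, >= 0
DH <- sqrt(1 - BC)       # Hellinger distance, also mentioned above
```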
I'm using R 2.10.0, with zoo 1.5-8. The release notes for zoo 1.5-8
claim a bug with unique for yearmon objects has been fixed, but I'm
still having problems.
Browse[1]> tmp2
[1] "Dec 1996" "Dec 1996"
Browse[1]> unique(tmp2)
[1] "Dec 1996" "Dec 1996"
Browse[1]> unique(unique(tmp2))
[1] "Dec 1996"
Browse[1]>
Dear ALL,
I'm trying to figure out what the percentage effects are in a logistic
regression. To be more clear, I'm not interested in the effect on y of a
1-unit increase in x, but on the percentage effect on y of a 1% increase in
x (in economics this is also often called an elasticity).
For
I'm using the describe function from the Hmisc package with survey data.
When I run this command: describe(NCS,weights=NCS$w1)
I get this error message:
Error in describe(NCS, weights = NCS$w1) :
unused argument(s) (weights = c(2.49460916442049,...
Do you know why it is not using the weights argument?
Hello,
I'm trying to run a loop that will subset my data into specific sets by
regions and by race/ethnicity. I'm trying to do this fairly compactly, and
I cannot get this to work.
A simple version of the code that I am trying to run is:
names <- c("white", "black", "asian", "hispanic")
for(j in
Have a look at ?split
Hadley
On Mon, Nov 9, 2009 at 10:41 AM, agm. amur...@vt.edu wrote:
Hello,
I'm trying to run a loop that will subset my data into specific sets by
regions and by race/ethnicity. I'm trying to do this fairly compactly, and
I cannot get this to work.
A simple version
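To sketch what the ?split pointer buys here (made-up data; the region and race/ethnicity labels are placeholders): split() on the interaction of the two factors produces every subset in one call, with no nested loop:

```r
set.seed(1)
dat <- data.frame(region = sample(c("NE", "SW"), 40, replace = TRUE),
                  race = sample(c("white", "black", "asian", "hispanic"),
                                40, replace = TRUE),
                  y = rnorm(40))
# one named list entry per region x race combination actually present
subsets <- split(dat, list(dat$region, dat$race), drop = TRUE)
names(subsets)   # e.g. "NE.white", "SW.hispanic", ...
```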
Hi!
When checking the validity of a model for a large number
of experimental data sets, I thought it would be interesting
to inspect the information provided by
the summary method programmatically.
Still, I could not find out which method to
use to get at those data.
Example (not my real world data, but to
On Mon, 9 Nov 2009, Johann Hibschman wrote:
I'm using R 2.10.0, with zoo 1.5-8. The release notes for zoo 1.5-8
claim a bug with unique for yearmon objects has been fixed, but I'm
still having problems.
1. Please report such problems (also) to the maintainers and not (only)
to the list.
2.
Hello, not understanding the output of prcomp, I reduced the number of
components, and the output continues to show a cumulative 100% of the
variance explained, which can't be the case when dropping from 8 components
to 3.
How do i get the output in terms of the cumulative % of the total
variance, so
Hello,
I have installed R version 2.9.2, and everything
works fine, but when attempting to install version 2.10.0
I get:
running code in 'datasets.R' ... OK
comparing 'datasets.Rout' to './datasets.Rout.save' ... OK
make[4]: Leaving directory
Also,
formatC(3,width=3,flag='0')
formatC and sprintf are both referenced in the See Also part of the
format help page.
-Don
At 9:42 AM -0600 11/9/09, Marc Schwartz wrote:
On Nov 9, 2009, at 9:34 AM, anna freni sterrantino wrote:
Hi !
I'd like to create
a vector
that has this kind of
okay, an extreme case, only 1 component, explains 100%, something weird
going on..
princ = prcomp(df[,-1], rotate = "varimax", scale = TRUE, tol = .95)
summary(princ)
Importance of components:
                         PC1
Standard deviation      1.38
Proportion of Variance  1.00
Cumulative Proportion
Hello everyone,
I am trying to do within subjects repeated measures anova followed by the
test of sphericity (sample dataset below).
I am able to get either mixed model or linear model anova and TukeyHSD, but
have no luck with Repeated-Measures Assuming Sphericity or Separate
Sphericity Tests.
I
Look at it linearly?
On Mon, Nov 9, 2009 at 11:45 AM, zubin binab...@bellsouth.net wrote:
okay, an extreme case, only 1 component, explains 100%, something weird
going on..
princ = prcomp(df[,-1], rotate = "varimax", scale = TRUE, tol = .95)
summary(princ)
Importance of components:
In the first PCA you ask how much variance of the EIGHT (!) variables is
captured by the first, second, ..., eighth principal component.
In the second PCA you ask how much variance of the THREE (!) variables is
captured by the first, second, and third principal component.
Of course you need only
On Nov 9, 2009, at 12:30 PM, Dobrozemsky Georg wrote:
Hi!
When checking validity of a model for a large number
of experimental data I thought it to be interesting
to check the information provided by
the summary method programmatically.
Still I could not find out which method to
use to get
All 8 variables are still in the analysis; I am just reducing the number
of components being estimated, I thought.
Example: 1 component, 8 variables; there is no way 1 component explains
100% of the variance of the 8-variable data set.
princ = prcomp(df[,-1], rotate = "varimax", scale = TRUE, tol = .95)
[corrected dataset below]
Hello everyone,
I am trying to do within subjects repeated measures anova followed by the
test of sphericity (sample dataset below).
I am able to get either mixed model or linear model anova and TukeyHSD, but
have no luck with Repeated-Measures Assuming Sphericity or
Hi: I'm not familiar with prcomp, but with the principal components function
in Bill Revelle's psych package, one can specify the number of components
one wants to use to build the closest covariance matrix. I don't know
what tol is doing in your example, but it's not doing that.
Hello all:
I would like to test whether there are treatment effects on decomposition
rate, and I would like to inquire about the best, most appropriate means
using R.
I have plant decomposition data that is generally considered to follow an
exponential decay model as follows:
Wt = Wi * exp(-k *
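One natural route (a sketch on synthetic data, not the poster's): fit the decay curve with nls(); treatment effects on k could then be tested by comparing a shared-k fit against a treatment-specific-k fit with anova() on the nested models.

```r
set.seed(1)
day <- rep(seq(0, 10, by = 2), each = 3)
W   <- 5 * exp(-0.3 * day) * exp(rnorm(length(day), sd = 0.05))  # true k = 0.3
fit <- nls(W ~ Wi * exp(-k * day), start = list(Wi = 5, k = 0.2))
coef(fit)
```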
Dear Sergios,
For repeated-measures designs, the Anova() function requires a multivariate
linear model fit to the wide version of the data set, in which each of the
repeated measures appears as a separate variable. It is necessary that you
have the same occasions observed for all subjects. For
Somebody might have done this, but in fact it's not difficult to compute the
marginal effects yourself (which is the beauty of R). For a univariate
logistic regression, I illustrate two ways to compute the marginal effects
(one corresponds to the mfx, the other one to the margeff command in
Hi All,
I have a dataset with a column named Condition,
Sample   Condition
 1       c20
 2       c20
 3       c10
 4       c10
 5       c9
 6       c9
 7       c5
 8       c5
 9       c20
10       c10
Could you let me know the fastest way to change
Mostly it is a conceptual difference. An unordered factor is one where there
is no inherent order to the levels, examples:
Color of car
Race
Nationality
Sex
State/Country of birth
Etc.
In the above, the order of the levels could be changed without it really
changing the meaning (think of the
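The distinction shows up directly in code (a minimal sketch):

```r
f1 <- factor(c("red", "blue"))                                            # unordered
f2 <- factor(c("low", "high"), levels = c("low", "high"), ordered = TRUE) # ordered
is.ordered(f1)   # FALSE
f2[1] < f2[2]    # TRUE: comparisons respect the declared level order
f1[1] < f1[2]    # NA, with a warning: '<' is not meaningful for unordered factors
```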
Dear Daniel,
Thanks for your prompt reply.
Indeed I was aware of the possibility of computing at mean(x) or doing the
mean afterwards.
But what you suggest is marginal effects, right? Isn't that the effect on y
of a 1-unit increase in x (what I was not interested in)? I'm interested in
the
I don't know if it's the fastest way, but you can get there with
as.character(factor(exData$Condition, levels=c("c20", "c10", "c9",
"c5"), labels=c("AA", "BB", "CC", "DD")))
-Ista
On Mon, Nov 9, 2009 at 2:06 PM, phoebe kong sityeek...@gmail.com wrote:
Hi All,
I have a dataset with a column named Condition,
The output of summary() on a prcomp object displays the cumulative amount of variance
explained relative to the total variance explained by the principal components PRESENT
in the object. So it is always guaranteed to reach 100% at the last principal component
present. You can see this from the code in
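A quick demonstration of that point on random data (a sketch; the data are made up):

```r
set.seed(1)
X <- matrix(rnorm(100 * 8), 100, 8)
summary(prcomp(X, scale. = TRUE))             # 8 PCs; cumulative reaches 1 at PC8
summary(prcomp(X, scale. = TRUE, tol = 0.95)) # fewer PCs retained; still ends at 1
```

As far as I can tell, prcomp() has no rotate argument, so the rotate="varimax" in calls like the one quoted earlier is silently absorbed by ... and has no effect.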
Hi,
I have a dataset which has 10 numerical and 2 categorical variables (which i
code using indicator function as we usually do...one has 3 levels, other has
2 levels).I was wondering how I could use the bctrans function available in
library(alr3) to get a desired transformation on my model which
Hello -
I am trying to figure out R's transformation for interaction terms in a
linear regression.
My simple background understanding is that interaction terms are
generally calculated by multiplying the centred (0-mean) variables with
each other and then doing the regression. However, in this
Hi !
I have a vector:
vec <- c(TRUE, TRUE, TRUE, TRUE, FALSE, FALSE, FALSE, FALSE, TRUE, TRUE, FALSE)
and I'm looking for a method which lets me get only the leading values equal to
TRUE from this vector. That is, I want to get a vector:
vec_out = TRUE TRUE TRUE TRUE
or the positions of the values equal to TRUE:
On Nov 9, 2009, at 1:44 PM, Grzes wrote:
Hi !
I have a vector:
vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
and I'm looking for a method which let me get only the first values
equal
TRUE from this vector. It means that I want to get a vector:
vec_out = TRUE TRUE
Yes, it is the marginal effect. The marginal effect (dy/dx) is the slope of
the gradient at x. It is thus NOT for a 1 unit increase in x, but for a
marginal change in x. Remember that, for nonlinear functions, the marginal
effect is more accurate in predicting a change in y the smaller (!) the
How about
vec[1:min(which(vec == FALSE)) - 1]
This will return a zero-length vector if vec[1] is FALSE
Nikhil
On 9 Nov 2009, at 2:38PM, David Winsemius wrote:
vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
Use which()
vec_out <- which(vec == TRUE)
-
Justin Montemarano
Graduate Student
Kent State University - Biological Sciences
http://www.montegraphia.com
Hi,
One way would be,
vec[ cumsum(!vec)==0 ]
HTH,
baptiste
2009/11/9 Grzes gregori...@gmail.com:
Hi !
I have a vector:
vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
and I'm looking for a method which let me get only the first values equal
TRUE from this vector.
Hi all,
I'm creating a jpg file with width=1500, height=1000.
it is a graph showing 24 boxplots horizontally.
The x-axis coordinates were not displayed in the jpg file, but they DO
display in a pdf file.
Does anyone know what setting I should pay attention to in order to have
the x
I should heed my own words: the 1% effect based on the marginal effect would
be
0.01 * abs(x) * margeff
I omitted the abs(x) in the last paragraph of my last email. Based on the
marginal effect, the expected change in probability would be 0.01*0.69*0.02,
which is 0.000138. This is not all too
I've run into a problem building a package with R-2.10.0 that I haven't
encountered with prior versions. Build and check work fine. However, I
encounter an error at the install phase indicating it cannot open a perl
script, which is below. For completeness, I've copied my path as well showing
Try this:
head(vec, sum(cumprod(vec)))
The positions:
which(head(vec, sum(cumprod(vec))))
On Mon, Nov 9, 2009 at 4:44 PM, Grzes gregori...@gmail.com wrote:
Hi !
I have a vector:
vec= TRUE TRUE TRUE TRUE FALSE FALSE FALSE FALSE TRUE TRUE FALSE
and I'm looking for a method which let me
Based on what you suggested I did the following:
1. Dataset$Sessn <- as.factor(Dataset$Sessn)
2. mod <- lm(cbind(Sessn==1, Sessn==2) ~ Trtmt, data=Dataset)
3. idata <- data.frame(Sessn=factor(1:2))
4. Anova(mod, idata=idata, idesign=~Sessn)
ERROR: The error SSP matrix is apparently of deficient
On Mon, 9 Nov 2009, Achim Zeileis wrote:
On Mon, 9 Nov 2009, Johann Hibschman wrote:
I'm using R 2.10.0, with zoo 1.5-8. The release notes for zoo 1.5-8
claim a bug with unique for yearmon objects has been fixed, but I'm
still having problems.
1. Please report such problems (also) to the
Have you tried ezANOVA from the ez package? It attempts to provide a
simple user interface to car's Anova() (and, when that fails, aov).
On Mon, Nov 9, 2009 at 1:44 PM, Sergios (Sergey) Charntikov
sergios...@gmail.com wrote:
Hello everyone,
I am trying to do within subjects repeated measures
Thanks for your ideas. They are really helpful for me to think about my
question.
Cheers,
2009/11/9 David Winsemius dwinsem...@comcast.net
On Nov 9, 2009, at 8:45 AM, rusers.sh wrote:
Hi Johann,
Excellent. That is what i really want. A little problem is why the c.n
does not exist. Should
I've looked through ?split and run all of the code, but I am not sure that I
can use it in such a way to make it do what I need. Another suggestion was
using lists, but again, I am sure that the process can do what I need, but
I am not sure it would work with so many observations.
I might have
Hi,
I am trying to overlay a dendrogram on top of an image plot, but I run into
the problem of the nodes at the root of the dendrogram not aligning properly
with the columns on my image. A simple solution to do this is to use the
function heatmap which automatically plots the tree on the top
On 11/9/2009 3:13 PM, Doran, Harold wrote:
I've run into a problem building a package with R-2.10.0 that I haven't
encountered with prior versions. Build and check work fine. However, I
encounter an error at the install phase indicating it cannot open a perl
script, which is below. For
On 11/9/2009 1:00 PM, STEFFEN Julie wrote:
Hello,
I have a question about persp function:
I made my classical matrix with x, y and z variables, and I don't know why I
obtain a 3D image with overestimated heights.
How can you tell it overestimates heights? There's no scale given.
Duncan Murdoch
Indeed, that's the solution. I normally take great care in the windows
guidelines for package build. There are many nuances, and so your continued
support on this is much appreciated.
-Original Message-
From: Duncan Murdoch [mailto:murd...@stats.uwo.ca]
Sent: Monday, November 09,
Hello R Forum users,
I was hoping someone could help me with the following problem. Consider the
following toy dataset:
Accession   SNP_CRY2   SNP_FLC   Phenotype
1           NA         A         0.783143079
2           BQ         A         0.881714811
3           BQ         A         0.886619488
4           AQ         B         0.416893034
5           AQ         B
I've built a package that contains only two functions for a test run. They are:
g <- function(x){
  x <- x^2
  class(x) <- "foo"
  x
}
print.foo <- function(x, ...){
  cat("This is a test:\n")
  cat(x, "\n")
  invisible(x)
}
Simply testing these functions in
Hi all,
It is recommended in ?'if' that we use 'else' right after '}' instead
of starting a new line, but I noticed deparse() will separate '}' and
'else' when the 'if...else' clause is used inside {...} (e.g. function
body). Here is an example:
## if/else inside {}
You're looking for the assign() function.
See the first example in the help page for assign()
Something like
assign( paste( j,'.cd',i,'es.wash',sep='') , 1 )
instead of
names.cd[i].es.wash <- 1
paste() assembles the name as a character string, and then assign()
assigns a value to a
Tried ezANOVA, no luck with my particular dataset.
Sincerely,
Sergios Charntikov (Sergey), MA
Behavioral Neuropharmacology Lab
Department of Psychology
University of Nebraska-Lincoln
Lincoln, NE 68588-0308 USA
On Mon, Nov 9, 2009 at 2:25 PM, Mike Lawrence mike.lawre...@dal.ca wrote:
On 5/11/2009, at 6:49 PM, Deepayan Sarkar wrote:
On Tue, Nov 3, 2009 at 3:57 PM, Rolf Turner
r.tur...@auckland.ac.nz wrote:
(1) Is there a (simple) way of getting cloud() to do *both*
type=p and type=h? I.e. of getting it to plot the points
as points *and* drop a perpendicular line to the
Hi Joe,
You are right about the Behrens-Fisher problem. I was merely referring to
situations where the distribution of error terms is - assumed to be - known,
and not necessarily equal for all observations.
Thanks for pointing this out.
Best wishes,
Guido
--- On Mon, 9/11/09,
Some variation of the following might be want you want:
df=data.frame(sex=sample(1:2,100,replace=T),snp.1=rnorm(100),snp.15=runif(100))
df$snp.1[df$snp.1 > 1.0] <- NA; #put some missing values into the data
x=grep('^snp',names(df)); x #which columns begin with 'snp'
apply(df[,x],2,summary)
#or
Sergios (Sergey) Charntikov wrote:
Based on what you suggested I did the following:
1. Dataset$Sessn <- as.factor(Dataset$Sessn)
2. mod <- lm(cbind(Sessn==1, Sessn==2) ~ Trtmt, data=Dataset)
3. idata <- data.frame(Sessn=factor(1:2))
4. Anova(mod, idata=idata, idesign=~Sessn)
ERROR: The error
My R process has been killed a few times, although the system
administrator did not do so. It happened when R attempted to allocate
a lot of memory. I'm wondering whether R would spontaneously kill
itself if it cannot allocate enough memory?
__
Because print.foo is not defined if you only include the function g
in your namespace.
remko
-
Remko Duursma
Post-Doctoral Fellow
Centre for Plants and the Environment
University of Western Sydney
Hawkesbury Campus
Richmond NSW 2753
Dept of
Hi, R users,
I'm trying to transform a matrix A into B (see below). Anyone knows how to
do it in R? Thanks.
Matrix A (zone to zone travel time):

zone  z1   z2   z3
z1    0    2.9  4.3
z2    2.9  0    2.5
z3    4.3  2.5  0

B:

from  to  time
z1    z1  0
z1    z2  2.9
z1    z3  4.3
z2    z1  2.9
z2    z2  0
z2    z3  2.5
z3    z1  4.3
z3    z2  2.5
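One way to do the reshaping (a sketch; as.table() turns a named matrix into the long form directly):

```r
A <- matrix(c(0,   2.9, 4.3,
              2.9, 0,   2.5,
              4.3, 2.5, 0),
            nrow = 3, byrow = TRUE,
            dimnames = list(c("z1", "z2", "z3"), c("z1", "z2", "z3")))
B <- as.data.frame(as.table(A))
names(B) <- c("from", "to", "time")
B <- B[order(B$from, B$to), ]   # from-major order, as in the desired output
B
```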
Dear Daniel,
Thanks for your reply.
Elasticity (what I am looking for) is defined as: dln(x)/dln(y) = dx/dy *
y/x (in words, the derivative of ln(x) with respect to ln(y), which equals the
derivative of x with respect to y, times the ratio between y and x)
(http://en.wikipedia.org/wiki/Elasticity_(economics)). I
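A sketch of computing such an elasticity for a logit model at the sample mean of x (synthetic data): for a logit, dp/dx = beta * p * (1 - p), so the elasticity (dp/dx) * (x/p) simplifies to beta * (1 - p) * x.

```r
set.seed(1)
x <- rnorm(200, mean = 2)
y <- rbinom(200, 1, plogis(-1 + 0.5 * x))
fit <- glm(y ~ x, family = binomial)
b <- coef(fit)[["x"]]
p <- plogis(coef(fit)[[1]] + b * mean(x))   # fitted probability at mean(x)
elasticity <- b * (1 - p) * mean(x)         # % change in p for a 1% change in x
elasticity
```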
-Original Message-
From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Hongwei Dong
Sent: Monday, November 09, 2009 2:24 PM
To: R-help Forum
Subject: [R] How to transform the Matrix into the way I want it ???
Hi, R users,
I'm trying to
This is not an answer to your question, but I have used SparseM
package to represent large travel time matrices efficiently.
?as.matrix.ssr
if the traveltime matrix is symmetric.
On 9 Nov 2009, at 5:24PM, Hongwei Dong wrote:
Hi, R users,
I'm trying to transform a matrix A into B (see
Hi,
When I tried to merge two datasets (a many-to-many merge), I ran into a
problem with how to stop a possible loop in the sampling arguments.
### My code is as follows. ###
Hi all,
I hope that there might be some statistician out there to help me for a
possible explanation for the following simple question.
Y1 <- lm(y ~ t1 + t2 + t3 + t4 + t5, data=temp)  # ordinary linear model
library(gam)
Y2 <- gam(y ~ lo(t1) + lo(t2) + lo(t3) + lo(t4) + lo(t5), data=temp)  # additive
On Nov 9, 2009, at 5:24 PM, Hongwei Dong wrote:
Hi, R users,
I'm trying to transform a matrix A into B (see below). Anyone knows
how to
do it in R? Thanks.
Matrix A (zone to zone travel time):

zone  z1   z2   z3
z1    0    2.9  4.3
z2    2.9  0    2.5
z3    4.3  2.5  0

ztz <- read.table(textConnection( z1 z2