I tried looking for help but I couldn't locate the exact solution.
I have data that has several variables. I want to do several sample
simulations using only two of the variables (e.g. say you have data on
people and the properties they own; you only want to check how many in the
samples will come up
Hi,
I've been working with R for the past few days trying to get a proper table
set up, but I have not had any luck at all, and the manuals I've read have not
worked for me either. I was wondering if anyone here would be able to help me
in setting this data up in R both as a table and as a
Hello,
I am thinking of using the fastICA package to cluster microarray data,
but one question stops me: can I find out how much variance each
component explains (or all components together)?
I will be very thankful for the help.
Thanks,
Pavel
Hi Laura,
Have you read this documentation
http://cran.r-project.org/doc/manuals/R-data.pdf ? If not, you should.
Specifically, see read.table() and read.csv().
Once you have that working, you can look for functions that can import xls or
xlsx spreadsheets directly, but you're not that far yet.
Well, knowing what your data looks like would definitely help!
Say your data object is called mydata, just paste the output from
dput(mydata) into the email you want to send to the list.
Ivan
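A minimal sketch of that workflow, with a made-up data frame standing in for the real file (the filename and columns here are illustrative, not Laura's actual data):

```r
# Typically you would start from a file, e.g.:
# mydata <- read.csv("mydata.csv", header = TRUE)   # or read.table() for other layouts
mydata <- data.frame(id = 1:3, owner = c("A", "B", "C"))  # stand-in for the real file
dput(head(mydata))  # paste this output into your post so the list can reproduce your data
```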
On 3/1/2011 04:18, bwaxxlo wrote:
I tried looking for help but I couldn't locate the exact
Hi:
This is *really* ugly, but given the number of variables you have in mind,
it seems rather necessary, at least to me.
# Given an original data frame dataf, find the max ID number:
Mvars <- names(dataf)[grep('^M', names(dataf))]
# Pick out the number of variables that start with M:
n <-
Hi Again,
Thanks very much for your response. It seems my example got rearranged
(transposed?) after I posted it. Hopefully this example will be more
clear. I have one file (ex. sheet 1) that will have a column for
individuals (ind) and a column for the date (date). I would like to merge
this
Thanks in advance.
I'm having trouble with saving data.
I want to use the same data, from the Ecdat package, in different statistics
programs (Excel, Stata and MATLAB).
The data I want to use is
library(Ecdat)
data(Housing)
and I want to extract this data out of R in *.dta and *.xls formats.
So,
for excel .. see library(xlsx)
On Tue, Mar 1, 2011 at 2:57 PM, JoonGi joo...@hanmail.net wrote:
Thanks in advance.
I'm having a trouble with data saving.
I want to run the same data which is in Ecdat library at different statistic
programs(excel, stata and matlab)
The data I want to use
Hello Everyone
I have just upgraded my PC to Windows 7 (64 bit) and I have installed R
2.12.2. R seems to be working fine.
I am having problems getting RWinEdt working with it though.
I have tried installing WinEdt 6.0 and WinEdt 5.5. But both fail with the
same error using R as 64 bit or 32
Hi,
It is a bit unclear what it is you are trying to do, as mentioned in
replies by a variety of people previously. If you are just trying to get
your data into R and label rows / columns, then
tt = matrix(c(24,134,158,9,52,61,23,72,95,12,15,27), ncol=3, byrow=TRUE)
rownames(tt) = c("None",
On 11-02-28 11:17 PM, Jeroen Ooms wrote:
I am trying to encode arbitrary S3 objects by recursively looping over the
object and all its attributes. However, there is an unfortunate feature of
the attributes() function that is causing trouble. From the manual for
?attributes:
The names of a
On Tue, 1 Mar 2011, Santosh Srinivas wrote:
for excel .. see library(xlsx)
(That's for Excel >= 2007 only; it writes .xlsx, not the .xls that was requested.)
Simply consult the relevant manual, 'R Data Import/Export': all of
this is covered there. There is a very new package XLConnect that is
only covered in
Hello,
for testing coefficients of lm(), I wrote the following function (with
the kind support of this mailing list):
# See Verzani, simpleR (pdf), p. 80
coeff.test <- function(lm.result, idx, value) {
# idx = 1 is the intercept, idx > 1 the other coefficients
# null hypothesis: coeff = value
Dear R-help,
This is an example in the {Hmisc} manual under rcorr.cens function:
set.seed(1)
x <- round(rnorm(200))
y <- rnorm(200)
round(rcorr.cens(x, y, outx=FALSE), 4)
C Index   Dxy   S.D.   n   missing
uncensored   Relevant Pairs   Concordant
Thanks in advance.
I want to derive correlations of variables in a dataset
Specifically
library(Ecdat)
data(Housing)
attach(Housing)
cor(lotsize, bathrooms)
this code gives only the correlation between two variables.
But I want to examine all the combinations of variables in this
Hello there,
I have a problem concerning bootstrapping in R - especially focusing on the
resampling part of it. I try to sum it up in a simplified way so that I would
not confuse anybody.
I have a small database consisting of 20 observations (basically numbers from 1
to 20, I mean: 1, 2, 3,
Dear all,
I am facing a problem. I am trying to install packages using a proxy, but I
am not able to call the setInternet2 function, either with the small or
capital s. What package do I have to call then? And, could there be a reason
why this does not function?
Thanks,
Marco
Matt:
Thanks for your prompt reply.
The disparity between the bootstrap and sandwich variance estimates
derived when modeling the highly skewed outcome suggest that either
(A) the empirical robust variance estimator is underestimating the
variance or (B) the bootstrap is breaking down. The
Date: Tue, 1 Mar 2011 02:41:00 -0800
From: joo...@hanmail.net
To: r-help@r-project.org
Subject: [R] Is there any Command showing correlation of all variables in a
dataset?
Thanks in advance.
I want to derive correlations of variables
Hi,
I am facing problem for classification based on graph kernels. we calculated
the kernel between two molecule data set.Then I am confused about
classification
--
View this message in context:
http://r.789695.n4.nabble.com/Hi-tp3329650p3329650.html
Sent from the R help mailing list archive at Nabble.com.
Date: Mon, 28 Feb 2011 19:18:18 -0800
From: kadodamb...@hotmail.com
To: r-help@r-project.org
Subject: [R] Simulation
I tried looking for help but I couldn't locate the exact solution.
I have data that has several variables. I want to do several
R 2.10
Windows Vista
Is it possible to run a variance-components analysis using lme? I looked at
Pinheiro and Bates' book and don't see code that will perform these analyses.
If the analyses can not be done using lme, what package might I try?
Thanks,
John
John David Sorkin M.D., Ph.D.
Chief,
1. Using offset(logweight) in coxph is the same as using an 'offset
logweight;' statement in SAS, and neither is the same as case weights.
2. For a nested case control, which is what you said you have, the
strata controls who is in what risk set. No trickery with start,stop
times is needed. It
Use VarCorr() and then calculate the percentage of variance for each
component from the output of VarCorr().
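For example, a sketch using the recommended nlme package and its built-in Orthodont data (John's actual model and data will differ):

```r
library(nlme)  # recommended package shipped with R
fit <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont)
vc <- VarCorr(fit)                    # variance components as a character matrix
vars <- as.numeric(vc[, "Variance"])  # intercept and residual variances
pct <- 100 * vars / sum(vars)         # percentage of total variance per component
pct
```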
On Tuesday, March 1, 2011 at 6:55 AM, John Sorkin wrote:
R 2.10
Windows Vista
Is it possible to run a variance-components analysis using lme? I looked at
Pinheiro and Bates' book and don't see code
http://r.789695.n4.nabble.com/file/n3329821/workdata.csv workdata.csv
The code I posted is exactly what I am running. What you need is this data.
Here is the code again.
hbwmode <- mlogit.data("worktrips.csv", shape="long", choice="CHOSEN",
alt.var="ALTNUM")
hbwmode <- mlogit.data(hbwtrips, shape="long",
Just to add to this (I've been looking through the archives): the problem with
displaying unicode fonts in pdf documents in R.
If you can use the Cairo package to create pdfs on Mac, it seems quite happy
with pushing unicode characters through (though whether they display is
probably still font-family dependent).
You determine the variance explained by *any* unit vector by taking its inner
product with the data points, then finding the variance of the results. In the
case of FastICA, the variance explained by the ICs collectively is exactly the
same as the variance explained by the principal components
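An illustration of the "project, then take the variance" recipe described above, on simulated data (not Pavel's microarray matrix):

```r
set.seed(1)
X <- matrix(rnorm(200 * 3), 200, 3)  # 200 observations, 3 variables
w <- c(1, 0, 0)                      # any direction; here the first axis
w <- w / sqrt(sum(w^2))              # make it a unit vector
v <- as.numeric(var(X %*% w))        # variance explained by direction w
v_total <- sum(apply(X, 2, var))     # total variance in the data
c(explained = v, total = v_total)
```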
Vikki,
The formula you used for std. error of C is not correct. C is not a simple
per-observation proportion.
SD in the output is the standard error of Dxy. Dxy = 2(C - .5). Backsolve
for std err of C.
Variation in Dxy or C comes from the usual source: sampling variability.
You can also
On Tue, Mar 1, 2011 at 4:27 AM, JoonGi joo...@hanmail.net wrote:
Thanks in advance.
I'm having a trouble with data saving.
I want to run the same data which is in Ecdat library at different statistic
programs(excel, stata and matlab)
The data I want to use is
library(Ecdat)
Hi to everyone,
if the parameter estimate is 0.196 and its standard
error is 0.426, can I say that this parameter is not significant in the
model?
Thank you very much
Pippo
Sure, why not?
You do realize, do you not, that no one has the slightest idea of what you are
doing?
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
--- On Tue, 3/1/11, danielepippo
Jim,
Thanks for pointing me to this article. The authors argue that the
bootstrap intervals for a robust estimator may not be as robust as the
estimator. In this context, robustness is measured by the breakdown
point, which is supposed to measure robustness to outliers. Even so, the
authors
Here is a reply by Bart:
Yes you're right (I should have taken off my glasses and looked closer).
However, the argument is essentially the same:
Suppose you have a solution with a,b,k,l. Then for any positive c, [a+b-bc]
+ [bc] + (bc) *exp(kl')exp(-kx) is also a solution, where l'
= l - log(c)/k
Hi All,
I am using the package quantreg to create a 'model' that I can then use to
predict the response variable (volume) from a larger set of explanatory
variables (environmental factors):
i.e.:
# model:
fit <- rqss(volume ~ qss(env.factor1, lambda=1) + qss(env.factor2, lambda=1),
tau = 0.9)
?cor answers that question. If Housing is a dataframe, cor(Housing) should do
it. Surprisingly, ??correlation doesn't point you to ?cor.
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of JoonGi
Sent: Tuesday, March 01, 2011 5:41
Hello there,
I have a problem concerning bootstrapping in R - especially focusing on the
resampling part of it. I try to sum it up in a simplified way so that I would
not confuse anybody.
I have a small database consisting of 20 observations (basically numbers from 1
to 20, I mean: 1, 2, 3,
Problem on flexmix when trying to apply signature developed in one model to a
new sample.
Dear R Users, R Core Team,
I have a problem when trying to determine the classification of the tested
cases using two variables with the flexmix function:
After importing the database and creating
Here are a couple of thoughts.
If you want to use the boot package then the statistic function you give it
just receives the bootstrapped indexes, you could test the indexes for your
condition of not more than 5 of each and if it fails return an NA instead of
computing the statistic. Then in
A simple way of sampling with replacement from 1:20, with the additional
constraint that each number can be selected at most five times is
sample(rep(1:20, 5), 20)
HTH,
Giovanni
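A quick check of the constraint in that one-liner (seed chosen arbitrarily):

```r
set.seed(42)
s <- sample(rep(1:20, 5), 20)  # draw 20 values; each of 1:20 appears at most 5 times
max(table(s))                  # cannot exceed 5, since the pool holds only 5 copies of each
```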
On Tue, 2011-03-01 at 11:30 +0100, Bodnar Laszlo EB_HU wrote:
Hello there,
I have a problem concerning
hello, i tried to run playwith but :
library(playwith)
Loading required package: lattice
Loading required package: cairoDevice
Loading required package: gWidgetsRGtk2
Loading required package: gWidgets
Error in inDL(x, as.logical(local), as.logical(now), ...) :
unable to load shared object
I would like to use POSIX classes to store dates and extract components of
dates. Following the example in Spector (Data Manipulation in R), I
create a date
mydate = as.POSIXlt('2005-4-19 7:01:00')
I then successfully extract the day with the command
mydate$day
[1] 19
But when I try to
Hi, I'm new to R and stats, and I'm trying to speed up the following sum:
for (i in 1:n){
  C = C + (X[i,] %o% X[i,])  # the sum of outer products - this is very slow
                             # according to Rprof()
}
where X is a data matrix (nrows=1000 X ncols=50), and n=1000. The sum has to
be
My previous posting seems to have got mangled. This reposts it.
On Mar 01, 2011; 03:32pm gmacfarlane wrote:
workdata.csv
The code I posted is exactly what I am running. What you need is this
data. Here is the code again.
hbwmode <- mlogit.data("worktrips.csv", shape="long", choice="CHOSEN",
cor.prob function gives matrix of correlation coefficients and p-values together
### Function for calculating correlation matrix, corrs below diagonal,
### and P-values above diagonal
cor.prob <- function(X, dfr = nrow(X) - 2) {
  R <- cor(X)
  above <- row(R) < col(R)
  r2 <- R[above]^2
  Fstat <-
On Tue, Mar 1, 2011 at 5:58 PM, R Heberto Ghezzo, Dr
heberto.ghe...@mcgill.ca wrote:
hello, i tried to run playwith but :
library(playwith)
Loading required package: lattice
Loading required package: cairoDevice
Loading required package: gWidgetsRGtk2
Loading required package: gWidgets
On Tue, Mar 1, 2011 at 11:41 AM, JoonGi joo...@hanmail.net wrote:
Thanks in advance.
I want to derive correlations of variables in a dataset
Specifically
library(Ecdat)
data(Housing)
attach(Housing)
cor(lotsize, bathrooms)
this code results only the correlationship between two
Month counts from 0 in POSIXlt objects, so that April is month 3 in your
example, January being month 0.
Year counts from 1900 in POSIXlt objects, so that 2005 should return as 105
in your example.
All of the other fields in POSIXlt should return values that you might
expect them to a priori.
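Illustrating those offsets with Seth's date (note the field is mday, not day):

```r
d <- as.POSIXlt('2005-4-19 7:01:00')
d$mday         # 19  - day of month
d$mon          # 3   - April, since months count from 0
d$year         # 105 - years since 1900
d$year + 1900  # 2005
```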
On 01.03.2011 12:00, Manta wrote:
Dear all,
I am facing a problem. I am trying to install packages using a proxy, but I
am not able to call the setInternet2 function, either with the small or
capital s. What package do I have to call then? And, could there be a reason
why this does not
What you're doing is breaking up the calculation of X'X
into n steps. I'm not sure what you mean by very slow:
X = matrix(rnorm(1000*50),1000,50)
n = 1000
system.time({C=matrix(0,50,50);for(i in 1:n)C = C + (X[i,] %o% X[i,])})
user system elapsed
0.096 0.008 0.104
Of course, you
Isn't the following the canonical (R-ish) way of doing this:
X = matrix(rnorm(1000*50),1000,50)
system.time({C1 = t(X) %*% X}) # Phil's example
C2 <- crossprod(X) # use crossprod instead
all.equal(C1,C2)
[1] TRUE
-Original Message-
From: r-help-boun...@r-project.org
On 2011-03-01 06:38, Schatzi wrote:
Here is a reply by Bart:
Yes you're right (I should have taken off my glasses and looked closer).
However, the argument is essentially the same:
Suppose you have a solution with a,b,k,l. Then for any positive c, [a+b-bc]
+ [bc] + (bc) *exp(kl')exp(-kx) is
It says the function does not exist. The version is around 2.8; I can't check
right now. Is it because it's an older version? If so, is there any way to
do it differently?
Hi Seth,
Thanks so much for identifying the problem and explaining everything.
I think the first solution that you suggest--make sure the schema has
well defined types--would work the best for me. But, I have one
question about how to implement it, which is more about sqlite itself.
First, I
I'm adjusting values in a list based on a couple of matrices. One matrix
specifies the row to be taken from the adjustment matrix, while using the
aligned column values. I have an approach which works, but I am hoping to
find an approach with vectorization.
Here is code with my solution:
--
nids
On Tue, 1 Mar 2011, Seth W Bigelow wrote:
I would like to use POSIX classes to store dates and extract components of
dates. Following the example in Spector (Data Manipulation in R), I
create a date
mydate = as.POSIXlt('2005-4-19 7:01:00')
I then successfully extract the day with the
Many thanks for your response, and I am sorry I did not post correctly.
I have found dev.copy2eps() useful.
Emma
I'm not sure that is equivalent to sampling with replacement, since if the
first draw is 1, then the probability that the next draw will be one is
4/99 instead of the 1/20 it would be in sampling with replacement. I
think the way to do this would be what Greg suggested - something like:
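Greg's suggestion is truncated above; one hedged guess at the kind of redraw loop meant (my own sketch, not his code) is to keep plain sampling with replacement and reject draws that violate the constraint:

```r
set.seed(1)
repeat {
  s <- sample(1:20, 20, replace = TRUE)  # ordinary sampling with replacement
  if (max(table(s)) <= 5) break          # accept only draws meeting the "at most 5 each" rule
}
s
```

Unlike sample(rep(1:20, 5), 20), this keeps each draw's marginal probabilities those of true replacement sampling, conditioned on the constraint.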
I have MCMC output chains A and B for example, I want to produce trace plots
for them using the boa command line...
#loads boa
boa.init()
#reads in chains
boa.chain.add(boa.importMatrix('A'), 'A')
boa.chain.add(boa.importMatrix('B'), 'B')
#plot trace plot
problems arise here!
I know I can get
Consider the following:
library(lattice)
library(latticeExtra)
temp <- expand.grid(
subject = factor(paste('Subject', 1:3)),
var = factor(paste('Variable', 1:3)),
time = 1:10
)
temp$resp <- rnorm(nrow(temp), 10 *
Hey, thanks a lot guys!!! That really speeds things up!!! I didn't know %*%
and crossprod could operate on matrices. I think you've saved me hours of
calculation time. Thanks again.
system.time({C=matrix(0,50,50);for(i in 1:n)C = C + (X[i,] %o% X[i,])})
user system elapsed
0.45 0.00
No, that's not what I meant, but maybe I didn't understand the question.
What I suggested would involve sorting y, not x: sort the *distances*.
If you want to minimize the sd of a subset of numbers, you sort the numbers and
find a subset that is clumped together.
If the numbers are a function of
Hi:
On Tue, Mar 1, 2011 at 8:22 AM, Bodnar Laszlo EB_HU
laszlo.bod...@erstebank.hu wrote:
Hello there,
I have a problem concerning bootstrapping in R - especially focusing on the
resampling part of it. I try to sum it up in a simplified way so that I
would not confuse anybody.
I have a
dear R experts---
t <- 1:30
f <- function(t) { cat("f for", t, "\n"); return(2*t) }
g <- function(t) { cat("g for", t, "\n"); return(3*t) }
s <- ifelse( t%%2==0, g(t), f(t))
shows that the ifelse function actually evaluates both f() and g() for
all values first, and presumably then just picks left
Try this:
mapply(function(x, f)f(x), split(t, t %% 2), list(g, f))
On Tue, Mar 1, 2011 at 4:19 PM, ivo welch ivo...@gmail.com wrote:
dear R experts---
t <- 1:30
f <- function(t) { cat("f for", t, "\n"); return(2*t) }
g <- function(t) { cat("g for", t, "\n"); return(3*t) }
s <- ifelse( t%%2==0,
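A hedged check that the mapply/split idiom above reproduces the ifelse() result, while each function sees only its own elements (side effects like cat() omitted for brevity):

```r
f <- function(t) 2 * t
g <- function(t) 3 * t
t <- 1:30
parts <- mapply(function(x, fn) fn(x),
                split(t, t %% 2 == 0), list(f, g), SIMPLIFY = FALSE)
res <- unsplit(parts, t %% 2 == 0)  # reassemble in the original order
all.equal(res, ifelse(t %% 2 == 0, g(t), f(t)))
```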
Hi, I am experimenting with using glht() from multcomp package together with
coxph(), and glad to find that glht() can work on coph object, for example:
(fit <- coxph(Surv(stop, status > 0) ~ treatment, bladder1))
coxph(formula = Surv(stop, status > 0) ~ treatment, data = bladder1)
Hello,
I am trying to write math-type on a plot. Due to space limitations on
the plot, I want 2 short expressions written on top of each other. It
is certainly possible to write them in two separate calls, but that
involves fine-tuning locations and a lot of trial and error (and I'm
trying
I'm trying to do this in several ways but haven't had any results. I'm asked
to install Python, or Perl, etc. Can anybody suggest a direct, easy and
understandable way? Any help would be appreciated.
Thx.
I tried creating a .xlsx file using odbcConnectExcel2007 and adding a
worksheet with sqlSave. This seems to work, I am even able to query the
worksheet, but when I try opening the file in Excel I get the following
message: Excel cannot open the file 'test.xlx' because the file format or
file
write.table() using sep="," and the file extension .csv works great to pull
directly into Excel.
?write.table
Without more detail as to the problem, it is difficult to give a more specific
answer.
Adrian
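A minimal sketch of that route (the file name is arbitrary; a temporary file is used here):

```r
f <- file.path(tempdir(), "iris_sample.csv")
write.csv(head(iris), f, row.names = FALSE)  # comma-separated; Excel opens .csv directly
back <- read.csv(f)                          # round-trip check
head(back)
```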
-Original Message-
From: r-help-boun...@r-project.org
On Tue, Mar 1, 2011 at 10:06 AM, chen jia chen_1...@fisher.osu.edu wrote:
Hi Seth,
Thanks so much for identifying the problem and explaining everything.
I think the first solution that you suggest--make sure the schema has
well defined types--would work the best for me. But, I have one
You can copy it with the following function and then paste into Excel...
copy = function (df, buffer.kb=256) {
  write.table(df, file=paste("clipboard-", buffer.kb, sep=""),
              sep="\t", na='', quote=FALSE, row.names=FALSE)
}
From: maxsilva mmsil...@uc.cl
To:r-help@r-project.org
Date: 2/Mar/2011
maxsilva wrote:
Thx, but im looking for a more direct solution... my problem is very
simple, I have a dataframe and I want to create a standard excel
spreadsheet. My dataframe could be something like this
More or less the same question was answered several hours ago.
See
Or
?write.csv
which excel will import
On 1-Mar-11, at 12:17 PM, Berend Hasselman wrote:
maxsilva wrote:
Thx, but im looking for a more direct solution... my problem is very
simple, I have a dataframe and I want to create a standard excel
spreadsheet. My dataframe could be something like
On 2011-03-01 10:29, Jim Price wrote:
Consider the following:
library(lattice)
library(latticeExtra)
temp <- expand.grid(
subject = factor(paste('Subject', 1:3)),
var = factor(paste('Variable', 1:3)),
time = 1:10
)
thanks, Henrique. did you mean
as.vector(t(mapply(function(x, f) f(x), split(t, ((t %% 2)==0)),
list(f, g)))) ?
otherwise, you get a matrix.
it's a good solution, but unfortunately I don't think this can be used
to redefine ifelse(cond, ift, iff) in a way that is transparent. the
ift and
...and this is where we cue the informative article on least squares
calculations in R by Doug Bates:
http://cran.r-project.org/doc/Rnews/Rnews_2004-1.pdf
HTH,
Dennis
On Tue, Mar 1, 2011 at 10:52 AM, AjayT ajaytal...@googlemail.com wrote:
Hey thanks alot guys !!! That really speeds things up
Hi:
1. expression() in plotmath ignores control characters such as \n.
2. The workaround, discussed a couple of times on this list (hence in the
archives), is to use the atop function, so try something like
plot(0:1, 0:1, xaxt="n")
axis(side=1, at=.3, labels=expression(atop(paste("IFN-", gamma), paste("TNF-", alpha))))
You can use ^2 to get all 2 way interactions and ^3 to get all 3 way
interactions, e.g.:
lm(Sepal.Width ~ (. - Sepal.Length)^2, data=iris)
The lm.fit function is what actually does the fitting, so you could go directly
there, but then you lose the benefits of using . and ^. The Matrix package
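The ^2 expansion can be inspected without fitting anything, via terms() on a formula (variable names here are placeholders):

```r
labs <- attr(terms(y ~ (a + b + c)^2), "term.labels")
labs  # main effects plus all pairwise interactions
```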
This appeared today on the r-bloggers site and might be useful for you.
http://www.r-bloggers.com/release-of-xlconnect-0-1-3/
cheers
i
--- On Tue, 1/3/11, Steve Taylor steve.tay...@aut.ac.nz wrote:
From: Steve Taylor steve.tay...@aut.ac.nz
Subject: Re: [R] Export R dataframes to excel
To:
Help me please!
I would like to save a data table:
write.csv(random.t1, "place", dec=",", append = T, quote = FALSE, sep = ",",
qmethod = "double", eol = "\n", row.names=F)
It's OK!
But the rows of file
1,1,21042,-4084.87179487179,2457.66483516483,-582.275562799881
Hi:
As far as I can see, the problem has to do with the way you wrote f and g:
f(2)
f for 2
[1] 4
g(5)
g for 5
[1] 15
You output an unsaved character string along with a numeric result, but the
character string is not part of the return object since it is neither saved
as a name nor as an attribute. If
A new version of rms is now available on CRAN for Linux and Windows (Mac
will probably be available very soon). Largest changes include latex
methods for validate.* and adding the capability to force a subset of
variables to be included in all backwards stepdown models (single model or
Thx, but I'm looking for a more direct solution... my problem is very simple,
I have a dataframe and I want to create a standard Excel spreadsheet. My
dataframe could be something like this
id sex weight
1  M   5'8
2  F   6'2
3  F   5'5
4  M   5'7
5  F
Hi Greg,
Thanks for the help, it works perfectly. To answer your question,
there are 339 independent variables but only 10 will be used at one
time. So at any given line of the data set there will be 10 non-zero
entries for the independent variables and the rest will be zeros.
One more
Dear R-help members,
I'd like to run a binomial logistic stepwise regression with ten explanatory
variables and as many interaction terms as R can handle. I'll come up with
the right R command sooner or later, but my real question is whether and how
the criterion for the evaluation of the
Hello Everyone,
I've been learning to use R in my spare time over the past several months. I've
read about 7-8 books on the subject. Lately I've been testing what I've learned
by trying to replicate the analyses from some of my SAS books. This helps me
make sure I know how to use R properly
Thank you, that's exactly what I needed.
I am new to R, ordered logistic regression, and polr.
The Examples section at the bottom of the help page for polr
(http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/polr.html), which
fits a logistic or probit regression model to an ordered factor
response, shows
options(contrasts =
Also posted as
http://stats.stackexchange.com/questions/7720/how-to-understand-output-from-rs-polr-function-ordered-logistic-regression
.
Also, I read section 7.3 of Modern Applied Statistics with S by Venables
and Ripley (who wrote polr?), and I can still not answer many of these
questions.
On
Dear R users,
I am having some difficulty arranging some matrices and wondered if
anyone could help out. As an example, consider the following matrix:
a <- matrix(1:32, nrow = 4, ncol = 8)
a
     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,]    1    5    9   13   17   21   25   29
[2,]    2    6
An ifelse-like function that only evaluated
what was needed would be fine, but it would
have to be different from ifelse itself. The
trick is to come up with a good parameterization.
E.g., how would it deal with things like
ifelse(is.na(x), mean(x, na.rm=TRUE), x)
or
ifelse(x > 1, log(x),
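One possible parameterization, passing functions instead of evaluated vectors (my own sketch, not an existing construct; as the paragraph above notes, it does not cover cases like mean(x, na.rm = TRUE) that need the whole vector):

```r
# Each branch function is applied only to the elements that need it
ifelse2 <- function(cond, yes_fun, no_fun, x) {
  out <- x                         # preserves length and type
  out[cond]  <- yes_fun(x[cond])
  out[!cond] <- no_fun(x[!cond])
  out
}
x <- c(0.5, 2, 4)
res <- ifelse2(x > 1, log, identity, x)
```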
On Wed, Mar 2, 2011 at 9:36 AM, ivo welch ivo.we...@gmail.com wrote:
thanks, Henrique. did you mean
as.vector(t(mapply(function(x, f) f(x), split(t, ((t %% 2)==0)),
list(f, g)))) ?
otherwise, you get a matrix.
its a good solution, but unfortunately I don't think this can be used
to
yikes. you are asking me too much.
thanks everybody for the information. I learned something new.
my suggestion would be for the much smarter language designers (than
I) to offer us more or less blissfully ignorant users another
vector-related construct in R. It could perhaps be named %if%
I am not sure what you are saying your problem is. Is the format
incorrect? BTW, notice that write.csv does not have a 'sep'
parameter. Maybe you should be using write.table.
On Tue, Mar 1, 2011 at 4:36 PM, Tamas Barjak tamas.barja...@gmail.com wrote:
Help me please!
I would like to be
On Tue, Mar 1, 2011 at 1:36 PM, Tamas Barjak tamas.barja...@gmail.com wrote:
Help me please!
I would like to be saved a data table:
write.csv(random.t1, "place", dec=",", append = T, quote = FALSE, sep = ",",
qmethod = "double", eol = "\n", row.names=F)
It's OK!
But the rows of file
The probability OF the residual deviance is zero. The significance level for
the residual deviance according to its asymptotic Chi-squared distribution is a
possible criterion, but a silly one. If you want to minimise that, just fit no
variables at all. That's the best you can do. If you
Dear List,
I'm now working on MLE and OLS estimators. I just noticed that the
textbook argues they are jointly normally distributed. But how can one
prove this conclusion?
Thanks for your time in advance!
Best,
Ning
Yes, the format is incorrect. I have already tried the write.table, but it
didn't work.
2011/3/1 jim holtman jholt...@gmail.com
I am not sure what you are saying your problem is? Is the format
incorrect? BTW, notice that write.csv does not have a 'sep'
parameter. Maybe you should be using
On Tue, Mar 1, 2011 at 4:55 PM, Darcy Webber darcy.web...@gmail.com wrote:
Dear R users,
I am having some difficulty arranging some matrices and wondered if
anyone could help out. As an example, consider the following matrix:
a <- matrix(1:32, nrow = 4, ncol = 8)
a
[,1] [,2] [,3] [,4]