This is what I believe is referred to as suppression in regression, where
the correlation between the independent and the dependent
variable turns out to be of one sign whereas the regression coefficient
turns out to be of the opposite sign.
Read here about suppression:
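A small simulated illustration may help (my own sketch, not from the original post; all variable names are made up):

```r
# Suppression sketch: x2's marginal correlation with y is positive,
# yet its coefficient in the joint regression is negative.
set.seed(1)
x1 <- rnorm(500)
x2 <- 0.9 * x1 + rnorm(500, sd = 0.3)   # x2 strongly tracks x1
y  <- x1 - 0.5 * x2 + rnorm(500)        # y loads positively on x1, negatively on x2
cor(x2, y)                   # marginal correlation (positive in this setup)
coef(lm(y ~ x1 + x2))["x2"]  # regression coefficient (negative in this setup)
```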
Hi!
We have run a linear regression model with 3 explanatory variables and get the
output below.
Does anyone know what type of test the anova model below does, and why the two
tables give such different results in terms of significant variables?
Thanks!
/Sara
summary(model)
Call:
Hi all,
I have never worked with this kind of data before, so please help me out
with it.
I have the following data set in a csv file; it looks like the following:
Jan 27, 2010 16:01:24,000 125 - - -
Jan 27, 2010 16:06:24,000 125 - - -
Jan 27, 2010 16:11:24,000 176 - - -
Jan 27, 2010
Xin Zhang wrote:
Hi all,
I have never worked with this kind of data before, so please help me out
with it.
I have the following data set in a csv file; it looks like the following:
Jan 27, 2010 16:01:24,000 125 - - -
Jan 27, 2010 16:06:24,000 125 - - -
Jan 27, 2010 16:11:24,000 176 - - -
Jan
Hi, I need to create a function which generates a Binomial random number
without using the rbinom function. Do I need to use the choose function or
am I better just using a sample?
Thanks.
--
View this message in context:
http://r.789695.n4.nabble.com/Binomial-tp3516778p3516778.html
Sent from
On 12.05.2011 10:46, blutack wrote:
Hi, I need to create a function which generates a Binomial random number
without using the rbinom function. Do I need to use the choose function or
am I better just using a sample?
Thanks.
I think I remember other software that generates binomial data with
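For what it's worth, one textbook construction (my own sketch, not the poster's code) builds each Binomial(size, prob) draw as a sum of Bernoulli trials, with no call to rbinom():

```r
# One Binomial draw = number of successes in `size` independent
# Bernoulli(prob) trials; repeat n times for a vector of draws.
rbinom_manual <- function(n, size, prob) {
  replicate(n, sum(runif(size) < prob))
}
set.seed(1)
rbinom_manual(5, size = 10, prob = 0.3)
```

sample(0:1, size, replace = TRUE, prob = c(1 - prob, prob)) would work equally well for the inner Bernoulli step.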
On 28.04.2011 09:57, Truc wrote:
Dear Anna !
I have the same problem with Windows 7, 64-bit.
If I use R 2.12.2 with snow package 0.3-3, it works well. But with R 2.13.0
and the same snow package, it just hangs. I start R (Run as administrator),
turn off the firewall ... But it seems R 2.13.0
On 12-May-11 09:02:45, Alexander Engelhardt wrote:
On 12.05.2011 10:46, blutack wrote:
Hi, I need to create a function which generates a Binomial random
number
without using the rbinom function. Do I need to use the choose
function or
am I better just using a sample?
Thanks.
I think I
Date: Thu, 12 May 2011 10:43:59 +0200
From: jose-marcio.mart...@mines-paristech.fr
To: xzhan...@ucr.edu
CC: r-help@r-project.org
Subject: Re: [R] How to extract information from the following dataset?
Xin Zhang wrote:
Hi all,
I have
Hi there,
I am a relatively new user; I only downloaded R about a week ago.
I was getting along fine, but last night I tried to select 'save workspace'.
Since then R will not work, and I really really need it.
There are two error messages:
The first, in a pop-up box, is Fatal error: unable to
I have the following data set in a csv file; it looks like the following:
Jan 27, 2010 16:01:24,000 125 - - -
Jan 27, 2010 16:06:24,000 125 - - -
..
The first few columns are month, day, year, and time with OS3 accuracy. The
last number is the measurement I need to extract.
I wonder if there
I second David's first reply regarding the non-utility of individual
coefficients, especially for low-order terms. Also, nonlinearity can be
quite important. Properly modeling main effects through the use of flexible
nonlinear functions can sometimes do away with the need for interaction
terms.
Hi All,
a) Is it possible to estimate the strength of seasonality in timeseries
data. Say I have monthly mean prices of an ten different assets. I decompose
the data using stl() and obtain the seasonal parameter for each month. Is it
possible to order the assets based on the strength of
Delete the file .RData in your working directory and try to start R again.
Uwe Ligges
On 12.05.2011 11:09, Bazman76 wrote:
Hi there,
I am a relatively new user; I only downloaded R about a week ago.
I was getting along fine, but last night I tried to select 'save workspace'.
Since then R
On May 11, 2011, at 11:17 PM, MikeK wrote:
I am also trying to fit data to a beta distribution.
In Ang and Tang, Probability Concepts in Engineering, 2nd Ed., page
127-9,
they describe a variant of a beta distribution with additional parameters
beyond the standard beta distribution,
Hi all,
I have a point data set (SHP) with coordinates and a attribute (i.e. type of
point).
These points are scattered around a fairly big area. What I would like to do is
find a sub-area where the density of points combined with the diversity of
types is greatest.
Does
? subset day = x time y | time z
--- On Thu, 5/12/11, hwright heather.wri...@maine.edu wrote:
From: hwright heather.wri...@maine.edu
Subject: Re: [R] How to extract information from the following dataset?
To: r-help@r-project.org
Received: Thursday, May 12, 2011, 6:18 AM
I have the
OK, I did a search for the files and got:
.Rdata, which is 206KB
Canada.Rdata, which is 3kB
If I click on .Rdata I get the crash.
If I click on Canada.Rdata the system starts?
Also, they are stored in different places:
.Rdata is in My Documents
Canada.RData is in My Documents\vars\vars\data
I
Dear R helpers,
I am raising one query regarding this Binomial thread with the sole intention
of learning something more, as I understand the R forum is an ocean of knowledge.
I was going through all the responses, but noticed that the original query was
about generating Binomial random numbers while
So what was the final verdict on this discussion? I kind of
lost track; if anyone has a minute, please summarize and critique my summary below.
Apparently there were two issues, the comparison between R and Stata
was one issue and the optimum solution another. As I understand it,
there was some
Clearly, I don't understand what order() is doing and, as usual, the help for
order seems to only confuse me more. For some reason I just don't follow the
examples there. I must be missing something about the data frame sort there, but
what?
I originally wanted to reverse-order my data frame df1
On 05/12/2011 08:32 AM, John Kane wrote:
Clearly, I don't understand what order() is doing and, as usual, the help for
order seems to only confuse me more. For some reason I just don't follow the
examples there. I must be missing something about the data frame sort there, but
what?
I originally
Try
df1[order(-df1[,2]),]
The minus inside order() negates the values of that column (in this case
column 2), so the rows come out in decreasing order. See ?order.
HTH.
Nick Sabbe
--
ping: nick.sa...@ugent.be
link: http://biomath.ugent.be
wink: A1.056, Coupure Links 653, 9000 Gent
ring: 09/264.59.36
-- Do Not Disapprove
-Original
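A tiny worked version of the advice above, with a made-up data frame:

```r
# Toy data frame; column 2 (b) is numeric, so both forms below agree.
df1 <- data.frame(a = c("x", "y", "z"), b = c(2, 9, 5))
df1[order(df1[, 2], decreasing = TRUE), ]  # rows sorted by column 2, largest first
df1[order(-df1[, 2]), ]                    # same result for a numeric column
```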
On 12.05.2011 13:19, Sarah Sanchez wrote:
Dear R helpers,
I am raising one query regarding this Binomial thread with the sole intention
of learning something more, as I understand the R forum is an ocean of knowledge.
I was going through all the responses, but noticed that the original query was
Ah, this never would have occurred to me. It's rather obvious now but of
course I'll forget it again. Note to self: put it in the cribsheet.
Thanks very much
--- On Thu, 5/12/11, Nick Sabbe nick.sa...@ugent.be wrote:
From: Nick Sabbe nick.sa...@ugent.be
Subject: RE: [R] Simple order()
Argh. I knew it was at least partly obvious. I never have been able to read
the order() help page and understand what it is saying.
Thanks very much.
By the way, to me it is counter-intuitive that the command is
df1[order(df1[,2],decreasing=TRUE),]
For some reason I keep expecting it
On May 12, 2011, at 8:09 AM, John Kane wrote:
Argh. I knew it was at least partly obvious. I never have been able to read
the order() help page and understand what it is saying.
Thanks very much.
By the way, to me it is counter-intuitive that the command is
On Wed, 2011-05-11 at 16:11 -0700, Shi, Tao wrote:
Hi all,
I found that the two different versions of survival packages, namely 2.36-5
vs. 2.36-8 or later, give different results for coxph function. Please see
below and the data is attached. The second one was done on Linux, but
I have a combined date and time. I would like to separate them out into two
columns so I can do things such as take the mean by time across all dates.
meas <- runif(435)
nTime <- seq(1303975800, 1304757000, 1800)
nDateT <- as.POSIXct(nTime, origin = "1970-01-01")
mat1 <- cbind(nDateT, meas)
means1 <-
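One possible approach (a sketch under the assumption that the goal is a mean of meas per time-of-day across dates):

```r
# Rebuild the example, then split the POSIXct values into a date column
# and a time-of-day column before aggregating by time of day.
meas   <- runif(435)
nTime  <- seq(1303975800, 1304757000, 1800)
nDateT <- as.POSIXct(nTime, origin = "1970-01-01", tz = "UTC")
df <- data.frame(date = as.Date(nDateT),
                 time = format(nDateT, "%H:%M"),
                 meas = meas)
means_by_time <- aggregate(meas ~ time, data = df, FUN = mean)
head(means_by_time)
```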
This is not very informative. What exactly is crashing? What is your
sessionInfo() output?
On Thu, May 12, 2011 at 7:57 AM, Bazman76 h_a_patie...@hotmail.com wrote:
OK, I did a search for the files and got:
.Rdata which is 206KB
Canada.Rdata which is 3kB
If I click on .Rdata I get the crash.
thanks for reading the manual for me :X
2011/5/12 Prof Brian Ripley rip...@stats.ox.ac.uk:
On Wed, 11 May 2011, George Locke wrote:
Hi,
I am using mtext instead of the ylab argument in some plots because I
want to move it away from the numbers on the axis. However, the text
on the X axis,
I was wondering whether it would be possible to make a method for
data.frame with sort().
I think it would be more intuitive than using the complex construction
of df[order(df$a),]
Is there any reason not to make it?
Ivan
On 5/12/2011 15:40, Marc Schwartz wrote:
On May 12, 2011, at 8:09
On 2011-05-12 07:16, George Locke wrote:
thanks for reading the manual for me :X
For a bit more reading, you could check out ?title.
You could replace your mtext() calls with
title(ylab='Y axis', cex.lab=1.5, line=4, font.lab=2)
Peter Ehlers
2011/5/12 Prof Brian
Hi Sara,
As the help page for anova.lm says,
Specifying a single object gives a sequential analysis of variance table.
That is most likely also the answer to your second question.
The anova function can be used to compare nested models, and this
provides the flexibility to test arbitrary
anova uses sequential sums of squares (type 1); summary uses adjusted sums of
squares (type 3).
Take for example the first line of each output. In summary this tests
whether vole1 is needed ASSUMING volelag and year are already in the model
(the conclusion would then be: it isn't needed, p=.89). Whereas
Thanks a lot sir.
Regards
Sarah
--- On Thu, 5/12/11, Alexander Engelhardt a...@chaotic-neutral.de wrote:
From: Alexander Engelhardt a...@chaotic-neutral.de
Subject: Re: [R] Binomial
To: Sarah Sanchez sarah_sanche...@yahoo.com
Cc: David Winsemius dwinsem...@comcast.net, r-help@r-project.org
Here is one I wrote for the raster package. It searches a raster layer for
NA's and takes the median of the number of non NA adjacent cells determined
by neighbor count. You could turn your matrix into a raster to make it work
or change the code.
Hope you find it useful, Robert
neighbor.filter <-
Hi!
I have a data set of 86 values that are non-normally distributed (counts).
The median value is 10. I want to get an estimate of the 95% confidence
interval for this median value.
I tried to use a one-sample Wilcoxon test:
wilcox.test(Comps,mu=10,conf.int=TRUE)
and got the following
#subject: type III sum of squares - anova() Anova() AnovaM()
#R-version: 2.12.2
#Hello everyone,
#I am currently evaluating experimental data from a two-factor
experiment. To illustrate my problem I will use the following #dummy
dataset: Factor T1 has 3 levels (A,B,C) and factor T2 has 2
levels
Hello,
How can I scale my time series in a way that 90% of the data is in the
-0.9 / +0.9 range?
My approach is to first build a clean vector without those 10% far
away from the mean:
require(outliers)
y <- rep(c(1,1,1,1,1,9), 10)
yc <- y
ycc <- length(y) * 0.1
for (j in 1:ycc)
{
cat("Remove", j)
To run RStudio as root on Ubuntu you would just do: sudo rstudio
The packages in /usr/lib/R/library are the ones that came with the base
install of R.
J.J. Allaire
--
View this message in context:
http://r.789695.n4.nabble.com/package-update-tp3507479p3517539.html
Sent from the R help
I believe you can in this sense: use model.matrix to create X for
glmnet(X,y,...).
However, when dropping variables this will drop the indicators individually,
not per factor, which may not be what you are looking for.
Good luck,
David Katz
Axel Urbiz wrote:
Hi,
Is it possible to include
http://r.789695.n4.nabble.com/file/n3517669/R_crash.jpg
OK, when I click on the .RData file I get the screen above. Also, when I start
R from the desktop icon or select it from Programs, I get the same
result.
The warning message is in focus and I cannot move the focus to the GUI.
When I
Is it possible to get R to report the line number of an error when a script
is called with source()? I found the following post from 2009, but it's not
clear to me if this ever made it into the release version:
ws wrote:
* Is there a way to have R return the line number in a script when it errors
Hi Fabian,
You my find my discussion of types of SS helpful. My website has
been down for some time, but you can retrieve it from
http://psychology.okstate.edu/faculty/jgrice/psyc5314/SS_types.pdf
among other places.
Best,
Ista
On Thu, May 12, 2011 at 10:33 AM, Fabian fabian_ro...@gmx.de wrote:
On 12/05/2011 11:02 AM, Elliot Joel Bernstein wrote:
Is it possible to get R to report the line number of an error when a script
is called with source()? I found the following post from 2009, but it's not
clear to me if this ever made it into the release version:
It does so for parse errors.
With data.table, the following is routine :
DT[order(a)] # ascending
DT[order(-a)] # descending, if a is numeric
DT[a>5,sum(z),by=c][order(-V1)] # sum of z grouped by c, just where a>5,
then show me the largest first
DT[order(-a,b)] # order by a descending then by b ascending, if a and b are
Schatzi adele_thompson at cargill.com writes:
I have a combined date and time. I would like to separate them out into two
columns so I can do things such as take the mean by time across all dates.
meas <- runif(435)
nTime <- seq(1303975800, 1304757000, 1800)
nDateT <- as.POSIXct(nTime,
Thanks Matthew,
I had data.table installed but totally forgot about it.
I've only used it once or twice and, IIRC, that was last year. I remember
thinking at the time that it was a very handy package but lack of need for this
sort of thing let me forget it.
--- On Thu, 5/12/11, Matthew Dowle
I have some suggestions inline below. My biggest suggestion would be
to read the help files that came with R, especially the section
Invoking R in An Introduction to R.
On Thu, May 12, 2011 at 10:24 AM, Bazman76 h_a_patie...@hotmail.com wrote:
On May 12, 2011, at 10:24 AM, Bazman76 wrote:
http://r.789695.n4.nabble.com/file/n3517669/R_crash.jpg
OK, when I click on the .RData file I get the screen above. Also, when
I start R from the desktop icon or select it from Programs, I get the same
result.
Why do you even have this file
That is wonderful. Thank you.
Adele
Ken Takagi wrote:
Schatzi adele_thompson at cargill.com writes:
I have a combined date and time. I would like to separate them out into
two
columns so I can do things such as take the mean by time across all
dates.
meas <- runif(435)
Hi all,
I've searched this problem and still can't understand my results, so here
goes:
I have some time series I've imported from Excel, with the dates in text
format. The data was imported with the RODBC sqlQuery() function.
I have these dates:
adates
[1] "01/2008" "02/2008" "03/2008"
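Assuming the strings really are "month/year" text, one base-R sketch converts them to Date objects by pinning a day of month:

```r
# Text dates as imported; pin day 01 so as.Date can parse them.
adates <- c("01/2008", "02/2008", "03/2008")
as.Date(paste0("01/", adates), format = "%d/%m/%Y")
# zoo::as.yearmon(adates, "%m/%Y") is another common route
```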
I have a question about log2 transformation and performing the mean on log2 data. I
am doing analysis of ELISA data. The OD values and the concentration
values for the standards were log2 transformed before fitting the lm. The
OD values for samples were log2 transformed and the coefficients of the lm were
Hello!
I have a problem with mediation analysis. I can do it with the function
mediate when I have one mediator. But how can I do it if I have one
independent variable and one dependent variable but 4 mediators? I
have tried the function mediations, but it doesn't work. If I use mediate 4
times, each
My day for dumb questions. How do I increase the type size in the Rgui console
in Windows? (R-2.13.0, Windows 7)
It looked to me that I just needed to change the font spec in Rconsole but that
does not seem to be working.
The R FAQ for Windows has a reference in Q3.4 to changing fonts, (Q5.2),
On May 12, 2011, at 15:30 , Paul Chatfield wrote:
anova uses sequential sums of squares (type 1),
Yes.
summary adjusted sums of
squares (type 3)
No. Type III SS is a considerably stranger beast.
summary() looks at the s.e. of individual coefficients. For 1 DF effects, this
is often
Greetings R world,
I know some version of this question has been asked before, but I need
to save the output of a loop into a data frame, to eventually be written to a
Postgres database with dbWriteTable. Some background: I have developed
classification models to help identify problem
Contrary to the commonly held assumption, the Wilcoxon test does not deal with
medians in general.
There are some specific cases/assumptions where the test/interval would apply
to the median; if I remember correctly, the assumptions include that the
population distribution is symmetric and the
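Given those caveats, a nonparametric bootstrap percentile interval is one commonly suggested alternative (my sketch; `Comps` here is simulated stand-in data, not the poster's 86 counts):

```r
# Bootstrap percentile CI for the median of count data.
set.seed(1)
Comps <- rpois(86, lambda = 10)  # hypothetical counts with median near 10
boot_medians <- replicate(2000, median(sample(Comps, replace = TRUE)))
quantile(boot_medians, c(0.025, 0.975))  # rough 95% CI for the median
```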
On 12-May-11 15:15:00, 1Rnwb wrote:
I have a question about log2 transformation and performing the mean
on log2 data. I am doing analysis of ELISA data. The OD values
and the concentration values for the standards were log2
transformed before fitting the lm. The OD values for samples
were log2
Hi
I have four groups
y1=c(1.214,1.180,1.199)
y2=c(1.614,1.710,1.867,1.479)
y3=c(1.361,1.270,1.375,1.299)
y4=c(1.459,1.335)
Is there a function that can give me the length of each, like the made-up
example below?
function(length(y1:y2)
[1] 3 4 4 2
require(plyr)
laply(list(y1, y2, y3, y4), length)
Scott
On Thursday, May 12, 2011 at 11:50 AM, Asan Ramzan wrote:
Hi
I have four groups
y1=c(1.214,1.180,1.199)
y2=c(1.614,1.710,1.867,1.479)
y3=c(1.361,1.270,1.375,1.299)
y4=c(1.459,1.335)
Is there a function that can give me the length
sapply...
y1=c(1.214,1.180,1.199)
y2=c(1.614,1.710,1.867,1.479)
y3=c(1.361,1.270,1.375,1.299)
y4=c(1.459,1.335)
sapply(list(y1,y2,y3,y4), length)
[1] 3 4 4 2
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Asan Ramzan
Sent:
I will read the intro to R.
When closing I got a pop-up asking me if I wanted to save the workspace; I
just clicked yes.
Here is what I got:
load("C:/Documents and Settings/Hugh/My Documents/vars/vars/data")
Error in readChar(con, 5L, useBytes = TRUE) : cannot open the connection
In addition:
will delete it, just wanted to try and sort out the bug
--
View this message in context:
http://r.789695.n4.nabble.com/R-won-t-start-keeps-crashing-tp3516829p3518036.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@r-project.org
Dear List,
Is there an automated way to use the survival package to generate survival
rate estimates and their standard errors? To be clear, *not *the
survivorship estimates (which are cumulative), but the survival *rate *
estimates...
Thank you in advance for any help.
Best,
Brian
One simple way:
Run R (the gui version)
Click on the Edit menu
Click on the GUI Preferences item.
Select the font, size, style, colors, etc. that you want. If you click on Save
then these become the new default. If you click on Apply, but don't save then
they will last that session but not be
On 12.05.2011 18:25, John Kane wrote:
My day for dumb questions. How do I increase the type size in the Rgui console
in Windows? (R-2.13.0, Windows 7)
It looked to me that I just needed to change the font spec in Rconsole but that
does not seem to be working.
The R FAQ for Windows has a
On 9 May 2011 at 12:57, Uwe Ligges wrote:
|
|
| On 08.05.2011 19:54, eric wrote:
| I tried to update my packages using update.packages()
|
| I got the following message:
|
| The downloaded packages are in
| ‘/tmp/RtmpyDYdTX/downloaded_packages’
| Warning in
On 12.05.2011 17:40, Assu wrote:
Hi all,
I've searched this problem and still can't understand my results, so here
goes:
I have some time series I've imported from Excel, with the dates in text
format. The data was imported with the RODBC sqlQuery() function.
I have these dates:
Dear R-helpers,
I'm doing a bivariate analysis with two factors, both with relatively
many levels:
1. clustering, a factor with 35 levels
2. country, a factor with 24 levels
n = 12,855
my.fit <- multinom(clustering ~ country, maxit=300)
converges after 280 iterations.
I would like to get CIs
On Thu, May 12, 2011 at 10:00 AM, Ledon, Alain alain.le...@ally.com wrote:
sapply...
y1=c(1.214,1.180,1.199)
y2=c(1.614,1.710,1.867,1.479)
y3=c(1.361,1.270,1.375,1.299)
y4=c(1.459,1.335)
sapply(list(y1,y2,y3,y4), length)
[1] 3 4 4 2
Or, if you don't want to name each object individually:
Definitely. I edited the one in Program Files.
I think I saw a ref to the home file but it did not sink in.
Thanks
--- On Thu, 5/12/11, Uwe Ligges lig...@statistik.tu-dortmund.de wrote:
From: Uwe Ligges lig...@statistik.tu-dortmund.de
Subject: Re: [R] Change font size in Windows
To:
On May 12, 2011, at 18:33 , Greg Snow wrote:
Contrary to the commonly held assumption, the Wilcoxon test does not deal
with medians in general.
There are some specific cases/assumptions where the test/interval would apply
to the median; if I remember correctly, the assumptions include
John -
In your example, the misclassified observations (as defined by
your predict.function) will be
kyphosis[kyphosis$Kyphosis == 'absent' & prediction[,1] != 1,]
so you could start from there.
- Phil Spector
I looked at that yesterday and totally missed the font settings! I'm blaming
the new glasses.
Thank you. I'll probably do the Rconsole change, but it's nice to know about this
one if I'm using R on another machine. That way I cannot mess up someone
else's setup.
--- On Thu, 5/12/11, Greg
Hi all! I need to do something really simple using do.call.
If I want to call the mean function inside do.call, how do I apply the
condition na.rm=TRUE?
So, I use do.call(mean, list(x)) where x is my data. This works fine if
there are no NAs.
Thanks,
John
?do.call
Second argument is a list of arguments to pass. Try do.call(mean,
list(x, na.rm = T))
On Thu, May 12, 2011 at 1:57 PM, John Kerpel john.ker...@gmail.com wrote:
Hi all! I need to do something really simple using do.call.
If I want to call the mean function inside do.call, how do I
On May 12, 2011, at 12:40 PM, Brian McLoone wrote:
Dear List,
Is there an automated way to use the survival package to generate
survival
rate estimates and their standard errors? To be clear, *not *the
survivorship estimates (which are cumulative), but the survival
*rate *
estimates...
Dear R Community,
I am pleased to announce the release of a new package called 'mvmeta', now
available on CRAN (version 0.2.0).
The package mvmeta provides some functions to perform fixed and random-effects
multivariate meta-analysis and meta-regression. This modelling framework is
Hi:
Try this:
library(sos) # install first if you don't have it already
findFn('mediation')
You should find at least a half dozen packages from which to choose,
at least three of which appear to be devoted to mediation analysis.
HTH,
Dennis
On Thu, May 12, 2011 at 8:56 AM, Mervi Virtanen
Hi
I am attempting to use plsr, which is part of the pls package in R. I
am conducting analysis on datasets to identify which proteins/peptides are
responsible for the variance between sample groups (biomarker spotting) in a
multivariate fashion.
I have a dataset in R called
All,
When I use gdata::read.xls to read in an excel file it seems to round
the data to three decimal places and also converts the dates to factors.
Does anyone know how to 1) get more precision in the numeric data and 2)
how to prevent the dates from being converted to levels or factors? I
Assistance R,
When trying to read my data, already set up in txt format, into R, I get the
following error:
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings,
:
line 1 did not have 10 elements
I would like to know how to remedy this error so I can proceed with my
analysis,
I have a vector with a long list of sentences that contain integers. I
would like to extract the integers in a manner such that they are
separate and manipulable. For example:
x[i] <- "sally has 20 dollars in her pocket and 3 marbles"
x[i+1] <- "30 days ago john had a 400k house"
all sentences are
Dear R-users,
I have a question on the logical argument scale in the morris-function
from the sensitivity package. Should it be set to TRUE or FALSE?
Thanks,
Chris
Try this:
library(gsubfn)
strapply(x, "\\d+", as.numeric, simplify = rbind)
On Thu, May 12, 2011 at 3:06 PM, Alon Honig honey...@gmail.com wrote:
I have a vector with a long list of sentences that contain integers. I
would like to extract the integers in a manner such that they are
separate and
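A base-R alternative to the gsubfn approach, for the record:

```r
# Extract every run of digits from each sentence with regmatches(),
# then convert the matches to numeric.
x <- c("sally has 20 dollars in her pocket and 3 marbles",
       "30 days ago john had a 400k house")
lapply(regmatches(x, gregexpr("[0-9]+", x)), as.numeric)
# first element c(20, 3), second c(30, 400)
```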
On 12.05.2011 20:14, Carlosmagno wrote:
Assistance R,
When trying to read my data, already set up in txt format, into R, I get the
following error:
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines, na.strings,
:
line 1 did not have 10 elements
I would like to know how to remedy this
As Alexander Engelhardt says, we need more information.
Please give us the code you are using and a sample of the data.
However, one thing you might want to do is check what the separator for the
data is. You may be reading something like tab-delimited data when you think it is
comma
Hello,
I have a rather complex problem... I will have to explain everything in
detail because I cannot solve it by myself... I just ran out of ideas. So
here is what I want to do:
I take quotes of two indices - SP500 and DJ. And my first aim is to
estimate coefficients of the DCC-GARCH model for
Hi,
On Wed, May 11, 2011 at 5:44 PM, Marius Hofert m_hof...@web.de wrote:
Dear expeRts,
is it possible to carry out calculations between different foreach() calls?
As for nested loops, you want to carry out calculations not depending on the
inner loop only once, and not for each iteration of
Dear R-list helpers:
I am trying to run an ordinal logistic regression using the lmer function (I
think that is the correct function, although I haven't tested it out yet); it is
going to be a mixed model with the second level as random. Instead of the regular
estimate results, I need to get the marginal
I have a dataset where I have missing times (11:00 and 16:00). I would like
the output to include the missing times, so that the final time vector looks
like realt and has the previous time's value. E.g., if meas at time 15:30 is
0.45, then the meas for time 16:00 will also be 0.45.
meas are the
On May 12, 2011, at 2:19 PM, David Winsemius wrote:
On May 12, 2011, at 12:40 PM, Brian McLoone wrote:
Dear List,
Is there an automated way to use the survival package to generate
survival
rate estimates and their standard errors? To be clear, *not *the
survivorship estimates (which
On May 12, 2011, at 4:33 PM, Schatzi wrote:
I have a dataset where I have missing times (11:00 and 16:00). I
would like
the outputs to include the missing time so that the final time
vector looks
like realt and has the previous time's value. Ex. If meas at time
15:30 is
0.45, then the
... But beware: Last observation carried forward is a widely used but
notoriously bad (biased) way to impute missing values; and, of course,
inference based on such single imputation is bogus (how bogus depends
on how much imputation, among other things, of course).
Unfortunately, dealing with
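For completeness, the carry-forward step itself is a one-liner if the zoo package is available (keeping in mind the bias warning above):

```r
# Last observation carried forward with zoo (assumes zoo is installed).
library(zoo)
x <- c(0.45, NA, NA, 0.60, NA)
na.locf(x)  # each NA replaced by the last preceding observed value
```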
I am still working on the weights problem. If the animals do not eat (like
after sunset), then no new feed weight will be calculated and no new row will
be entered. Thus, if I just use the previous value, it should be correct for
how much cumulative feed was eaten that day up to that point.
I
Having poked at the problem a couple more times, it appears my issue is that the
object I save within the loop is not available after the function ends. I
have no idea why it is acting in this manner.
library(rpart)
# grow tree
fit <- rpart(Kyphosis ~ Age + Number + Start,
method="class",
Hi all,
I hope you don't mind the slightly off topic email, but I'm going to
be teaching an R development master class in San Francisco on June
8-9. The basic idea of the class is to help you write better code,
focused on the mantra of do not repeat yourself. In day one you will
learn powerful