Dear R-list
I am having some trouble drawing a bar-graph with two groups, both of
which are stacked.
Here are my data in the dput format:
cs.not.log.bp <- (
structure(c(168, 69, 16, 69, 41, 6, 148, 6, 5, 4, 7, 4, 4, 2, 7, 2, 4,
2, 4, 2, 1, 0, 2, 0), .Dim = c(4L, 6L), .Dimnames = list(
Hello,
I have a large number of time series, which needs to be transformed by
log or difference. Some of them are just processed by level (LV) without
any transformation. For that purpose, I produce a text file (.csv or .xls)
as follows:
DLN DLNDLN LV LV LV...
How can I read the
Dear expeRts,
Why does na.blank=TRUE not replace the NA's in the following LaTeX table?
x <- matrix(1:72, ncol=4, nrow=8)
colnames(x) <- c("gr1.sgr1", "gr1.sgr2", "gr2.sgr1", "gr2.sgr2")
rn <- apply(expand.grid(beta=c(0.25, 0.75), n=c(100, 500), d=c(10, 100))[,
3:1], 2, rmNames)
x <- cbind(rn, x) # append
Ben Bolker bbolker at gmail.com writes:
Ted.Harding at wlandres.net writes:
In addition to these options, there is also a derivative-free
box-constrained optimizer (bobyqa) in the 'minqa' package (and in
an optim-like wrapper via the optimx package), and
a box-constrained Nelder-Mead
Hi
My data looks like this
dat=
X1 X2 Group
84 44 1
86 29 1
94 77 1
78 87 2
94 78 2
60 31 2
I use the formula
form = cbind(X1,X2) ~ Group
poLCA(form,dat)
But I'm getting an error stating that
ALERT: some manifest variables
I have done the usual estimation of GARCH models, applied to my historical
dataset (commodities futures) with a maximum likelihood function and
selected the best model on the basis of information criteria such as Akaike
and Bayes.
Can somebody please explain to me the calibration scheme for a GARCH
Hey all who have responded to this post. I am a newbie to ANOVA analysis in
R, and let me tell you- resources for us learners are scant, horrible,
unclear, imprecise.. in other words.. the worst ever. So advice like go
look it up in your classical textbook or on google is not helpful at all.
I am
Hi I am trying to use depmixS4 package. Based on the documentation, it seems
that depmix allows one to fit an HMM model based on a training data with
time-varying co-variates. However, I did not find any routines which can
help test the accuracy on the fitted HMM model on out-of-sample data.
Can
Is anybody planning to write an extensive tutorial on the ff package? I am
finding the web links highly inadequate.
Regards,
Indrajit
R-help@r-project.org mailing list
Hey folks
I'm sorry for bringing what must be a very simple question to R-help,
but after some research I haven't been able to find a solution to my
problem.
Suppose I create a simple factor:
[code]
x <- c("A","B","B","C","A")
x
[1] "A" "B" "B" "C" "A"
x <- as.factor(x)
x
[1] A B B C A
Levels: A B C
[/code]
Now,
I have two kinds of list,
for example, one is like
t[[1]]=
1 6
2 7
3 8
4 9
5 10
...
t[[731]]
the other is
k[[1]]= 9 10
...
k[[731]]
I want to have a new list,like x
x[[1]]=
(1-9)/9   (6-10)/10
(2-9)/9   (7-10)/10
(3-9)/9   (8-10)/10
(4-9)/9   (9-10)/10
(5-9)/9   (10-10)/10
...
x[[731]]
How
Quick question from a new user to R,
How do I extract my solution of a median polish matrix from R to a spreadsheet
file such as .csv?
From my reading of my guide book (R for SPSS and SAS users version 2), I
deduce that exporting a file to .csv would look like the following:
On 05/02/2012 10:47 AM, Eve Proper wrote:
I am a raw novice to R, playing around with a mini .csv dataset created in
Excel. I can read it in and the data looks OK in Excel and upon initial
inspection in R:
hikes <- read.csv("/Users/eproper/Desktop/hikes.csv", header=TRUE)
print(hikes)
does exactly
Hi,
I am working on data analysis. I need to plot a stacked histogram for two
files.
File1:
1
2
3
3
4
4
File2:
4
5
6
6
7
7
7
How can I plot them on the same graph?
Thanks
Thank you very much.
On 05/02/2012 04:18 PM, Nicola Van Wilgen wrote:
Dear R-list
I am having some trouble drawing a bar-graph with two groups, both of
which are stacked.
Here are my data in the dput format:
cs.not.log.bp <- (
structure(c(168, 69, 16, 69, 41, 6, 148, 6, 5, 4, 7, 4, 4, 2, 7, 2, 4,
2, 4, 2, 1,
On 05/02/2012 02:58 PM, Manish Gupta wrote:
Hi,
I am working on data analysis. I need to plot a stacked histogram for two
files.
File1:
1
2
3
3
4
4
File2:
4
5
6
6
7
7
7
How can I plot them on the same graph?
Hi Manish,
Have a look at the third example for the barp function (plotrix).
Jim
Hi,
I want to plot empirical cumulative distribution functions for two variables in
one plot. To better visualize the differences between the two cumulative curves
I'd like to log-scale the axis.
So far I found 3 possible functions to plot ecdf:
1) ecdf() from the package 'stats'. I don't know how to
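Not from the original thread, but one base-graphics way to draw two ECDFs on a log-scaled x-axis can be sketched like this (the lognormal sample data and the colors are made up):

```r
# simulated positive data so a log axis makes sense
set.seed(1)
a <- rlnorm(100)
b <- rlnorm(100, meanlog = 1)
# step curves of the empirical CDFs on a log-scaled x-axis
plot(sort(a), ppoints(a), type = "s", log = "x", xlim = range(a, b),
     xlab = "value (log scale)", ylab = "empirical CDF")
lines(sort(b), ppoints(b), type = "s", col = "red")
```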
If the symbols are separated by spaces, try:
scan(yourFile, what = '')
Sent from my iPad
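For instance (the codes below are taken from the question; `text =` stands in for the hypothetical file):

```r
# read whitespace-separated symbols as a character vector
codes <- scan(text = "DLN DLNDLN LV LV LV", what = "")
codes   # a character vector: "DLN", "DLNDLN", "LV", ...
```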
On May 2, 2012, at 2:36, jpm miao miao...@gmail.com wrote:
Hello,
I have a large number of time series, which need to be transformed by
log or difference. Some of them are just processed by level
My experience is the opposite -- the web is filled with introductory
statistics material, some of it quite good. If you google for
introduction to anova textbook the first hit seems to give exactly
what you are asking for. The fifth one down the list also looks good
On 02-05-2012, at 07:22, Kaushik Krishnan wrote:
Hey folks
I'm sorry for bringing what must be a very simple question to R-help,
but after some research I haven't been able to find a solution to my
problem.
Suppose I create a simple factor:
[code]
x <- c("A","B","B","C","A")
x
[1] "A" "B" "B" "C" "A"
x <-
Please read the posting guide for future questions.
I presume you mean using the vegan package? If so, then see this blog
post of mine which shows how to do something similar:
http://wp.me/pZRQ9-73
If you post more details and an example I will help further if the blog
post is not sufficient
I've run into this situation and have been able to prevent problems by using
lme4::VarCorr(...)
Benjamin Nutter | Biostatistician | Quantitative Health Sciences
Cleveland Clinic | 9500 Euclid Ave. | Cleveland, OH 44195 | (216)
445-1365
-Original Message-
From:
Thanks for this good idea !
Arnaud
2012/5/1 Ted Harding ted.hard...@wlandres.net
On 01-May-2012 19:58:41 Arnaud Mosnier wrote:
Dear UseRs,
Is there a way to define the lower-upper bounds for parameters
fitted by optim using the Nelder-Mead method ?
Thanks,
Arnaud
The
Dear R Users,
I have an unbalanced panel with (on average) approx. 100 individuals over
1370 time intervals (with individual time series of different lengths,
varying between 60 and 1370 time intervals). I use the following model:
res1 <- plm(x ~ c + d + e, data = pdata_frame, effect = "twoways",
On May 1, 2012, at 22:01 , meredith wrote:
I have two models, controlled by dummy variables to see if the models can be
combined into one model with similar intercepts and slopes. Has anyone tried
to conduct this type of test in R. I am utilizing the econometric idea of
hypothesis testing
Book title: Data Mining Applications with R
Publisher: Elsevier
URL: http://www.rdatamining.com/books/book2
Due date: 2nd round of chapter proposals due by 31 May 2012
Potential authors are expected to submit a 1-2 page manuscript
proposal clearly explaining the mission and concerns of the
Greetings R users,
My interest in the Q2cum score comes my endeavor to replicate SIMCAP PLS-DA
analysis in R. I use the exact same dataset. After doing the analysis in
R, I can get the exact same R2Ycum. However, the Q2cum is significantly
off. Adding the Q2cum of the 1st and 2nd component
Hi
you maybe can use mapply
If you have 2 lists
xl <- list(x, x+5)
xl
[[1]]
[1] 1 2 3 4 5
[[2]]
[1] 6 7 8 9 10
yl <- list(9,10)
yl
[[1]]
[1] 9
[[2]]
[1] 10
and this function
fff <- function(xl,yl) (xl-yl)/yl
mapply(fff, xl, yl)
[,1] [,2]
[1,] -0.889 -0.4
[2,] -0.778
L.s.
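A self-contained version of the sketch above (assuming, as the printed output suggests, that x is 1:5):

```r
x <- 1:5
xl <- list(x, x + 5)
yl <- list(9, 10)
fff <- function(xl, yl) (xl - yl) / yl
res <- mapply(fff, xl, yl)   # 5 x 2 matrix: column i is (xl[[i]] - yl[[i]]) / yl[[i]]
round(res, 3)                # first row: -0.889 -0.4
```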
I want to test the proportional subdistribution hazards assumption for several
competing risk regression models I fitted using the crr()-function
(cmprsk-package). I am able to plot the Schoenfeld-type residuals against
failure time, but in some cases I doubt whether the assumption holds
Hi
I've been trying to convert numbers from an online temperature database
into dates and times that R recognizes. The problem is that the database has put
a T
between the numbers and R will not accept any conversions.
This is the format that it's in now:
1981-01-02T08:00
can anyone help?
Hi
I've been trying to convert numbers from an online temperature database
into dates and time that R recognizes. the problem is that the database has
put a T
between the numbers and R will not accept any conversions.
this is the format that it's in now
1981-01-02T08:00
can anyone help?
Hi, I've been trying to convert numbers from an online temperature database
into dates and time that R recognizes. I've tried as.Date, as.POSIXlt and
strptime the problem is that the database has put a T between the numbers and
R will not accept any conversions. currently it sees the date
Hello,
I'm looking for what I'm sure is a quick answer. I'm working with a data set
that looks like this:
  Week             Game.ID VTm VPts HTm HPts Differential HomeWin
1    1 NFL_20050908_OAK@NE OAK   20  NE   30           10   FALSE
2    1
Or look for A handbook of Statistical Analyses using R. (Everitt and Holhorn)
available on line in pdf format.
John Kane
Kingston ON Canada
-Original Message-
From: istaz...@gmail.com
Sent: Wed, 2 May 2012 07:01:22 -0400
To: sydney.ver...@gmail.com
Subject: Re: [R] How to read
I have a variable :
sim.var[,1]
Object of class SpatialPixelsDataFrame
Object of class SpatialPixels
Grid topology:
cellcentre.offset cellsize cells.dim
Xloc 0.3 0.0597
Yloc 0.1 0.05 117
SpatialPoints:
Xloc Yloc
[1,] 0.30 1.70
Deepak,
On Wed, May 2, 2012 at 7:53 AM, deepakw deepakwarr...@gmail.com wrote:
Hi I am trying to use depmixS4 package. Based on the documentation, it
seems
that depmix allows one to fit an HMM model based on a training data with
time-varying co-variates. However, I did not find any routines
Meredith:
You are clearly out of your depth. Get local help. R-help is, err,
an R help list, not a resource for remote statistical consulting.
Although, I admit, there is often some overlap.
-- Bert
On Wed, May 2, 2012 at 5:58 AM, peter dalgaard pda...@gmail.com wrote:
On May 1, 2012, at
On 02/05/2012 8:08 AM, marjolein post wrote:
Hi, I've been trying to convert numbers from an online temperature database
into dates and time that R recognizes. I've tried as.Date, as.POSIXlt and
strptime the problem is that the database has put a T between the numbers and
R will not accept
On Wed, May 2, 2012 at 4:34 AM, Jim Lemon j...@bitwrit.com.au wrote:
On 05/02/2012 10:47 AM, Eve Proper wrote:
I am a raw novice to R, playing around with a mini .csv dataset created in
Excel. I can read it in and the data looks OK in Excel and upon initial
inspection in R:
hikes <-
I think you are looking for something like
aggregate(cbind(VPts, HPts) ~ VTm + HTm, data = NFL, sum)
but you should look at the examples for ?aggregate to tweak it to what
you need.
Michael
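A runnable sketch with a made-up three-row stand-in for the NFL data frame:

```r
NFL <- data.frame(VTm  = c("OAK", "OAK", "KC"),
                  HTm  = c("NE", "NE", "NE"),
                  VPts = c(20, 13, 7),
                  HPts = c(30, 17, 27))
# total visitor/home points for each visitor-home pairing
aggregate(cbind(VPts, HPts) ~ VTm + HTm, data = NFL, sum)
```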
On Wed, May 2, 2012 at 7:45 AM, Daniel_55 serna.da...@gmail.com wrote:
Hello,
I'm looking for what
Quick and dirty solution is to use sub() to change the T to a space
and then use as.POSIXct as usual.
x <- "1981-01-02T08:00"
as.POSIXct(sub("T", " ", x), format = "%Y-%m-%d %H:%M")
but it does look to me like R can work around the T if you give a good
format argument:
as.POSIXct(x, format =
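Both routes can be sketched in full (the truncated call above presumably puts a literal T in the format string; the tz argument is an assumption added so the result is reproducible):

```r
x <- "1981-01-02T08:00"
# 1) match the literal T inside the format string
as.POSIXct(x, format = "%Y-%m-%dT%H:%M", tz = "UTC")
# 2) replace the T with a space and parse as usual
as.POSIXct(sub("T", " ", x), format = "%Y-%m-%d %H:%M", tz = "UTC")
# both give 1981-01-02 08:00:00 UTC
```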
Please don't triple post.
Michael
On Wed, May 2, 2012 at 8:08 AM, marjolein post mayo_j...@hotmail.com wrote:
Hi, I've been trying to convert numbers from an online temperature database
into dates and time that R recognizes. I've tried as.Date, as.POSIXlt and
strptime the problem is
Peter-
Maybe I have not articulated my problem clearly; I have had local help
with the statistical part and am just trying to figure out how to correctly program
this test. For clarity's sake, I have months worth of data, I want to
potentially combine those months into four, shall we say seasons,
On May 1, 2012, at 8:47 PM, Eve Proper wrote:
I am a raw novice to R, playing around with a mini .csv dataset
created in
Excel. I can read it in and the data looks OK in Excel and upon
initial
inspection in R:
hikes <- read.csv("/Users/eproper/Desktop/hikes.csv", header=TRUE)
print(hikes)
This might be more of a question for R-SIG-Finance and followup should
probably be there, but you might get a start with the rugarch package.
Michael
On Wed, May 2, 2012 at 4:13 AM, Ivette iva_mihayl...@mail.ru wrote:
I have done the usual estimation of GARCH models, applied to my historical
It is not clear what you mean. Can you supply some sample data? Have a look
at ?dput for a handy way to supply data.
John Kane
Kingston ON Canada
-Original Message-
From: lterle...@anadolu.edu.tr
Sent: Tue, 1 May 2012 17:55:25 +
To: r-help@r-project.org
Subject: [R] Arules
As has been answered several times on R-help ... the baseline hazard is
for a case with the mean value. It's not a meaningful case with all
factor variables. There can be no cases where fidelity3 has a
fractional value. You should be using predict() and survfit() to
display estimates for
write.csv() is definitely the right way to go, so you're on track.
What is str(medpolish)?
From the error message, it sounds like it's a function. If it is,
are you sure you don't mean write.csv(medpolish( xx ))?
Michael
On Wed, May 2, 2012 at 12:39 AM, Martin Raymond Lefebvre
mlefe...@uwo.ca
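The point about medpolish() returning a list (not a matrix) can be sketched as follows; the input matrix and the file name are made up:

```r
xx <- matrix(c(10, 12, 9, 14, 16, 13), nrow = 2, byrow = TRUE)
fit <- medpolish(xx, trace.iter = FALSE)   # a list: overall, row, col, residuals
# export the matrix-valued component, not the list itself
write.csv(fit$residuals, "medpolish_residuals.csv")
```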
On May 2, 2012, at 6:14 AM, Johannes Radinger wrote:
Hi,
I want to plot empirical cumulative distribution functions for two
variables in
one plot. For better visualizing the differences in the two
cumulative curves I'd like to log-scale the axis.
So far I found 3 possible functions to plot
Try something like this. Convert the vector to character and grab the first 10
characters then convert to a date.
aa <- as.factor("1981-01-02T08:00")
aa <- as.character(aa)
aa <- substr(aa, 1, 10)
class(b)
John Kane
Kingston ON Canada
-Original Message-
From: mayo_j...@hotmail.com
On May 2, 2012, at 15:48 , meredith wrote:
Peter-
Maybe I have not articulated my problem clearly; I have had local help
with the statistical part and am just trying to figure out how to correctly program
this test. For clarity's sake, I have months worth of data, I want to
potentially combine
Dear Group,
I am working with a large dataset where I need to select for each unique id
the unique lastpk row. Here is a sample subject:
id wtdt wt lastpk
64050256 2010-09-18 275 2010-09-16
64050256 2010-09-19 277 2010-09-18
64050256
Hi Folks,
I'm trying to get rgl.Sweave to produce plots with transparency.
However, it just seems to produce opaque plots when pdf is the output
type. Perhaps this is a known issue? I'll just use .png in the
meantime, but wanted to see about this, as I didn't see it in the
documentation (though
?tapply
?with is also useful here
as in (untested)
with(yourdataframe, tapply(lastpk, id, unique))
-- Bert
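A runnable sketch with a tiny made-up stand-in for the dataset:

```r
dat <- data.frame(id = c(64050256, 64050256, 64050257),
                  lastpk = c("2010-09-16", "2010-09-18", "2010-09-18"),
                  stringsAsFactors = FALSE)
# for each id, the distinct lastpk values
with(dat, tapply(lastpk, id, unique))
```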
On Wed, May 2, 2012 at 7:58 AM, Ayyappa Chaturvedula
ayyapp...@gmail.com wrote:
Dear Group,
I am working with a large dataset where I need to select for each unique id
the unique
Hi, I'm running a calculation in two ways. The first way is to employ
vectors and evaluate a function in one go. The second way is to break
down the function into pieces and combine the pieces to the final
answer.
Algebraically, they should give me the same result. But the final
vector differs
Walmes, Thank you so much!!!
I am still trying to understand all of your code but it works. I have changed
it a bit so that I get upper and lower limits for the error bar, and that the
origin starts at 0 so the negative values are plotted correctly.
barchart(Change~fTreat,groups=Process,change,
Your multistep approach corresponds to the following, which has one
more set of parentheses than you used
yy <- params[4] + (params[3] - params[4])/((1 + 10^(params[1]-xx))^params[2])
In R lingo, both of your approaches are vectorized and you probably won't find
a huge difference in speed
Dear R-helpers,
I have a number of point configurations representing skull shapes, but
some of them contain superfluous points. I want to write a loop in
which each configuration is plotted and I am asked to write the
numbers of points that are superfluous. However, I don't know how to
introduce
I think readline() will do what you want. It can display a message and take
user input, assigning it to a character value so you might need as.numeric()
Michael
On May 2, 2012, at 12:08 PM, Ondřej Mikula onmik...@gmail.com wrote:
Dear R-helpers,
I have a number of point configurations
On 02/05/2012 11:00 AM, Alexander Shenkin wrote:
Hi Folks,
I'm trying to get rgl.Sweave to produce plots with transparency.
However, it just seems to produce opaque plots when pdf is the output
type. Perhaps this is a known issue? I'll just use .png in the
meantime, but wanted to see about
You might start with par(ask=TRUE) and identify().
A reproducible example might get you actual code. Also, how do you
know they're superfluous? Perhaps that knowledge can be used to
automate identification.
Sarah
On Wed, May 2, 2012 at 12:08 PM, Ondřej Mikula onmik...@gmail.com wrote:
Dear
Hi,
In a data I have two predictors and one response variable. The response
variable is categorical and fixed. Now I want to choose which predictor
would better predict the response variable. Is there a statistical test for
that?
Best,
Jing
--
Jing Tang, PhD
Senior Researcher
On 01.05.2012 19:57, Heiko Neuhaus wrote:
Hi all,
I am trying to create a list of all variable/value combinations in
environment().
When a function with unset arguments is called, the method I have been
using fails with a missing argument error. However it should be
possible to simply skip
And now we have two entirely different interpretations of the question.
I think Ondřej needs to provide a more detailed explanation of the
problem and intended result.
Sarah
On Wed, May 2, 2012 at 12:23 PM, R. Michael Weylandt
michael.weyla...@gmail.com michael.weyla...@gmail.com wrote:
I
On 02/05/2012 12:26 PM, Duncan Murdoch wrote:
On 02/05/2012 11:00 AM, Alexander Shenkin wrote:
Hi Folks,
I'm trying to get rgl.Sweave to produce plots with transparency.
However, it just seems to produce opaque plots when pdf is the output
type. Perhaps this is a known issue? I'll
On 5/2/2012 11:40 AM, Duncan Murdoch wrote:
On 02/05/2012 12:26 PM, Duncan Murdoch wrote:
On 02/05/2012 11:00 AM, Alexander Shenkin wrote:
Hi Folks,
I'm trying to get rgl.Sweave to produce plots with transparency.
However, it just seems to produce opaque plots when pdf is the output
On 02.05.2012 02:55, Ulfa Hasanah wrote:
hi all, can you help me? index moran is very difficult for me, i have data n
neighbor as enclosure:
please help me to make the program to find the index moran value for each
variable, ... thanks very much
Which translates to:
please help me, and note that
-
On 02.05.2012 11:19, mpostje wrote:
Hi
I've been trying to convert numbers from an online temperature database
into dates and time that R recognizes. the problem is that the database has
put a T
between the numbers and R will not accept any conversions.
this is the format that it's in now
try this:
x <- read.table(text = "id wtdt wt lastpk
+ 64050256 2010-09-18 275 2010-09-16
+ 64050256 2010-09-19 277 2010-09-18
+ 64050256 2010-09-20 272 2010-09-18
+ 64050256 2010-09-21 277 2010-09-18", as.is = TRUE, header = TRUE)
first <-
Thanks a lot for your answer!
--
test1 <- function(a, b, c)
{
x <- as.list(environment())
print("hi from test1!")
test2(a = a, b = b, c = c)
You are trying to pass a, b, c here, and hence R tries to insert those
into the environment of test2 once it is called; you have
On Wed, May 02, 2012 at 11:42:27AM -0400, Rajarshi Guha wrote:
Hi, I'm running a calculation in two ways. The first way is to employ
vectors and evaluate a function in one go. The second way is to break
down the function into pieces and combine the pieces to the final
answer.
Algebraically,
Hi,
I'm trying to build a Forest Plot using the second and fourth columns in
the table (test.csv) below. My code is the following:
curated <- data.frame(test.csv)
tmp <- curated$coef
tmp1 <- curated$se_coef
plt <- metaplot(tmp, tmp1, xlim = c(-.45, .45))
I keep getting the following error at the
On 02/05/2012 12:59 PM, Heiko Neuhaus wrote:
Thanks a lot for your answer!
--
test1 <- function(a, b, c)
{
x <- as.list(environment())
print("hi from test1!")
test2(a = a, b = b, c = c)
You are trying to pass a, b, c here and hence R tries to insert those
Dear all:
Is there a way to add text to the margins or outer margins of a mosaic plot
using the vcd package? I understand the margins argument to mosaic, but I don't
know how to add text to that.
I'd like to add a caption to a plot. If possible, I'd like to know how to set
the font and size
Dear R-Helpers,
I'm working with immunoassay data and 5PL logistic model. I wanted to
experiment with different forms of weighting and parameter selection,
which is not possible in instrument software, so I turned to R.
I am using R 2.14.2 under Win7 64bit, and the 'nls' library to fit the
Plot the data. You're clearly overfitting.
(If you don't know what this means or why it causes the problems you
see, try a statistical help list or consult your local statistician).
-- Bert
On Wed, May 2, 2012 at 12:32 PM, Michal Figurski
figur...@mail.med.upenn.edu wrote:
Dear R-Helpers,
Thank you very much for your suggestion.
f <- function(a, b, c) {
  names <- ls(environment())  # get all the names
  result <- list()
  for (n in names) {
    if (!do.call(missing, list(as.name(n))))
      result[n] <- get(n)
  }
  result
}
I have already figured out a very similar solution using for/eval that
R-helpers:
What would be the absolute fastest way to make a large empty file (e.g.
filled with all zeroes) on disk, given a byte size and a given number
of empty values? I know I can use writeBin, but the object in
this case may be far too large to store in main memory. I'm asking
Look at the man page for dd (assuming you are on *nix)
A quick google will get you a command to try. I'm not at my desk or I would as
well.
Jeff
Jeffrey Ryan|Founder|jeffrey.r...@lemnica.com
www.lemnica.com
On May 2, 2012, at 5:23 PM, Jonathan Greenberg j...@illinois.edu
An R solution is:
allocateFile <- function(pathname, nbrOfBytes) {
  con <- file(pathname, open="wb");
  on.exit(close(con));
  seek(con, where=nbrOfBytes-1L, origin="start", rw="write");
  writeBin(as.raw(0), con=con);
  invisible(pathname);
} # allocateFile()
allocateFile("foo.bin", nbrOfBytes=985403)
On 12-05-02 5:20 PM, Heiko Neuhaus wrote:
Thank you very much for your suggestion.
f <- function(a, b, c) {
  names <- ls(environment())  # get all the names
  result <- list()
  for (n in names) {
    if (!do.call(missing, list(as.name(n))))
      result[n] <- get(n)
  }
  result
}
I have already figured out a very
Something like:
http://markus.revti.com/2007/06/creating-empty-file-with-specified-size/
Is one way I know of.
Jeff
Jeffrey Ryan|Founder|jeffrey.r...@lemnica.com
www.lemnica.com
On May 2, 2012, at 5:23 PM, Jonathan Greenberg j...@illinois.edu wrote:
R-helpers:
What
It works for me with your data:
dat <- read.table("/tmp/foo.txt", header=TRUE)
metaplot(dat$coef, dat$se_coef)
It has boxes of size zero for the point estimates, but that's because
you give the standard error as zero for the second estimate, which
implies all the other boxes should be infinitely smaller.
Addendum to my first post:
Since I wish to understand what plm does to my data, I tried to manually
calculate the demeaned values and use OLS. See below how far I got with the
Grunfeld data; formulas are based on Greene's Econometric Analysis.
Obviously, I am missing at least one important
Thanks everyone for your helpful responses. I looked at the csv file in a
text editor and saw no spaces or non-numerical characters (other than
periods as decimals) outside of the header. str() tells me that the
variables are either num or int.
David was spot-on; I was trying
storage.mode(~miles)
Hello list
Is there a way of identifying from within R whether a script has been source(d)
from Rgui.exe or via Rscript.exe in batch mode?
For the code I have I use the commandArgs() function to pick up command line
args when running in batch mode via Rscript.exe
However I like to get the code
interactive() does not do exactly what you
ask for, but may be close enough. It returns
FALSE when run from Rscript and TRUE from
R when you have not redirected standard input.
Bill Dunlap
Spotfire, TIBCO Software
wdunlap tibco.com
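A sketch of the resulting branch (the message text is an assumption):

```r
if (interactive()) {
  # sourced from an interactive front end such as Rgui
  message("interactive session")
} else {
  # batch mode, e.g. Rscript myscript.R arg1 arg2
  args <- commandArgs(trailingOnly = TRUE)
  message("batch mode with ", length(args), " argument(s)")
}
```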
-Original Message-
From:
On May 2, 2012, at 6:23 PM, Jonathan Greenberg wrote:
R-helpers:
What would be the absolute fastest way to make a large empty file (e.g.
filled with all zeroes) on disk, given a byte size and a given number
of empty values? I know I can use writeBin, but the object in
this case
On most UNIX systems this will leave a large unallocated virtual hole in the
file. If you are not bothered by spreading the allocation task out over the
program execution interval, this won't matter and will probably give the best
performance. However, if you wanted to benchmark your
Worked like a charm. Thanks for the help! It's really appreciated.
Hi,
I am using coxph from the survival package to fit a large model
(100,000 observations, ~35 covariates) using both ridge regression (on
binary covariates) and penalized splines (for continuous covariates).
In fitting, I get a strange error:
Error in if (abs((y[nx] - target)/(y[nx - 1]
Hi,
How can a function in R handle different types of input?
I have written a function, which should calculate the slope from several
3-time-point measurements by linear regression
4 three-time-point-measurements:
x <- cbind(c(1,2,3,4), c(2,3,4,5), c(3,4,5,6))
time points:
time <- c(1,3,9)
function
Hi,
I have a list of data, e.g. r[[i]]
r[[1]]=
1 2
1 6
5 5
5.5 3
r[[2]]=
46
35
78
35
…
r[[500]].
In the first column, the selected values should look like this:
(the later value)-(the former value)=1
In the second column, the selected values should look like this:
(the
Hello,
mpostje wrote
Hi
I've been trying to convert numbers from an online temperature database
into dates and time that R recognizes. the problem is that the database
has put a T
between the numbers and R will not accept any conversions.
this is the format that it's in now
Hello,
I must be missing something very obvious, but I just cannot see it.
Those are the hardest errors to find.
The manual calculation in t1 is wrong: powers group right to left and bind
before additions.
t1b <- 10^(params[1]-xx)^params[2]
t3b <- 1 + t1b
t4b <- t2/t3b
t5b <- params[4] + t4b
all.equal(yy,
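The precedence point is easy to check at the console:

```r
# ^ groups right to left and binds tighter than + or -
stopifnot(10^2^3 == 10^(2^3))    # 1e8, not (10^2)^3
stopifnot((10^2)^3 == 1e6)
stopifnot(2 + 3^2 == 11)         # ^ before +
```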
Ey R-people
Trying to do a binary logistic regression with only
categorical (age, ballot) and binary predictors for a binary response
variable. I can model them, at least if I treat the binary
predictors as categorical with the use of as.factor(etc), and use glm
with a binomial distribution
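A minimal runnable sketch of such a fit, with made-up data (the variable names mirror the question):

```r
d <- data.frame(y      = c(0, 1, 0, 1, 1, 0, 1, 0),
                age    = factor(c("18-29", "30-45", "18-29", "46+",
                                  "30-45", "46+", "18-29", "30-45")),
                ballot = factor(c("A", "B", "A", "B", "A", "B", "B", "A")))
fit <- glm(y ~ age + ballot, data = d, family = binomial)
coef(summary(fit))   # one row per dummy-coded level plus the intercept
```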
I'm looking for a function for the multiple correlation among three variables.
I have created three vectors (x, y and z) and I want to find a correlation
coefficient and evaluate its significance.
Can anyone help me?
Thanks in advance.
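One standard approach (not from the thread): the multiple correlation of z on x and y is the square root of R-squared from lm(z ~ x + y), and its significance is the regression's overall F-test. A sketch with simulated data:

```r
set.seed(42)
x <- rnorm(50)
y <- rnorm(50)
z <- x + 0.5 * y + rnorm(50)
s <- summary(lm(z ~ x + y))
R <- sqrt(s$r.squared)                  # multiple correlation coefficient
p <- pf(s$fstatistic["value"], s$fstatistic["numdf"], s$fstatistic["dendf"],
        lower.tail = FALSE)             # p-value of the overall F-test
c(R = R, p = unname(p))
```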
Hello,
I have applied the Shapiro test to a matrix with 26,925 rows of data using the
following:
F1.norm <- apply(F1.n.mat, 1, shapiro.test)
I would now like to view and export a table of the p and W values from the
Shapiro test, but I am not sure how to approach this.
I have tried the
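One way to tabulate the results (a sketch; F1.n.mat is simulated here since the real matrix isn't available):

```r
set.seed(1)
F1.n.mat <- matrix(rnorm(50), nrow = 5)
F1.norm <- apply(F1.n.mat, 1, shapiro.test)   # a list of htest objects
tab <- t(sapply(F1.norm, function(h) c(W = unname(h$statistic), p = h$p.value)))
tab                                           # one row of W and p per input row
write.csv(tab, "shapiro_pW.csv", row.names = FALSE)
```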
That did it! Thanks very much Berend.
On Wed, May 2, 2012 at 4:19 AM, Berend Hasselman b...@xs4all.nl wrote:
On 02-05-2012, at 07:22, Kaushik Krishnan wrote:
Hey folks
I'm sorry for bringing what must be a very simple question to R-help,
but after some research I haven't been able to find