Hi Cristina,
As Rolf has noted, you probably don't want to persist with "lm" since
I think you have dichotomized your initial dependent variable. I also
think that you meant "don't worry about the change of variable names"
by "how I wrote the variables". I also think that you want to test
Hi lili,
The problem may lie in the fact that I think you are using
"interpolate" when you mean "extrapolate". In that case, the best you
can do is spread values beyond the points that you have. Find the
slope of the line, put a point at each end of your time data
(2009-01-01 and 2009-12-31) and
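If extrapolation really is what's needed, a minimal sketch with lm and predict (the dates and values here are invented, since the original data wasn't posted):

```r
# fit a line to the observed points, then predict at the ends of 2009
x <- as.Date(c("2009-03-01", "2009-06-01", "2009-09-01"))
y <- c(2.1, 3.4, 4.6)
fit <- lm(y ~ as.numeric(x))
ends <- as.Date(c("2009-01-01", "2009-12-31"))
predict(fit, newdata = data.frame(x = ends))
```
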
"arrows" with calls to "foobars".
Obviously "foobars" will only work for vertical bars, but could easily
be modified to handle horizontal bars. I think that should be all you
need.
Jim
On Thu, Jul 21, 2016 at 10:05 PM, Jim Lemon <drjimle...@gmail.com> wrote:
> Hi F
Oops, didn't translate that function correctly:
has_values<-function(x,values) {
if(is.list(x)) {
return(sum(unlist(lapply(x,
function(x,values) return(all(values %in% x)),values))))
}
}
Jim
On Wed, Jul 20, 2016 at 7:18 PM, Jim Lemon <drjimle...@gmail.com> wrote:
> Hi
Hi sri,
Maybe something like this?
has_values<-function(x,values) {
if(is.list(x)) {
return(sum(unlist(lapply(svlist,
function(x,values) return(all(values %in% x)),c(11,12)))))
}
}
svlist<-list(a=c(11,15,12,25),
b=c(11,12),
c=c(15,25),
d=c(134,45,56),
e=46,
f=c(45,56),
g=c(15,12),
Hi Daniel,
Judging by the numbers you mention, the distribution is either very
skewed or not at all normal. If you look at this:
plot(c(0,0.012,0.015,0.057,0.07),c(0,0.05,0.4,0.05,0),type="b")
you will see the general shape of whatever distribution produced these
summary statistics. Did the
Hi Lawrence,
Try installing pbkrtest on its own:
install.packages("pbkrtest")
version 0.4.6 is up on CRAN, so this may allow you to make an end run.
Jim
On Sun, Jul 17, 2016 at 2:06 AM, Lawrence A. Janowitch
wrote:
> I'm trying to load the caret package in R-Studio
","Leon"),3),prop=1)
I don't see any other way to put the names on the y-axis that makes any sense.
Jim
On Fri, Jul 15, 2016 at 5:14 PM, Dagmar <ramga...@gmx.net> wrote:
> Dear all, dear Jim,
>
> Thank you for trying to help Jim. Unfortunately it didn't solv
Hi Christa,
The error messages tell you that the file contains NULL characters,
which can cause problems with reading files. You can remove these
characters with a "hex editor". I am not familiar with those used on
Windows, but look at this Web page:
datst1)" give me
> the error.
>
> Regards,
>
>
>
> On Thursday, July 14, 2016 3:10 AM, Jim Lemon <drjimle...@gmail.com> wrote:
>
>
> Hi Elham,
> It looks to me as though you have created the numeric variable "ID"
> and then passed it to a func
Hi Kyle,
First, see if you can identify which data are getting lost. This will
often reveal what is losing them if there is some common
characteristic.
If not, try to create some toy data (a puddle, not a lake) that will
produce the same problem. Then send an email with the toy data as
formatted
Hi Elham,
It looks to me as though you have created the numeric variable "ID"
and then passed it to a function that expects it to be a character
variable. Try changing the line:
ID<-60101:60128
to:
ID<-paste("ID",60101:60128,sep="")
and see what happens.
Jim
On Wed, Jul 13, 2016 at 8:29 PM,
Hi Tagmarie,
This might help:
datframe$numberdata<-as.numeric(as.character(datframe$numberdat))
library(plotrix)
barcol<-color.scale(datframe$numberdata,extremes=c("black","white"))
barplot(matrix(datframe$numberdata,nrow=2,byrow=TRUE),
beside=TRUE, horiz=TRUE,names.arg=paste("Week",1:3),
Hi Matthew,
This question is a bit mysterious as we don't know what the object
"chr" is. However, have a look at this and see if it is close to what
you want to do.
# set up a little matrix of character values
tTargTFS<-matrix(paste("A",rep(1:4,each=4),"B",rep(1:4,4),sep=""),ncol=4)
# try the
Hi Julia,
You seem to be looking for a test for trend in proportions in the
first question. Have a look at this page:
http://sphweb.bumc.bu.edu/otlt/MPH-Modules/BS/R/R6_CategoricalDataAnalysis/R6_CategoricalDataAnalysis6.html
The second question may require GLMs using experimental condition as a
Hi Kristi,
The period is there for a reason. If you rename the column with spaces in the name, you can only extract it by quoting the name:
x<-data.frame(a=1:3,b=2:4,c=3:5)
> names(x)[3]<-"dif of AB"
> x
a b dif of AB
1 1 2 3
2 2 3 4
3 3 4 5
> x$dif of AB
Error: unexpected symbol in "x$dif of"
> x$'dif of AB'
[1] 3 4 5
Hi Cristina,
Try this:
names(mydata)
It may be NULL or "ppitrst" may be absent.
Jim
On Thu, Jul 7, 2016 at 8:26 PM, Cristina Cametti
wrote:
> Dear all,
>
> I am not able to find a reliable r code to run a multilevel latent class
> model. Indeed, I have to analyze
Hi Marietta,
You may not be aware that the variable k is doing nothing in your
example except running the random variable generation 2 or 3 times for
each cycle of the outer loop as each successive run just overwrites
the one before. If you want to include all two or three lots of values
you will
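For instance, one way to keep every run instead of overwriting is to append each vector to a list (a sketch with invented loop bounds, since the original example wasn't shown in full):

```r
# collect every run of random values in a list
res <- list()
for (i in 1:5) {
  for (k in 1:sample(2:3, 1)) {
    # appending at length+1 keeps each run instead of overwriting
    res[[length(res) + 1]] <- rnorm(10)
  }
}
length(res)  # between 10 and 15 vectors, none overwritten
```
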
Hi Clemence,
I don't have sciplot installed, but the help page suggests that the
"xaxt" argument is available. This will prevent the x axis from being
displayed and you can then specify the x axis you want. Assume that
you want an x axis from 0 to 300 by 50:
axis(1,at=seq(0,300,by=50))
Jim
On
Hi Rolf,
A bit of poking around reveals that different fonts have different
size asterisks. If you are not already using Times Roman, it might be
worth a look. I only have Windows on this (work) PC, so I can't check
the Postscript fonts.
Jim
On Fri, Jul 1, 2016 at 12:50 PM, Rolf Turner
Hi Carlos,
The STATA function assumes estimated population SDs. If you have
sample SDs you can specify with this:
combine<-function(n,mu,sd,sd.type=c("pop","sample")) {
N<-sum(n)
mean<-sum(n*mu)/N
if(sd.type[1]=="sample") {
meanss<-(n[1]*(mean-mu[1])^2+n[2]*(mean-mu[2])^2)/N
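Since the message was truncated at this point, here is one way the complete function might look. It is a sketch using the standard pooled-variance identities for any number of groups, not necessarily the original code:

```r
# combine group means and SDs into an overall mean and SD;
# "pop" assumes SDs computed with divisor n, "sample" with n-1
combine <- function(n, mu, sd, sd.type = c("pop", "sample")) {
  N <- sum(n)
  mean <- sum(n * mu) / N
  ss_between <- sum(n * (mean - mu)^2)
  if (sd.type[1] == "sample") {
    ss_within <- sum((n - 1) * sd^2)
    sd_comb <- sqrt((ss_within + ss_between) / (N - 1))
  } else {
    ss_within <- sum(n * sd^2)
    sd_comb <- sqrt((ss_within + ss_between) / N)
  }
  c(mean = mean, sd = sd_comb)
}
combine(c(10, 20), c(5, 7), c(1, 2))
```
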
Hi,
I don't have excel.link, but have you tried:
"=G9*100/G6\n"
Jim
On Thu, Jun 30, 2016 at 10:34 PM, wrote:
> Hi All,
>
> I am using excel.link to work seamlessly with Excel.
>
> In addition to values, like numbers and strings, I would like to insert a
> full
Hi Doug,
To expand a bit on what Bert has written, all the "best
subset/best model" procedures use random variation in the dataset to
produce a result. This means that you will almost certainly include
variables in your "best model" that cannot be replicated. Sometimes
you can see this as a
Hi Vasilis,
Your question has more to do with telepathy than statistics, but I
will attempt an answer anyway. You have in your possession a matrix
(perhaps) named Matrix. Within are at least two columns of something,
one of which contains dates. You have revealed that the other column
contains
Hi Ken,
As far as I can see, ggtitle accepts a single string. The help page is
a bit obscure, implying that you can change the title with the "labs"
function(?), but using the same explicit string in the "ggtitle" line,
perhaps for didactic purposes. You seem to be asking to substitute
your own
Hi Tanvir,
How about this:
value<-1
(1:length(d))[unlist(lapply(lapply(d,"==",value),any))]
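An equivalent form that some may find clearer (a sketch; the list d here is made up, as the original data wasn't shown):

```r
# which list elements contain the value?
d <- list(a = c(1, 2), b = 3, c = c(1, 5))
value <- 1
which(sapply(d, function(v) value %in% v))
# elements 1 and 3 contain the value
```
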
Jim
On Tue, Jun 28, 2016 at 5:03 PM, Mohammad Tanvir Ahamed via R-help
wrote:
> Can any one please help me. I will apply this for a very large list, about
> 400k vector in a list
owrate+0.5*rain)
> datafram=data1
>
> Change the value of fertilizer --> datafram=data2
>
> predict(lm(nitrate~0.9*fertilizer2-0.02*flowrate+0.5*rain), datafram=data2
>
> Is that better now?
>
>
>
> --
> *From:* Jim Lemon <drjimle...@
> winter site1 9.2 2 4
> winter site2 10.2 4 4
> winter site3 11.2 4.5 4
> Would you please tell me how I can do this in R?
>
> Cheers
>
> Rezvan
>
>
> --
> *From:* Jim Lemon <drjimle...@gmail.com>
> *To:* rezvan hatami <rez
Hi Rezvan,
I'll take a guess that you have been presented with a matrix of
coefficients. You probably know that a linear model is going to look
something like this:
Y = ax1 + bx2 + cx3 ...
So I will further guess that you want to infer a distribution of Y
(the response variable) from more than
Hi Marius,
There are a few things that are happening here. First, the plot area
is not going to be the same as your x and y limits unless you say so:
# run your first example
par("usr")
[1] -0.04 1.04 -0.04 1.04
# but
plot(NA, type = "n", ann = FALSE, axes = FALSE,
xlim = 0:1, ylim =
Now why didn't I think of that?
apply(matrix(c(a,b),ncol=2),1,function(x)x[1]:x[2])
Jim
On Wed, Jun 22, 2016 at 6:14 PM, Rolf Turner <r.tur...@auckland.ac.nz> wrote:
> On 22/06/16 20:00, Jim Lemon wrote:
>>
>> Hi Tanvir,
>> Not at all elegant, but:
>>
>>
Hi Tanvir,
Not at all elegant, but:
make.seq<-function(x) return(seq(x[1],x[2]))
apply(matrix(c(a,b),ncol=2),1,make.seq)
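For comparison, Map can pair the endpoints without building a matrix (a sketch; the vector b is hypothetical, as the message was truncated before it appeared):

```r
# one sequence per (a[i], b[i]) pair, returned as a list
a <- c(1, 3, 6, 9)
b <- c(2, 5, 8, 10)
Map(seq, a, b)
```
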
Jim
On Wed, Jun 22, 2016 at 5:32 PM, Mohammad Tanvir Ahamed via R-help
wrote:
> Hi,
> I want to do the following thing
>
> Input :
> a <- c(1,3,6,9)
>
>
>> length(data1)
> [1] 1
>
> I need to calculate two normalizations with the vectors lens and cnts, and
> have the two options for sorting the normalizations up or down.
>
> Thanks for any help you can give me to fix this issue.
>
> Humberto
>
>> On Jun 18, 2016, at
Hi Lucie,
You can visualize this using the sizetree function (plotrix). You
supply a data frame of the individual choice sequences.
# form a data frame of "random" choices
coltrans<-data.frame(choice1=sample(c("High","Medium","Low"),100,TRUE),
choice2=sample(c("High","Medium","Low"),100,TRUE))
primal : NULL
> $ dual : NULL
> $ ux : NULL
> $ vy : NULL
> $ gamma :function (x)
> $ ORIENTATION: chr "in"
> $ TRANSPOSE : logi FALSE
> $ param : NULL
> - attr(*, "class")= chr "Farrell"
>
>
>
Hi Humberto,
The "0 row" error usually arises from a calculation in which a
non-existent object is used. I see that you have created a vector with
the name "lens" and that may be where this is happening. Have a look
at:
length(lens)
or if it is not too long, just:
lens
If it is zero length,
Hi Shane,
Try the "Kendall" package.
Jim
On Fri, Jun 17, 2016 at 7:47 PM, Shane Carey wrote:
> Hi,
>
> I was hoping someone could help me. I was wondering are there any libraries
> available to undertake a kendall correlation on a matrix of data, in the
> same way as what
Hi Pradip,
I'll assume that you are reading the data from a file:
pm.df<-read.csv("pmdat.txt",stringsAsFactors=FALSE)
# create a vector of numeric values of prevalence
numprev<-as.numeric(sapply(strsplit(trimws(pm.df$prevalence)," "),"[",1))
# order the data frame by that vector
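The archive cut the example off here; the ordering step might look like this (the prevalence strings are invented for illustration):

```r
# self-contained version of the same idea with made-up data
pm.df <- data.frame(prevalence = c("12.1 (per 1000)", "3.4 (per 1000)",
                                   "7.8 (per 1000)"),
                    stringsAsFactors = FALSE)
# first whitespace-separated token of each string, as numeric
numprev <- as.numeric(sapply(strsplit(trimws(pm.df$prevalence), " "), "[", 1))
# order the data frame by that vector
pm.df[order(numprev), , drop = FALSE]
```
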
Hi farzana,
Probably the first thing is to ascertain what the class of "farzana" might be:
class(farzana)
Because "write.csv" expects "the object to be written, preferably a
matrix or data frame. If not, it is attempted to coerce x to a data
frame." to be the first argument. It seems that
Hi Alice,
Have you tried creating a vector of the start position (xpos[1],ypos[1]):
xstart<-rep(xpos[1],n)
ystart<-rep(ypos[1],n)
# where "n" is the number of subsequent positions in the trip
max(trackDistance(xstart,ystart,xpos[2:n],ypos[2:n],...))
may then give you the value of the longest
I'm still unsure of what you are attempting to do with this data.
First, it is very sparse, appearing to be the counts of occurrences of
2567 strings, some of which are recognizable English words. I suspect
that you are trying to get something very simple like the frequency of
these strings within
there are 40309 rows and 26952 columns. file
> size is 110 MB.Please guide
> me what is wrong.
>
> Shashi
> On Thu, 09 Jun 2016 14:27:17 +0530 Jim Lemon wrote
>>Hi Shashi,
>
> Without trying to go through all that code, your error is something
>
> simple. Whe
Hi JI,
The most likely problems are negative numbers for sd or "k" being
larger than the number of mu.m2 or disp.m2 values.
Jim
On Wed, Jun 15, 2016 at 4:06 AM, JI Cho wrote:
> Dear R users,
>
> I have been using rnorm, rbinom and have been getting the following warning
>
>> In addition: Warning messages:
>> 1: In min(x) : no non-missing arguments to min; returning Inf
>> 2: In max(x) : no non-missing arguments to max; returning -Inf
>>
>>
>> On Mon, Jun 13, 2016 at 3:19 AM, Jim Lemon <drjimle...@gmail.com> wrote:
>>>
>
Hi Fahman,
That error message usually means that there is no newline at the end
of the last line of the input file. Try adding a newline.
Jim
On Tue, Jun 14, 2016 at 1:17 AM, Fahman Khan via R-help
wrote:
> I have written a following piece of code.
>> binaryFile <-
Hi Greg,
Okay, I have a better idea now of what you want. The problem of
multiple matches is still there, but here is a start:
# this data frame actually contains all the values in ref in the "reg" field
map<-read.table(text="reg p rate
10276 0.700 3.867e-18
71608 0.830 4.542e-16
29220 0.430
Hi Greg,
You've got a problem that you don't seem to have identified. Your
"reg" field in the "map" data frame can define at most 10 unique
values. This means that each value will be repeated about 270 times.
Unless there are constraints you haven't mentioned, we would expect
that in 135 cases
Hi Francisco,
Your example plot shows me what you want to do (I think). I'm guessing that
you want to display the values in your matrix that are NOT zero or NA,
either colored in some way, or just in one color as the example. The
following example shows how to do both of these:
#
Hi Francisco,
I tried this just to see if it would work. It did, after a while.
wtmat<-matrix(rnorm(4602*1817),nrow=4602)
library(plotrix)
x11(width=5,height=13)
color2D.matplot(wtmat,c(1,1,0),c(0,1,0),0,border=FALSE)
Jim
On Fri, Jun 10, 2016 at 8:27 AM, FRANCISCO XAVIER SUMBA TORAL
Hi John,
With due respect to the other respondents, here is something that might help:
# get a vector of values
foo<-rnorm(100)
# get a vector of increasing indices (aka your "recent" values)
bar<-sort(sample(1:100,40))
# write a function to "clump" the adjacent index values
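The message was cut off at this point; one common way to do the clumping is to start a new group wherever the gap between successive indices exceeds 1 (a sketch with fixed indices so the result is visible):

```r
# start a new clump wherever the gap between indices is more than 1
bar <- c(1, 2, 3, 7, 8, 10, 11, 12)
clumps <- split(bar, cumsum(c(1, diff(bar) > 1)))
clumps
```
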
Hi Joonas,
It is easy to display hierarchic classification using either the
plot.dendrite or sizetree functions (plotrix). At the moment, they
will only display counts, not percentages. It would not be too
difficult to reprogram either one to display percentages. Here are
examples with shortened
Hi Tjun Kiat,
The following examples work for me. One uses the dates you have
specified, adding one weekly date to cover the range of your daily
dates, otherwise you will generate NAs. The second defines weekly
breaks for a year and then simulates daily dates throughout that year.
Remember that
Hi Stefano,
I might be missing something, but try this:
MteBove<-read.table(text="posix_date posix_time snowtemp
2010-01-19 23:30:00 45 NA
2010-01-20 00:30:00 10 2.7
2010-01-20 03:00:00 45 NA
2010-01-20 03:30:00 44 NA
2010-01-20 04:00:00 44 NA
2010-01-20 04:30:00 44 NA
2010-01-20
w to get the column names from the args[] to the
> aes(x=?, y=?).
> There must be some kind of indirect reference or eval() or substitute()
> operator in R but I can't find it.
>
> Anyway, thanks for taking a shot at this.
>
> Best,
>
> doug
>
>
>
>
> On Sun, Jun 5
Hi Doug,
I think this will work for you:
adl1<-read.csv("test.csv")
adl1[,"a"]
[1] 1 4 7
so, adl1[,args[1]] should get you the column that you pass as the
first argument.
Jim
On Mon, Jun 6, 2016 at 5:45 AM, Douglas Johnson wrote:
> I'm guessing this is trivial but I've
Hi Nick,
I think you want to get the maximum run length:
jja<-runif(92,0,16)
med_jja<-median(jja)
med_jja
[1] 7.428935
# get a logical vector of values greater than the median
jja_plus_med<-jja > med_jja
# now get the length of runs
runs_jja_plus_med<-rle(jja_plus_med)
# finally find the maximum
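The final line was cut off by the archive; it might look something like this (a self-contained sketch):

```r
jja <- runif(92, 0, 16)
# logical vector of values above the median
jja_plus_med <- jja > median(jja)
runs <- rle(jja_plus_med)
# longest run of TRUE values, i.e. of days above the median
max(runs$lengths[runs$values])
```
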
Hi EKE,
Your problem may be that the date strings are being read as a factor.
Try using stringsAsFactors=FALSE when you read the data in. Another
way is to convert your dates to strings when passing to as.Date:
as.Date(as.character(mydf$Date),"%m/%d/%Y")
Jim
On Sun, Jun 5, 2016 at 10:53 PM, Ek
Hi Gafar,
As Jeff has pointed out, the median value may not exist within the
dataset. However, this function will give you either the position of
the value that is the median, or the position of the two closest
values if none equal the median. Be aware that this function may fall
victim to the
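The function itself was lost in the archive; here is a sketch of what such a function could look like (the name and details are hypothetical):

```r
# position of the value equal to the median, or of the two
# closest values when no element equals the median
median_pos <- function(x) {
  m <- median(x)
  hits <- which(x == m)
  if (length(hits) > 0) return(hits)
  order(abs(x - m))[1:2]
}
median_pos(c(1, 2, 3, 4))  # positions of 2 and 3
```
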
Hi Michael,
Have a look at my.symbols in the TeachingDemos package.
Jim
On Sat, Jun 4, 2016 at 4:19 AM, Michael Weber
wrote:
>
> Dear R users,
>
> I have been using R for several years and really appreciate all the
> developments which have been done. Maybe you can
Hi Tjun Kiat,
This seems to work:
daily_date<-as.Date(paste("2000-01",1:28,sep="-"),"%Y-%m-%d")
weekly_date<-as.Date(paste(c(1,8,15,22,28),"01/2000",sep="/"),
"%d/%m/%Y")
cut(daily_date,breaks=weekly_date,include.lowest=TRUE,
labels=paste("Week",1:4))
Jim
On Fri, Jun 3, 2016 at 6:00 PM, TJUN
Hi ce,
a<-10
condition<-expression("a>0")
if(eval(parse(text=condition))) cat("a>0\n")
Jim
On Thu, Jun 2, 2016 at 12:30 PM, ce wrote:
>
> Dear all,
>
> I want to make an if condition variable like :
>
> a = 10
> CONDITION = " a > 0 "
>
> if ( CONDITION ) print(" a is
"80+"))
age
value.labels(age)
Jim
On Thu, Jun 2, 2016 at 3:37 AM, <g.maub...@weinwolf.de> wrote:
> Hi Jim,
>
> many thanks for the hint.
>
> When looking at the documentation I did not get how I do control which
> value gets which label. Is it possible to define
Hi Georg,
You may find the "add.value.labels" function in the prettyR package useful.
Jim
On Tue, May 31, 2016 at 10:00 PM, wrote:
> Hi All,
>
> I am using R for social sciences. In this field I am used to using short
> variable names like "q1" for question 1, "q2" for
Hi Jun,
As you do seem to want to replace commas within, not between, strings, try gsub:
gsub(",",";",test[,1])
Jim
> Dear list,
>
> Say I have a data frame
>
> test <- data.frame(C1=c('a,b,c,d'),C2=c('g,h,f'))
>
> I want to replace the commas with semicolons
>
> sub(',',';',test$C1) -> test$C1
Hi Mark,
As with most other annoying, time-consuming operations in the internet
and computing in general, blame the spammers and the scammers. The R
help list is attacked by both and requires both automatic and human
scanning of messages to minimize intrusions. Unfortunately no one has
come up
Hi Naresh,
Have a look at the addtable2plot function (plotrix), especially the
second example, and the color.scale function, also in plotrix.
Jim
On Sat, May 28, 2016 at 11:10 PM, Naresh Gurbuxani
wrote:
> I want to print a table where table elements are colored
Hi Lorenzo,
Maybe:
tt<-tt[!is.nan(tt)]
Jim
On Fri, May 27, 2016 at 8:14 PM, Lorenzo Isella
wrote:
> Dear All,
> I am sure the answer is a one liner, but I am banging my head against
> the wall and googling here and there has not helped much.
> Consider the following
Hi Kimmo,
I was unable to work out how to do this in lattice, but this might help:
kedf<-read.table(text="Group.1 Group.2 x Freq
deutschland achtziger 2.00 1
deutschland alt 1.25 4
deutschland anfang -2.00 1
deutschland ansehen 1.00 2
deutschland
Hi Beatriz,
I'll guess that you have a number of files with names like this:
Samples_1.txt
Samples_2.txt
...
Each one can be read with a function like read.table and will return a
data frame with default names (V1, V2, ...). You then want to extract
the first element (column) of the data frame.
"sites", cex=1, type="p",scaling=-3)
>
> Thank you very much again
>
> Jackson
>
> 2016-05-23 23:08 GMT-03:00 Jim Lemon <drjimle...@gmail.com>:
>>
>> Hi Jackson,
>> One way to assign colors to values is:
>>
>> library(plotrix)
>>
Hi Jackson,
One way to assign colors to values is:
library(plotrix)
ages<-seq(1, 500, by = 100)
agecol<-color.scale(ages,extremes=c("purple","red"))
Then just use "agecol" for your point colors.
Jim
On Tue, May 24, 2016 at 11:57 AM, Jackson Rodrigues
wrote:
>
>
> boolnet allow to us to create the network and write a logical rules in text
> file, then we can loaded it as a network inside R to study the dynamic
> behavior.
>
> but i can not work on it because this error and converting it to data file.
>
>
>
>
>
>
>
>
Hi Mohammad,
I don't have the BoolNet package installed, but the error means that
the object "cellcontrol" is not there for the function to use. It
should be a network "generated by generateRandomNKNetwork, or
reconstructed by reconstructNetwork" as detailed in the help pages.
Jim
On Mon, May
Hi Steven,
as.data.frame(sapply(a,"*",p))
Jim
On Mon, May 23, 2016 at 8:22 AM, Steven Yen wrote:
> Dear R users:
>
> > # p is a vector if length 10
> > # a is a vector if length 3
> > # I like to create a matrix with
> > # the first column being p multiplied by a[1]
>
Hi laomeng_3,
Have a look at the p.adjust function (stats).
Jim
On Fri, May 20, 2016 at 1:56 AM, laomeng_3 wrote:
> Hi all:
> As to the anova, we can perform multiple comparison via TukeyHSD.
> But as to chi-square test for frequency table,how to perform multiple
>
Hi John,
I may be misunderstanding what you want, but this seems to produce the
output you specify:
A<-sample(-10:100,100)
i<-rep(1:10,c(5:13,19))
# replace the last value of x with the maximum
max_last<-function(x) return(c(x[-length(x)],max(x)))
as.vector(unlist(by(A,i,max_last)))
and this is
Hi Shailaja,
If you just want a line of words, it's not too difficult if you have
the word frequencies:
# take a common sentence
sentence<-"The quick brown fox jumps over the lazy dog"
words<-unlist(strsplit(sentence," "))
# make up some word frequencies
wordfreq<-c(10,1,2,2,3,4,10,6,5)
Hi again,
Sorry, didn't read that correctly. No.
Jim
On Tue, May 17, 2016 at 8:48 PM, Jim Lemon <drjimle...@gmail.com> wrote:
> Hi Kristi,
> Multiply the standard error by the square root of the sample size.
>
> Jim
>
>
> On Tue, May 17, 2016 at 8:09 PM,
Hi Kristi,
Multiply the standard error by the square root of the sample size.
Jim
On Tue, May 17, 2016 at 8:09 PM, Kristi Glover
wrote:
> Dear R User,
>
> I have a data with a mean and Standard Error (SE) but no sample size, I am
> wondering whether I can compute
Hi again,
Sorry, that should be:
chop_string<-function(x,ends) {
starts<-c(1,ends[-length(ends)]+1)
return(substring(x,starts,ends))
}
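A quick usage example, repeating the corrected function so the snippet stands alone:

```r
# split a string at the given end positions
chop_string <- function(x, ends) {
  starts <- c(1, ends[-length(ends)] + 1)
  return(substring(x, starts, ends))
}
chop_string("abcdefgh", c(3, 5, 8))
# → "abc" "de" "fgh"
```
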
Jim
On Thu, May 12, 2016 at 10:05 AM, Jim Lemon <drjimle...@gmail.com> wrote:
> Hi Jan,
> This might be helpful:
>
> chop_string<-f
Hi Jan,
This might be helpful:
chop_string<-function(x,ends) {
starts<-c(1,ends[-length(ends)]-1)
return(substring(x,starts,ends))
}
Jim
On Thu, May 12, 2016 at 7:23 AM, Jan Kacaba wrote:
> Here is my attempt at function which computes margins from positions.
>
>
Hi Witold,
You could try Ben Bolker's "clean.args" function in the plotrix package.
Jim
On Wed, May 11, 2016 at 6:45 PM, Witold E Wolski wrote:
> Hi,
>
> I am looking for a documentation describing how to manipulate the
> "..." . Searching R-intro.html gives to many not
; sums1 > 0 && sums2 > 0)
> {
> out <- sum / ((sqrt(sums1) * sqrt(sums2)))
> }else
> {
> out <-0
> }
> End Calculation
>
> vec1 <- append(vec1,out);
> vec
Hi Shashi,
The assumption that anyone on the list apart from yourself knows what
"some calculation" involves is incorrect. I suspect that "what is
wrong" may be one of two things:
1) "some calculation" includes a very large number of operations,
perhaps leading to "disk-thrashing" when your 16GB
; close(zz)
>
> but the error persists.
>
> To me it looks like R is still accessing the file and not releasing the
> connection for other programs. close(zz) should have solved the problem
> but unfortantely it doesn't.
>
> What else could I try?
>
> Kind regards
>
Hi Georg,
I don't suppose that you have:
1) checked that the file "all.Rout" exists somewhere?
2) if so, looked at the file with Notepad, perhaps?
3) let us in on the secret by pasting the contents of "all.Rout" into
your message if it is not too big?
At a guess, trying:
close(zz)
might get
Hi Luca,
The function readHTMLtable is in the XML package, not httr. Perhaps
that is the problem as I don't see a dependency in httr for XML
(although xml2 is suggested).
Jim
On Tue, May 10, 2016 at 2:58 PM, Luca Meyer wrote:
> Hello,
>
> I am trying to run a code I have
Hi Prasad,
You are probably looking for linear modelling of some sort. The first
thing to do is to read the data into R (if you haven't already done
so). You will almost invariably have a _data frame_ in which the
columns will contain values for at least year and profit.
Then plot the profits of
Hi Emily,
I haven't tested this exhaustively, but it seems to work:
df<-data.frame(id=2001:3300,yrssmoke=sample(1:40,1300,TRUE),
cigsdaytotal=sample(1:60,1300,TRUE),yrsquit=sample(1:20,1300,TRUE))
dfNA<-sapply(df$id,"%in%",c(2165,2534,2553,2611,2983,3233))
# create your NA values
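The truncated step might have looked like this (a self-contained sketch reusing the same toy data):

```r
df <- data.frame(id = 2001:3300, yrssmoke = sample(1:40, 1300, TRUE),
                 cigsdaytotal = sample(1:60, 1300, TRUE),
                 yrsquit = sample(1:20, 1300, TRUE))
dfNA <- sapply(df$id, "%in%", c(2165, 2534, 2553, 2611, 2983, 3233))
# blank out the smoking variables for the flagged ids
df[dfNA, c("yrssmoke", "cigsdaytotal", "yrsquit")] <- NA
sum(is.na(df$yrssmoke))  # 6
```
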
Hi Andreas,
Try installing plyr, arm, scales and mi separately. If you get an
error message about a version mismatch, that's where your problem is.
_Sometimes_ upgrading R will fix it, if the problem is that the
version you are downloading is too new for your R version.
Jim
On Thu, May 5, 2016
Hi Steven,
If this is just a one-off, you could do this:
grepl("age",x) & nchar(x)<4
returning a logical vector containing TRUE for "age" but not "age2"
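Another option that doesn't depend on the length of the name is to anchor the pattern (a small sketch):

```r
# ^ and $ anchor the match to the whole string,
# so "age2" and "stage" are excluded
x <- c("age", "age2", "stage")
grepl("^age$", x)
# TRUE FALSE FALSE
```
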
Jim
On Wed, May 4, 2016 at 3:45 PM, Steven Yen wrote:
> Dear all
> In the grep command below, is there a way to identify
Hi Yasil,
If you look at what happens to a[,3] after the "strsplit" it is easy:
> a[,3]
[1] "a,b" "c,d"
Here a[,3] is two strings
a$c <- strsplit(a$c, ",")
> a[,3]
[[1]]
[1] "a" "b"
[[2]]
[1] "c" "d"
Now a[,3] is a two element list. What R probably did was to take the
first component of a[,3]
do at the moment.
Jim
On Sun, May 1, 2016 at 11:19 AM, jpm miao <miao...@gmail.com> wrote:
> Thanks.
> Could we print the row/column names, "alpha1" and "alpha2" to the csv file?
>
> 2016-04-30 17:06 GMT-07:00 Jim Lemon <drjimle...@gmail.com>:
>>
>
Hi Lars,
A mystery, but for the bodgy characters in your error message. Perhaps
there is a problem with R trying to read a different character set
from that used in the package.
Jim
On Sat, Apr 30, 2016 at 8:22 PM, Lars Bishop wrote:
> Hello,
>
> I can’t seem to be able to
itate=TRUE)
> alphatab
>
> A B Total
> A 8 10 18
> B 7 5 12
> C 9 11 20
> Total 24 26 50
>
>> sink("temp_table3.csv")
>> delim.xtab(alphatab,pct=NA,interdigitate=TRUE)
>> sink()
>> sink("temp_table3.csv", append=TRUE)
>>
Hi Atte,
I'm not sure that this actually works, and it's very much a quick hack:
sums_x<-function(x,addends=1,depth=1) {
if(depth==1) {
addends<-rep(addends,x)
addlist<-list(addends)
} else {
addlist<-list()
}
lenadd<-length(addends)
while(lenadd > 2) {
Hi jpm miao,
You can get CSV files that can be imported into Excel like this:
library(prettyR)
sink("excel_table1.csv")
delim.table(table(df[,c("y","z")]))
sink()
sink("excel_table2.csv")
delim.table(as.data.frame(table(df[,c("y","z")])),label="")
sink()
sink("excel_table3.csv")
Hi Georg,
You could just use this:
Umsatz_2011<-c(1,2,3,4,5,NA,7,8,NA,10)
Kunde_2011<-rep(0:1,5)
Check_Kunde_2011<-
c("OK","Check")[as.numeric(is.na(Umsatz_2011) & Kunde_2011 == 1)+1]
Check_Kunde_2011 will be a vector of strings.
Jim
On Tue, Apr 26, 2016 at 6:09 PM,
Hi Sunny,
Try this:
# notice that I have replaced the fancy hyphens with real hyphens
end<-c("2001-","1992-","2013-","2013-","2013-","2013-",
"1993-2007","2010-","2012-","1984-1992","1996-","2015-")
splitends<-sapply(end,strsplit,"-")
last_bit<-function(x) return(x[length(x)])
sapply(splitends,last_bit)
Hi Adrian,
This is probably taking a long time. I first tried with 7x10^6 times
and values and it took several minutes. The following code does what I
expected:
amdat<-data.frame(time=1:70,value=rnorm(70,-4))
amdat$value[amdat$value<0]<-0
sum(amdat$value)
[1] 5.07101