On Thu, Dec 13, 2012 at 3:22 AM, Chenyi Pan cp...@virginia.edu wrote:
Dear officer
I have a question concerning running R when I am doing my research. Can you
help me to figure that out?
I am now running an MCMC iteration in R, but it always gets
stuck in some loop. This causes big
Good morning!
I have the following data frame (df):
X.outer Y.outer X.PAD1 Y.PAD1 X.PAD2 Y.PAD2 X.PAD3 Y.PAD3 X.PAD4
Y.PAD4
73 574690.0 179740.0 574690.2 179740.0 574618.3 179650 574729.2 179674 574747.1
179598
74 574680.6 179737.0 574693.4 179740.0 574719.0 179688 574831.8
HI,
now I have dataset:
Product Price_LC.1 Price_LC.2 Price_elasticity.1
Price_elasticity.2 Mean_Price Mean_Price_elasticity Trade_Price_Band Country
1 100 357580.1 527483.6 -4.1498383
-2.8459564 473934.0
Hi Raphael,
see below.
I have the following data frame (df):
...
df2
X.PAD2 Y.PAD2
73 574618.3 179650
74 574719.0 179688
75 574719.0 179688
76 574723.5 179678
77 574724.9 179673
78 574747.1 179598
79 574641.8 179570
80 574639.6 179573
81 574618.3 179650
82 NA NA
83 NA
df2 <- df2[!is.na(df2),] isn't doing what you want it to do because
df2 is a data.frame and not a vector.
To solve your problem, review
http://stackoverflow.com/questions/4862178/r-remove-rows-with-nas-in-data-frame
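To make the contrast concrete, here is a small runnable sketch (toy values standing in for the PAD columns in the thread):

```r
# Toy version of df2 with an all-NA row, as in the thread
df2 <- data.frame(X.PAD2 = c(574618.3, 574719.0, NA),
                  Y.PAD2 = c(179650, 179688, NA))

is.na(df2)                   # returns a logical *matrix*, not a row selector
df2[complete.cases(df2), ]   # drop rows containing any NA
na.omit(df2)                 # equivalent, records dropped rows in an attribute
```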
On Thu, Dec 13, 2012 at 3:20 AM, raphael.fel...@art.admin.ch wrote:
Good
HI,
I want to transform the following dataset
Product Price_LC.1 Price_LC.2 Price_elasticity.1 Price_elasticity.2 Mean_Price
Mean_Price_elasticity Trade_Price_Band Country
100 35 52 -4.14 -2.84
47 -3.69
is.na(df2) is not doing what you think it is doing. Perhaps you should read
?na.omit.
---
Jeff Newmiller
DCN: jdnew...@dcn.davis.ca.us
Thank you Nicole!
I did it with the color.palette function in the link you gave me.
I added then in my levelplot function a sequence with at:
at=seq(-40,40,1)
And it works quite well.
Thanks again Nicole.
Thanks to you too, Pascal, and long live the CRC as well as the great C. C.!
;)
--
Mailing list ate the attachment.
Can you send it plain text (if short) or post it somewhere online?
Michael
On Dec 13, 2012, at 1:54 AM, Asis Hallab asis.hal...@gmail.com wrote:
Dear parallel users and developers,
I might have encountered a bug in the function 'mclapply' of package
HI,
Sorry for messing up.
I want to transform the following dataset:
product min_price max_price mean_price country price_band
11 34 50 40 VN 0-300
22 10 30 15 VN 0-300
Into:
Really sorry for messing up.
I want to transform:
product min_price max_price mean_price country price_band
11 34 50 40 VN 0-300
22 10 30 15 VN 0-300
Thank you all, it worked after I checked the length of agrep's result :)
On Tue, Dec 11, 2012 at 6:11 PM, Rui Barradas ruipbarra...@sapo.pt wrote:
Hello,
Inline.
On 11-12-2012 12:04, surekha nagabhushan wrote:
Rui,
I have initialized it...doesn't seem to help...
result_vector <-
Hello,
maybe something like this?
range <- with(dat, paste0("[", min_price, ",", max_price, "]"))
dat2 <- with(dat, data.frame(product = product, VN = mean_price, range =
range, price_band = price_band))
Unless it's a printing problem and you really want the range below VN.
Hope this helps,
Rui
Why don't you use one of the existing MCMC packages? There are many to
choose from...
On Wed, Dec 12, 2012 at 10:49 PM, Chenyi Pan cp...@virginia.edu wrote:
Dear all
I am now running an MCMC iteration in R, but it always gets
stuck in some loop. This causes big problems for my
Hi
I am running R2.15.2 64-bit on Windows 7, using RODBC 1.3-6, MySQL5.5.20,
MySQL Connector 5.5.2 - these are the latest 64-bit versions AFAIK.
sqlQuery and sqlSave work fine as expected, but in a long session with a few
sqlSave() calls, I get an error, for example:
Error in sqlSave(channel =
Hi everyone,
I've followed the instructions from R-Admin Section 6.6 for creating a
local repository. I've modified my Rprofile.site file to add the local
repository to my repos, but I haven't been able to successfully install my
package from the repo.
Here's the code that I've run.
Thanks for your reply. I have compared my data with some other data which works
and I cannot see the difference...
The structure of my data is shown below:
str(data)
'data.frame': 19 obs. of 7 variables:
$ drug: Factor w/ 19 levels "A","B","C","D",..: 1 2 3 4 5 6 7 8 9 10
...
$ param1 : int 111
HI,
I have a large dataset of many countries. I have written a program that runs
through each country and generates one output per country. I want to arrange the
output like this:
one sheet holds the output for one country. How do I achieve this in R?
I have tried this:
library(xlsx)
write.xlsx(nnn,
You can use complete.cases:
df <- df[complete.cases(df), ]
On Thu, Dec 13, 2012 at 3:20 AM, raphael.fel...@art.admin.ch wrote:
Good morning!
I have the following data frame (df):
X.outer Y.outer X.PAD1 Y.PAD1 X.PAD2 Y.PAD2 X.PAD3 Y.PAD3
X.PAD4 Y.PAD4
73 574690.0 179740.0
use append = TRUE inside your write.xlsx() function
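A hedged sketch of the loop this implies, assuming the xlsx package and a hypothetical named list `results` holding one data frame per country (write.xlsx needs append = FALSE on the very first call so the file gets created):

```r
library(xlsx)  # requires a working Java installation

# `results` is a hypothetical named list: one data frame per country
out <- "countries.xlsx"
for (country in names(results)) {
  write.xlsx(results[[country]], file = out,
             sheetName = country,
             append = file.exists(out),  # create on first pass, append after
             row.names = FALSE)
}
```

Each pass adds one sheet named after the country, which matches the "one sheet per country" layout asked for above.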
On Thu, Dec 13, 2012 at 7:52 AM, Tammy Ma metal_lical...@live.com wrote:
HI,
I have a large dataset of many countries. I have written a program that runs
through each country and generates one output per country. I want to put
the
I use the XLConnect package to write out multiple sheets to an Excel workbook
On Thu, Dec 13, 2012 at 7:52 AM, Tammy Ma metal_lical...@live.com wrote:
HI,
I have a large dataset of many countries. I have written a program that runs
through each country and generates one output per country.
Error: could not find function "varimpAUC"
Was this function NOT included in the Windows binary I
downloaded and installed?
Which Windows binary are you talking about? The R installer, the party .zip or
something else?
S Ellison
?try
?tryCatch
(if the suggestion to use an MCMC package does not fix your problem).
-- Bert
On Wed, Dec 12, 2012 at 7:49 PM, Chenyi Pan cp...@virginia.edu wrote:
Dear all
I am now running an MCMC iteration in R, but it always gets
stuck in some loop. This causes big problems for
I am now running an MCMC iteration in R, but it
always gets stuck in some loop.
I never like it when folk just say "Please read and follow the posting guide
referenced in every R help email", but ... please, read and follow the posting
guide referenced in every R help email if you
Hello fellow R-users,
I'm stuck with something I think is pretty stupid, but I can't find out where
I'm wrong, and it's turning me crazy!
I am doing a very simple linear regression with Northing/Easting data, then I
plot the data as well as the regression line:
You don't provide a reproducible example, but my first guess is that the
print method is rounding what appears on the screen, so you aren't actually
using the slope and intercept. See ?print.default and the digits argument
under ?options for more.
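The rounding-on-print point is easy to see with a toy easting-sized number:

```r
b <- 574690.123456789
b                        # default printing shows only ~7 significant digits
print(b, digits = 15)    # the stored value is not rounded

old <- options(digits = 11)  # raise the default for values of this size
b
options(old)                 # restore the previous setting
```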
Why do you need to copy and paste the
Dear useRs,
In a thesis, I found a mention of the use of pairwise deletion in linear
regression and GLM (binomial family).
The author said that he has used R to do the statistics, but I did not find
the option allowing pairwise deletion in both lm and glm functions. Is
there somewhere a package
Hello List,
I am aware that one can set the recursion depth with 'options(expressions
= #)', but it has a 500K limit. Why do we have a 500K limit on this?
While some algorithms are only feasibly solvable with recursion,
and 500K does not sound like too much for, e.g., graph algorithms
such as dependency trees
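For intuition, a small sketch of what the limit interacts with (the C stack, reported by Cstack_info(), typically gives out well before 500K nested expressions):

```r
# options(expressions = N) caps the number of nested R expressions (max 5e5);
# deep recursion usually hits the C stack or protection stack first.
depth <- function(n) if (n <= 0) 0 else depth(n - 1) + 1

old <- options(expressions = 5e5)
Cstack_info()   # shows the C stack size actually available to R
depth(1000)     # fine at modest depths; much deeper risks a stack overflow
options(old)
```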
Hi Arnaud,
A quick help search of lm or glm tells you that 'the factory-fresh default is
na.omit'.
If you then look up 'na.omit', you'll read that it 'returns the object with
incomplete cases removed'.
So, pairwise deletion is the default option in both lm and glm.
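A toy check of that default (note that what lm actually does is complete-case deletion: any row with an NA in any model variable is dropped entirely):

```r
d <- data.frame(x = c(1, 2, NA, 4, 5),
                y = c(2.1, 3.9, 6.0, NA, 10.2))
fit <- lm(y ~ x, data = d)  # default na.action = na.omit drops rows 3 and 4
nobs(fit)                   # 3 complete cases used
```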
On a related note, it goes
Hi,everybody
I have a dataframe like this
FID IID STATUS
1  4621 live
1  4628 dead
2  4631 live
2  4632 live
2  4633 live
2  4634 live
6  4675 live
6  4679 dead
10 4716 dead
10 4719 live
10 4721 dead
11 4726 live
11 4728
Hi,
You could use either:
?na.omit() #the option was already suggested
#or
df2[complete.cases(df2),]
#In this case, this should also work
sapply(df2,function(x) x[!is.na(x)])
#or
apply(df2,2,function(x) x[!is.na(x)]) #If the NAs are not in the same rows,
then the output will be a list with
Hi,
You could try this:
dat3 <- read.table(text="
product min_price max_price mean_price country price_band
11 34 50 40 VN 0-300
22 10 30 15 VN 0-300
", sep="", header=TRUE, stringsAsFactors=FALSE)
I have two dataframes (df) that share a column header (plot.id). In the
1st df, plot.id records are repeated a variable number of times based on
the number of trees monitored within each plot. The 2nd df only has a
single record for each plot.id, and contains a variable named load that
is
Hi! I am new to looping and R in general; and I have spent way too much
time on this one problem and am about a hair away from doing it manually
for the next two days.
So, there is a package that while calculating the statistic creates lists
(that look like matrices) in the background. Each
And if by "stuck" you mean taking too long a time, you can generate an error
at a given
time limit by using setTimeLimit(), and tryCatch() or try() can catch that
error. E.g.
timeOut <- function (expr, cpu = Inf, elapsed = Inf)
{
setTimeLimit(cpu = cpu, elapsed = elapsed, transient =
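A runnable version of that idea; the reset-on-exit line is an assumption about how the truncated snippet continued:

```r
timeOut <- function(expr, cpu = Inf, elapsed = Inf) {
  setTimeLimit(cpu = cpu, elapsed = elapsed, transient = TRUE)
  on.exit(setTimeLimit(cpu = Inf, elapsed = Inf, transient = FALSE))
  expr  # lazily evaluated here, under the time limit
}

# A deliberately slow pure-R loop, interrupted after 0.2 s elapsed time
slow <- function(n) { s <- 0; for (i in seq_len(n)) s <- s + sqrt(i); s }
res <- tryCatch(timeOut(slow(1e9), elapsed = 0.2),
                error = function(e) "timed out")
res
```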
Hi Jose,
To my perception, na.omit is different from pairwise deletion.
With na.omit, you omit a case entirely if there is a missing value for
one of the variables you consider in the model.
In pairwise deletion, a case with some missing values is kept, and the
values that are not missing are
Hi Sarah,
If I understand your requirements correctly, the easiest thing to do
is approach it from a different direction:
df3a - merge(df1, df2)
But you can also use rep for this simple example because plot.id in
df2 is sorted:
nindex - table(df1$plot.id)
df3b <- df2[rep(1:length(nindex),
Hello,
Something like this?
rep(df2$load, table(df1$plot.id))
Hope this helps,
Rui Barradas
On 13-12-2012 14:15, Sarah Haas wrote:
I have two dataframes (df) that share a column header (plot.id). In the
1st df, plot.id records are repeated a variable number of times based on
the number
Hi,
Try ?merge() or ?join() from library(plyr)
res <- merge(df1, df2, by="plot.id")
head(res,6)
# plot.id tree.tag load
#1 plot1 111 17
#2 plot1 112 17
#3 plot1 113 17
#4 plot2 222 6
#5 plot2 223 6
#6 plot3 333 24
A.K
- Original Message
Sorry, Arnaud, I misinterpreted the question.
There isn't a built-in option in lm or glm to run pairwise deletion, but in the
'psych' package you can run regressions on covariance matrices rather than on
raw data. So, first, you can obtain a covariance matrix by cov() with the
option
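The first half of that recipe is plain base R; the regression-from-matrix step lives in psych (function names there have changed across versions, so treat this as a sketch):

```r
d <- data.frame(y  = c(1, 2, NA, 4, 5),
                x1 = c(2, NA, 3, 5, 6),
                x2 = c(1, 1, 2, NA, 3))

# Pairwise deletion: each covariance uses all rows complete for that pair only
C <- cov(d, use = "pairwise.complete.obs")
C
# C (plus the pairwise n's) can then be fed to psych's matrix-input regression
```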
Thanks Jose, but I doubt that the author of these analyses used such a
complex approach.
Arnaud
2012/12/13 Jose Iparraguirre jose.iparragui...@ageuk.org.uk
Sorry, Arnaud, I misinterpreted the question.
There isn't a built-in option in lm or glm to run pairwise deletion, but
in the
Hello,
my series of dates look like
[1] "2012-05-30 18:30:00 UTC" "2012-05-30 19:30:00 UTC"
[3] "2012-05-30 20:30:00 UTC" "2012-05-30 21:30:00 UTC"
[5] "2012-05-30 22:30:00 UTC" "2012-05-30 23:30:00 UTC"
[7] "2012-05-31 00:30:00 UTC" "2012-05-31 01:30:00 UTC"
[9] "2012-05-31 02:30:00 UTC" "2012-05-31
On Dec 13, 2012, at 10:45 AM, Suzen, Mehmet wrote:
Hello List,
I am aware that one can set the recursion depth with 'options(expressions
= #)', but it has a 500K limit. Why do we have a 500K limit on this?
Because it's far beyond what you can handle without changing a lot of other
things. 500k
Hi,
May be this:
p <- ggplot(subset(dat1, STATUS != "nosperm"), aes(x=FID))
p + geom_bar(aes(x=factor(FID), y=..count.., fill=STATUS))
A.K.
- Original Message -
From: Yao He yao.h.1...@gmail.com
To: r-help@r-project.org
Cc:
Sent: Thursday, December 13, 2012 7:38 AM
Subject: [R] How to select a
Is this a one-off or not? If so, why not do it manually? If you need to write a
function, some example data would be helpful.
On Thu, Dec 13, 2012 at 10:52 AM, m p mzp3...@gmail.com wrote:
Hello,
my series of dates look like
[1] 2012-05-30 18:30:00 UTC 2012-05-30 19:30:00 UTC
[3] 2012-05-30
On Dec 13, 2012, at 8:52 AM, m p wrote:
Hello,
my series of dates look like
[1] 2012-05-30 18:30:00 UTC 2012-05-30 19:30:00 UTC
[3] 2012-05-30 20:30:00 UTC 2012-05-30 21:30:00 UTC
[5] 2012-05-30 22:30:00 UTC 2012-05-30 23:30:00 UTC
[7] 2012-05-31 00:30:00 UTC 2012-05-31 01:30:00 UTC
[9]
On Dec 13, 2012, at 9:16 AM, Nathan Miller wrote:
Hi all,
I have played a bit with the reshape package and function along with
melt and cast, but I feel I still don't have a good handle on
how to
use them efficiently. Below I have included an application of
reshape that
is rather clunky
Sorry David,
In my attempt to simplify the example and include only the code I felt was
necessary, I left out the loading of ggplot2, which then imports reshape2,
and which was actually used in the code I provided. Sorry for the mistake
and my misunderstanding of where the reshape function was coming
Hi:
I wonder if anyone can help me with a "cpos function not found" error:
path.package("cwhmisc", quiet = FALSE)
[1] "C:/Users/slee/Documents/R/win-library/2.15/cwhmisc"
So I have the package cwhmisc, where there is a cpos function. But I got an error:
cpos("ab", "b", 1)
Error: could not find function "cpos"
Then I
Hi,
Try this:
seq1 <- seq(from=as.POSIXct("2012-05-30 18:30:00", tz="UTC"),
            to=as.POSIXct("2012-05-31 02:30:00", tz="UTC"), by="1 hour")
seq2 <- seq(from=as.POSIXct("2012-05-31 00:30:00", tz="UTC"),
            to=as.POSIXct("2012-05-31 08:30:00", tz="UTC"), by="1 hour")
seq3 <- seq(from=as.POSIXct("2012-05-31
Easting and northing data use numbers requiring more digits than R's
default of 7. In all the years I've used R, the only time I've needed
to adjust the default digits is with easting and northing data.
Try something like
options(digits = 11)
HTH
On Thu, 13-Dec-2012 at 03:22PM +,
Try this ...
MDD.mean.s10 <- sapply(MC_MDD.noNA$results, function(x) x[[2]][, 7])
Jean
On Thu, Dec 13, 2012 at 8:31 AM, Corinne Lapare corinnelap...@gmail.comwrote:
Hi! I am new to looping and R in general; and I have spent way too much
time on this one problem and am about a hair away
That works perfectly, thanks a lot,
Mark
On Thu, Dec 13, 2012 at 11:34 AM, arun smartpink...@yahoo.com wrote:
Hi,
Try this:
seq1 <- seq(from=as.POSIXct("2012-05-30 18:30:00", tz="UTC"),
            to=as.POSIXct("2012-05-31 02:30:00", tz="UTC"), by="1 hour")
seq2 <- seq(from=as.POSIXct("2012-05-31
I have a large database on sql Server 2012 Developers edition, Windows 7
ultimate edition,
some of my tables are as large as 10GB,
I am running R 2.15.2 with a 64-bit build.
I have been connecting fine to the database and extracting info, but it seems
this was the first time I tried to pull a
On Dec 13, 2012, at 10:56 AM, Shirley Lee wrote:
Hi:
I wonder if anyone can help me with a "cpos function not found" error:
path.package("cwhmisc", quiet = FALSE)
[1] "C:/Users/slee/Documents/R/win-library/2.15/cwhmisc"
So I have the package cwhmisc, where there is a cpos function. But I got an error:
On 13 December 2012 17:52, Simon Urbanek simon.urba...@r-project.org wrote:
Because it's far beyond what you can handle without changing a lot of other
things. 500k expressions will require at least about 320Mb of stack (!) in
the eval() chain alone -- compare that to the 8Mb stack size which
Hi R users,
I am quite new to R and I don't know how to deal with this (surely) easy issue.
I need to replace words in sentences with as many hash marks as the number of
characters per each word, as in the following example:
Mary plays football
#### ##### ########
Any suggestion about the
On 13.12.2012 22:30, simona mancini wrote:
Hi R users,
I am quite new to R and I don't know how to deal with this (surely) easy issue.
I need to replace words in sentences with as many hash marks as the number of
characters per each word, as in the following example:
Mary plays football
Simona:
If you intend to work with text, you need to learn about regular
expressions. There are many tutorials on this topic on the web. Go search.
Then learn about how R handles them via:
?regex ## at the R prompt
Then ask your question more clearly, although by this time you'll probably
have
Hello.
Inline.
On 13-12-2012 21:31, Suzen, Mehmet wrote:
On 13 December 2012 17:52, Simon Urbanek simon.urba...@r-project.org wrote:
Because it's far beyond what you can handle without changing a lot of other
things. 500k expressions will require at least about 320Mb of stack (!) in the
R-helpers,
I have a vector of character strings in which I would like to replace each
parenthetical phrase with a single space, " ". For example if I start with
x, I would like to end up with y.
x <- c("My toast=bog(keep=3 no=4) and eggs(er34)omit=32",
"dogs have ears",
"cats have tails (and ears,
My apologies. I sent too soon!
I did a bit more digging, and found a solution on the R-help archives.
y <- gsub(" *\\([^)]*\\) *", " ", x)
Jean
On Thu, Dec 13, 2012 at 4:53 PM, Adams, Jean jvad...@usgs.gov wrote:
R-helpers,
I have a vector of character strings in which I would like to replace
Hi,
I encountered the behavior, that the duplicated method for data.frames gives
false positives if there are columns of class POSIXct with a clock shift from
DST to standard time.
time <- as.POSIXct("2012-10-28 02:00", tz="Europe/Vienna") + c(0, 60*60)
time
[1] "2012-10-28 02:00:00 CEST" "2012-10-28
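The false positive is reproducible in a few lines; the data.frame method of duplicated() compares character representations, and both instants print as the same wall-clock time:

```r
time <- as.POSIXct("2012-10-28 02:00", tz = "Europe/Vienna") + c(0, 60*60)

diff(time)                    # genuinely one hour apart (CEST -> CET shift)
duplicated(time)              # FALSE FALSE: the underlying numbers differ
duplicated(data.frame(time))  # the second row may be flagged as a duplicate,
                              # because both rows format as the same text
```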
Hi,
You could also use:
gsub("\\w", "#", "Mary plays football")
#[1] "#### ##### ########"
#or
gsub("[A-Za-z]", "#", "Mary plays football")
A.K.
- Original Message -
From: Uwe Ligges lig...@statistik.tu-dortmund.de
To: simona mancini mancinisim...@yahoo.it
Cc: r-help@r-project.org r-help@r-project.org
Hi all,
I'm trying to figure out a way to create a data graphic that I haven't ever
seen an example of before, but hopefully there's an R package out there for it.
The idea is to essentially create a heatmap, but to allow each column and/or
row to be a different width, rather than having
Hi
Thanks for your reply. I have compared my data with some other data which works
and I cannot see the difference...
The structure of my data is shown below:
str(data)
'data.frame': 19 obs. of 7 variables:
$ drug: Factor w/ 19 levels "A","B","C","D",..: 1 2 3 4 5 6 7 8 9 10 ...
$ param1 :
On Dec 13, 2012, at 1:43 PM, Tobias Gauster wrote:
Hi,
I encountered the behavior, that the duplicated method for
data.frames gives false positives if there are columns of class
POSIXct with a clock shift from DST to standard time.
time <- as.POSIXct("2012-10-28 02:00", tz="Europe/Vienna") +
Hi:
The simplest way to do it is to modify the input data frame by taking
out the records not having status "live" or "dead", and then redefining the
factor in the new data frame to get rid of the removed levels. Calling
your input data frame DF rather than data,
DF <- structure(list(FID = c(1L, 1L,
Hello,
I have a table (in a txt file) which looks like this:
Monday 12 78 89
Tuesday 34 44 67
Wednesday 78 98 2
Thursday 34 55 4
Then the table repeats Monday, Tuesday, ... followed by several numbers.
My goal is to read values after the table. My problem is a little more
complicated,
On Dec 13, 2012, at 5:01 PM, David Winsemius wrote:
On Dec 13, 2012, at 1:43 PM, Tobias Gauster wrote:
Hi,
I encountered the behavior, that the duplicated method for data.frames gives
false positives if there are columns of class POSIXct with a clock shift
from DST to standard time.
What have you tried so far that did not work, and what do you want the result
of reading the text file to look like? What is "store somewhere"?
Why does
myDF <- read.table("myData.txt")
which gives you
myDF
V1 V2 V3 V4
1Monday 12 78 89
2 Tuesday 34 44 67
3 Wednesday 78 98 2
Hi,
I guess there are some problems with spaces in this solution.
y
[1] "My toast=bog and eggs omit=32" "dogs have ears"
[3] "cats have tails"
gsub(" *\\([^)]*\\) *", "", x)
#[1] "My toast=bogand eggsomit=32" "dogs have ears"
#[3] "cats have tails"
You could
Hi,
I tried your dataset. I couldn't reproduce the error message. Instead,
mydata <- read.table(text="
drug param1 param2 param3 param4 param5 class
A 111 15 125 40 0.5 1
B 347 13 280 55 3 2
C 335 9 119 89 -40 1
D 477 37 75 2 0 1
E 863 24 180 10 5 2
F 737 28 150 15 6 2
G 390 63 167 12 0 3
H 209 93
Hi,
If it is a data frame with four columns:
dat1 <- read.table(text="
Monday 12 78 89
Tuesday 34 44 67
Wednesday 78 98 2
Thursday 34 55 4
Friday 14 25 13
Monday 18 75 56
Tuesday 28 42 65
", header=FALSE, stringsAsFactors=FALSE)
dat1Mon <- dat1[,-1][dat1[,1]=="Monday",] # rows whose first column is Monday