Like this?
> con <- textConnection(object = 'A,B\n1,NaN\nNA,2')
> tmp <- read.table(con, header = TRUE, sep = ',', na.strings = '',
+                   stringsAsFactors = FALSE,
+                   colClasses = c("numeric", "character"))
> close.connection(con)
> tmp
   A   B
1  1 NaN
2 NA   2
> class(tmp[,1])
[1] "numeric"
On 10/22/19 10:19 PM, Yeasmin Alea wrote:
Thank you. Can you please have a look at the data sets, script and
question below?

Dataset-1: Pen

   YEAR DAY      X    Y  Sig   phase
1  1981   9 -0.213 1.08 1.10 Phase-7
2  1981  10  0.065 1.05 1.05 Phase-6
On 10/22/19 12:48 PM, Luigi Marongiu wrote:
I thought it was a major package for ecological analysis.
Yours is the first question in 20 years of R-help about the package iNEXT.
--
David
Anyway,
thank you for the tips. I'll take it from there.
On Tue, Oct 22, 2019 at 5:29 PM Jeff Newmiller
Dear all,
I investigated the effect of a herbicide on weed growth (dose-response). I
used the drc package, fitted the data to the best model chosen with the
mselect function, and estimated the model parameters. For some data the
logistic model was best and for others the Weibull. But when I wanted to
draw the relative potency curve, the R program
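For context, the drc model-selection step being described usually looks like the minimal sketch below. This is only an illustration of the interface, not the poster's actual script; ryegrass is an example data set shipped with drc (root length versus herbicide concentration):

```r
library(drc)  # dose-response curve analysis

# fit a four-parameter log-logistic model to the bundled ryegrass data
m <- drm(rootl ~ conc, data = ryegrass, fct = LL.4())

# rank candidate models (logistic, two Weibull forms) against the fit,
# by log-likelihood, information criterion and lack-of-fit test
mselect(m, fctList = list(L.4(), W1.4(), W2.4()))
```

mselect only ranks the candidates; the chosen model still has to be refitted with drm before plotting or computing relative potencies.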
Thank you. Can you please have a look at the data sets, script and
question below?

Dataset-1: Pen

   YEAR DAY      X    Y  Sig   phase
1  1981   9 -0.213 1.08 1.10 Phase-7
2  1981  10  0.065 1.05 1.05 Phase-6

Dataset-2: Book

  YEAR Time
1 1981
Apparently, the iNEXT package was first described in an academic paper
published in 2016, although CRAN archives go back to 2015.
http://chao.stat.nthu.edu.tw/wordpress/paper/120_pdf_appendix.pdf
https://cran.r-project.org/src/contrib/Archive/iNEXT/
The vignette below has a section entitled
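For anyone finding this thread, a minimal iNEXT call looks roughly like the sketch below. This is hedged from the package's documented interface rather than taken from the thread; spider is an abundance data set bundled with iNEXT:

```r
library(iNEXT)

data(spider)  # example abundance data shipped with the package
# q = 0 requests species richness; q = 1 and q = 2 give Shannon/Simpson diversity
out <- iNEXT(spider, q = 0, datatype = "abundance")
ggiNEXT(out)  # rarefaction/extrapolation curves drawn with ggplot2
```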
Hello,
I have two data frames like this:
> head(l4)
    X1    X2 X3 X4  X5  variant_id pval_nominal gene_id.LCL
1 chr1 13550  G  A b38 1:13550:G:A     0.375614 ENSG0227232
2 chr1 14671  G  C b38 1:14671:G:C     0.474708 ENSG0227232
3 chr1 14677  G  A b38 1:14677:G:A     0.699887
Ah, it looks like a memory allocation problem.
Jim
On Thu, Oct 24, 2019 at 10:05 AM Ana Marija wrote:
>
> I also tried left_join but I got: Error: std::bad_alloc
>
> > df3 <- left_join(l4, asign, by = c("chr","pos"))
> Error: std::bad_alloc
> > dim(l4)
> [1] 166941635 8
> > dim(asign)
>
Hi Ana,
When I run this example taken from your email:
l4<-read.table(text="X1 X2 X3 X4 X5 variant_id pval_nominal gene_id.LCL
chr1 13550 G A b38 1:13550:G:A 0.375614 ENSG0227232
chr1 14671 G C b38 1:14671:G:C 0.474708 ENSG0227232
chr1 14677 G A b38 1:14677:G:A 0.699887
I don't have it installed - that was merely a suggestion. I notice
that both the data.table and dplyr packages are mentioned as possibilities
for "merge big datasets in r". Apparently the best way to do it, if you
have a database manager, is to read the two datasets into tables and do
the join via SQL.
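A minimal sketch of that database route, assuming the DBI and RSQLite packages are available and that both frames carry the chr and pos join columns used elsewhere in this thread (the file name is arbitrary):

```r
library(DBI)

# an on-disk SQLite file keeps the join from being limited by RAM
con <- dbConnect(RSQLite::SQLite(), "join_work.sqlite")
dbWriteTable(con, "l4", l4)
dbWriteTable(con, "asign", asign)

# LEFT JOIN keeps every row of l4, matching asign rows where chr/pos agree
df3 <- dbGetQuery(con, "
  SELECT l4.*, asign.*
    FROM l4
    LEFT JOIN asign
      ON l4.chr = asign.chr AND l4.pos = asign.pos")

dbDisconnect(con)
```

With tables this large it may also pay to load the data in chunks and to create an index on (chr, pos) before joining.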
No, can you please send me an example of how the command would look in my case?
On Wed, Oct 23, 2019 at 6:16 PM Jim Lemon wrote:
>
> Yes. Have you tried the bigmemory package?
>
> Jim
>
> On Thu, Oct 24, 2019 at 10:08 AM Ana Marija
> wrote:
> >
> > Hi Jim,
> >
> > I think one of the issues is
On 23/10/2019 7:04 p.m., Ana Marija wrote:
I also tried left_join but I got: Error: std::bad_alloc
df3 <- left_join(l4, asign, by = c("chr","pos"))
Error: std::bad_alloc
Looks like bugs in whatever package you're finding "left_join" in (and
previously "merge"). Are those from dplyr and
I am using R-3.6.1
and these libraries:
library(data.table)
library(dplyr)
On Wed, Oct 23, 2019 at 6:54 PM Duncan Murdoch wrote:
>
> On 23/10/2019 7:04 p.m., Ana Marija wrote:
> > I also tried left_join but I got: Error: std::bad_alloc
> >
> >> df3 <- left_join(l4, asign, by = c("chr","pos"))
> >
Hi,
Is there a way to make read.table consider NaN as a string of characters rather
than the internal NaN? Changing the na.strings argument does not seem to have
any effect on how R interprets the NaN string (while it does affect the NA
string).
con <- textConnection(object = 'A,B\n1,NaN\nNA,2')
I also tried left_join but I got: Error: std::bad_alloc
> df3 <- left_join(l4, asign, by = c("chr","pos"))
Error: std::bad_alloc
> dim(l4)
[1] 166941635 8
> dim(asign)
[1] 107371528 5
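For a sense of scale, a rough back-of-envelope on those dimensions, treating every cell as an 8-byte double (only an approximation for character columns, but it gives the order of magnitude):

```r
# dimensions from the thread: l4 is 166941635 x 8, asign is 107371528 x 5
bytes_l4    <- 166941635 * 8 * 8
bytes_asign <- 107371528 * 5 * 8

bytes_l4 / 1024^3     # just under 10 GiB for l4 alone
bytes_asign / 1024^3  # about 4 GiB for asign
```

The two inputs plus the join result can easily exceed the RAM of a typical desktop, which is consistent with the std::bad_alloc error.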
On Wed, Oct 23, 2019 at 5:32 PM Ana Marija wrote:
>
> Hello,
>
> I have two data frames like
Yes. Have you tried the bigmemory package?
Jim
On Thu, Oct 24, 2019 at 10:08 AM Ana Marija wrote:
>
> Hi Jim,
>
> I think one of the issues is that the data frames are so big,
> > dim(l4)
> [1] 166941635 8
> > dim(asign)
> [1] 107371528 5
>
> so my example would not reproduce the
Thanks, but I would need a solution in R.
On Wed, Oct 23, 2019 at 6:31 PM Jim Lemon wrote:
>
> I don't have it installed - that was merely a suggestion. I notice
> that both data.table and dplyr packages are mentioned as possibilities
> for "merge big datasets in r". Apparently the best way to do it
Hi Jim,
I think one of the issues is that the data frames are so big,
> dim(l4)
[1] 166941635 8
> dim(asign)
[1] 107371528 5
so my example would not reproduce the error
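Since data.table came up earlier in the thread, a keyed join is one lower-overhead alternative to dplyr::left_join. A minimal sketch, assuming both frames really do carry chr and pos columns (it may still run out of memory at this scale, but data.table avoids some intermediate copies):

```r
library(data.table)

setDT(l4)     # convert to data.table in place, without copying
setDT(asign)

# all rows of l4 are kept, with matching asign columns (NA where no match);
# this is equivalent to left_join(l4, asign, by = c("chr", "pos"))
df3 <- asign[l4, on = c("chr", "pos")]
```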
On Wed, Oct 23, 2019 at 6:05 PM Jim Lemon wrote:
>
> Hi Ana,
> When I run this example taken from your email:
>
>
Ana... contributed packages like data.table and dplyr are developed completely
independently of R, have their own versions, and in fact both of them have
recommendations in their package descriptions as to how to report bugs.
As for getting help here, you really need to supply ALL of the
Hi,
***try not to think in ifs or for loops in R***.
To elaborate on Jim's solution further: with a simple function based on a
logical expression

fff <- function(x) (x != "") + 0

you could use apply

t(apply(phdf[,3:5], 1, fff))

and add the results to your data frame columns:

phdf[, 6:8] <- t(apply(phdf[,3:5], 1, fff))
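A self-contained toy run of that apply idiom; the phdf frame below is invented purely for illustration:

```r
# toy frame: two id columns, then three character columns in positions 3:5
phdf <- data.frame(id   = 1:3,
                   name = c("a", "b", "c"),
                   p1   = c("x", "",  "y"),
                   p2   = c("",  "",  "z"),
                   p3   = c("q", "r", ""),
                   stringsAsFactors = FALSE)

fff <- function(x) (x != "") + 0   # 1 where non-empty, 0 where empty

# apply works row-wise and returns one column per input row,
# so transpose the result back before assigning
phdf[, 6:8] <- t(apply(phdf[, 3:5], 1, fff))
phdf
```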
Dear Lorena Saavedra Aracena,
I cannot quite understand your question. In R the answer often depends on
the data and on the way they are accessed. For example, if you have a
data.frame called datos, possibly summary(datos) is enough; if not, some
function over the
Good evening,
I am new to R and sometimes I find it hard to think through the
calculations in a practical way, so I would appreciate your help.
I have a data matrix with dim = 35745 19, corresponding to the locations
of 39 dogs; each dog has a little more or a little less than 1000
records.
I need to know