Thank you very much. Arun's reply is exactly what I need. Thank you once
again!~
ray
On Sat, Mar 16, 2013 at 12:31 AM, Berend Hasselman wrote:
>
> On 15-03-2013, at 17:08, Ray Cheung wrote:
>
> > Dear All,
> >
> > I've an array with some missing values (NA) in between. I want to remove
> > that particular matrix if a missing value is detected. How can I do so?
Hi,
Try this:
directory <- "/home/arunksa111/dados"
#modified the function
GetFileList <- function(directory, number){
  setwd(directory)
  filelist1 <- dir()
  lista <- dir(directory, pattern = paste("MSMS_", number, "PepInfo.txt", sep = ""),
               full.names = TRUE, recursive = TRUE)
  # the archived message is cut off here; presumably both listings are returned
  output <- list(filelist1, lista)
  output
}
I decided to follow up my own suggestion and look at the robustness of
nls vs. nlxb. NOTE: this problem is NOT one that nls() would usually be
applied to. The script below is very crude, but does illustrate that
nls() is unable to find a solution in >70% of tries where nlxb (a
Marquardt approach) succeeds.
Hi Ivo,
Try something like this:
rt(1e5, df = 2.6, ncp = (1 - 0) * sqrt(2.6 + 1)/2)
The NCP comes from the mean, N, and SD. See ?rt
Cheers,
Josh
On Fri, Mar 15, 2013 at 6:58 PM, ivo welch wrote:
> dear R experts:
>
> fitdistr suggests that a t with a mean of 1, an sd of 2, and 2.6
> degrees of freedom is a good fit for my data.
dear R experts:
fitdistr suggests that a t with a mean of 1, an sd of 2, and 2.6
degrees of freedom is a good fit for my data.
now I want to draw random samples from this distribution. Should I
draw from a uniform distribution and use the distribution function
itself for the transform, or is there a better way?
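Not part of the original thread, but a minimal sketch of the location-scale route, assuming fitdistr()'s "t" fit returned a location m, a scale s and the df (the names m, s and df below are placeholders for the fitted values):

m <- 1; s <- 2; df <- 2.6
x <- m + s * rt(1e5, df = df)   # shift and scale a central t draw
# note: s is the fitted scale parameter, not the sample standard deviation;
# for df = 2.6 the variance is s^2 * df/(df - 2)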
The documentation is very straightforward; I suggest you describe what you
want to do in more detail and what you don't understand about the functions
when you try to use them. You basically create an array with ?ff or
data.frame with ?ffdf and proceed from there - each page has examples. All
I've
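Not from the original reply, just a minimal sketch of the kind of call the ff help pages show (the object names and sizes here are made up):

library(ff)
x <- ff(0, length = 1e6, vmode = "double")            # file-backed numeric vector
d <- as.ffdf(data.frame(id = 1:10, y = rnorm(10)))     # file-backed data.frame
# read.csv.ffdf() reads a csv file in chunks directly into an ffdf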
Hi,
I have a dataset with clustered data (observations within groups) and would
like to make some descriptive plots.
Now, I am a little bit lost on how to present the dispersion of the data (what
kind of residuals to plot).
I could compute the standard error of the mean (SEM) ignoring the clust
Forgot to mention: You might find the nlmrt package helpful but
I have no experience with that (yet).
Peter Ehlers
On 2013-03-15 07:57, Shane McMahon wrote:
I have a question regarding robust nonlinear regression with nlrob. I
would like to place lower bounds on the parameters, but when I call
On 2013-03-15 07:57, Shane McMahon wrote:
I have a question regarding robust nonlinear regression with nlrob. I
would like to place lower bounds on the parameters, but when I call
nlrob with limits it returns the following error:
"Error in psi(resid/Scale, ...) : unused argument(s) (lower = list
Hi Simon
the equivalent in xtable is
library(xtable)
xtable(test)
% latex table generated in R 2.15.2 by xtable 1.7-0 package
% Sat Mar 16 08:14:01 2013
\begin{table}[ht]
\begin{center}
\begin{tabular}{rrr}
\hline
& A & B \\
\hline
A & 50.00 & 50.00 \\
B & 50.00 & 50.00 \\
\hline
\end{
On 03/16/2013 12:58 AM, Tammy Ma wrote:
I have the following dataframe:
Product predicted_MarketShare Predicted_MS_Percentage
A       2.827450e-02          2.8
B       4.716403e-06          0.0
C       1.741686e-01          17.4
Forgot to cc: to list
- Forwarded Message -
From: arun
To: Marc Schwartz
Cc: Barry King
Sent: Friday, March 15, 2013 3:41 PM
Subject: Re: [R] Help finding first value in a BY group
Thanks Marc for catching that.
You could also use ?ave()
#unsorted
PeriodSKUForeca
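arun's ave() code is cut off in the archive; a minimal sketch of one way that idea can work, using the PeriodSKUForecast data posted later in the thread (this is a guess at the approach, not his exact code):

# keep the first row of each SKU, whatever order the rows arrive in
firstidx <- with(PeriodSKUForecast,
                 ave(seq_along(SKU), SKU, FUN = min) == seq_along(SKU))
PeriodSKUForecast[firstidx, ]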
ddply() is very handy, but sometimes it seems like overkill
to select rows from a dataset by pulling into pieces, selecting
a row from each piece, then pasting the pieces back together
again. Information like row names can be lost.
The following uses a subscript to pull out the rows of interest.
W
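The code that followed is cut off in the archive; a minimal sketch of a subscript-based selection in that spirit, again assuming the PeriodSKUForecast data frame from this thread:

PeriodSKUForecast[!duplicated(PeriodSKUForecast$SKU), ]   # first row per SKU, row names preserved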
And as a footnote to the other replies, see
help('Math',package='base')
R's online help has a number of topics that are broader than that of a
single function, and that relatively new useRs might not have seen yet.
Examples include
?Distributions (compare with ?rnorm)
and
?Startup
-Don
I had the same problem.
Hello:
I'm working with a 2-dimensional table that looks sort of like test below.
I'm trying to produce latex code that will add dimension names for both the
rows and the columns.
In using the following code, latex chokes when I include collabel='Vote' but
it's fine without it.
The code below pr
Hi,
If you can dput() a small part of your dataset e.g.
dput(head(yourdataset,20)), it would be helpful.
Otherwise,
dat1<- data.frame(ID=rep(1:3,times=c(3,4,2)),col2=rnorm(9))
aggregate(.~ID,data=dat1,head,1)
# ID col2
#1 1 -0.0637622
#2 2 1.1782429
#3 3 0.4670021
A.K.
For the classes I teach at the University of Washington, we use:
http://www.compsoftbook.com. It's an automated scientific computing grading
system that supports R and includes MOSS checking. However, I would also be
very interested in alternative and possibly stand-alone MOSS-type tools that
can
On Mar 15, 2013, at 10:49 AM, Sebastian P. Luque wrote:
> Hi,
>
> It seems as if filled.contour can't be used along with layout(), or
> par(mfrow) or the like, since it sets the page in a very particular
> manner. Someone posted a workaround
> (http://r.789695.n4.nabble.com/several-Filled-conto
Rhelp is not here to fill in the gaps in your statistical education. You may
want to try CrossValidated, but even there you would be expected to do some
searching both of their website and in your textbooks.
On Mar 15, 2013, at 8:16 AM, Zia mel wrote:
> Hi
>
> If I get a p-value less than 0.05
Probably the first thing to do is supply some sample data.
See https://github.com/hadley/devtools/wiki/Reproducibility for some
suggestions.
However you may want to take a look at
http://stackoverflow.com/questions/13279582/select-only-the-first-rows-for-each-unique-value-of-a-column-in-r
parti
Hi,
There is a potential gotcha with the approach of using head(..., 1) in each of
the solutions that Arun has below, which is the assumption that the data is
sorted, as is the case in the example data. It seems reasonable to consider
that the real data at hand may not be entered in order or pr
Dear R-help members
I would be grateful if anyone could help me with the following problem: I would
like to combine two matrices (SCH_15 and SCH_16, they are attached) which have
a species presence/absence x sampling plot structure. The aim would be to have
in the end only one matrix which sh
Hi
If I get a p-value less than 0.05 does that mean there is a
significant relation between the 2 ranked lists? Sometimes I get a low
correlation such as 0.3 or even 0.2 and the p-value is so low, such
as 0.01, does that mean it is significant also? And would that be
interpreted as significa
I have a question regarding robust nonlinear regression with nlrob. I
would like to place lower bounds on the parameters, but when I call
nlrob with limits it returns the following error:
"Error in psi(resid/Scale, ...) : unused argument(s) (lower = list(Asym
= 1, mid = 1, scal = 1))"
After
Dear R community,
I am a neophyte and I cannot figure out how to accomplish keeping only the
first record for each ID in a data.frame that has assorted numbers of
records per ID.
I studied and found references to packages plyr and sql for R, and I fear
the documentation for those was over my head
Marc,
Thank you very much for your reply! It helps tremendously!
best,
Z
On Fri, Mar 15, 2013 at 2:37 AM, Marc Girondot wrote:
> On 14/03/13 18:15, Zhuoting Wu wrote:
>
> I have two follow-up questions:
>>
>> 1. If I want to reverse the heat.colors (i.e., from yellow to red instead
>> of
Dear R People:
I have the following situation. I have observations that are 128 samples
per second, which is fine. I want to fit them with ARIMA models, also fine.
My question is, please: when I do my forecasting, do I need to do anything
special to the "n.ahead" parm, please? Here is the ini
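The question is cut off in the archive; not an answer from the thread, but a minimal sketch of the n.ahead point (object name and ARIMA order are invented): since the series is sampled at 128 observations per second, n.ahead counts observation steps, not seconds:

fit <- arima(x, order = c(1, 0, 1))   # x: the 128-samples-per-second series
fc  <- predict(fit, n.ahead = 128)    # 128 steps ahead = one second ahead
fc$pred                               # point forecasts; fc$se gives standard errors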
Hi,
It seems as if filled.contour can't be used along with layout(), or
par(mfrow) or the like, since it sets the page in a very particular
manner. Someone posted a workaround
(http://r.789695.n4.nabble.com/several-Filled-contour-plots-on-the-same-device-td819040.html).
Has a better approach been
Hi,
Try:
data.frame(Forecast=with(PeriodSKUForecast,tapply(Forecast,SKU,head,1)))
# Forecast
#A1 99
#K2 207
#X4 63
#or
aggregate(Forecast~SKU,data=PeriodSKUForecast,head,1)
# SKU Forecast
#1 A1 99
#2 K2 207
#3 X4 63
#or
library(plyr)
ddply(PeriodSKUForecast, .(SKU), head, 1)
I have a large Excel file with SKU numbers (stock keeping units) and
forecasts which can be mimicked with the following:
Period <- c(1, 2, 3, 1, 2, 3, 4, 1, 2)
SKU <- c("A1","A1","A1","X4","X4","X4","X4","K2","K2")
Forecast <- c(99, 103, 128, 63, 69, 72, 75, 207, 201)
PeriodSKUForecast <- data.frame(Period, SKU, Forecast)
On Fri, Mar 15, 2013 at 12:30 PM, Bert Gunter wrote:
> Simple -- don't make a pie chart.
This is great advice. But if you (or your boss) insist on pie charts,
then you should provide us with a reproducible example that
illustrates your problem.
dat <- read.table(text="Productpredicted_Marke
On 15-03-2013, at 17:08, Ray Cheung wrote:
> Dear All,
>
> I've an array with some missing values (NA) in between. I want to remove
> that particular matrix if a missing value is detected. How can I do so?
> Thank you very much.
It is not clear what the dimension of your array is.
If your ar
Simple -- don't make a pie chart.
-- Bert
(Seriously -- this is an awful display. Consider, instead, a bar plot
plotting cumulative sums of percentages with products/bars ordered from
largest percentage to smallest; or plotting just the percentages in that
order, depending on which is more informative.)
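Not from Bert's message, just a rough sketch of the kind of display he describes, using the percentages Tammy posted elsewhere in the thread (only the three visible rows):

ms <- c(A = 2.8, B = 0.0, C = 17.4)   # Predicted_MS_Percentage values
ms <- sort(ms, decreasing = TRUE)
barplot(ms, ylab = "Predicted market share (%)")        # ordered bar plot
barplot(cumsum(ms), ylab = "Cumulative share (%)")      # or the cumulative version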
HI,
Try this:
set.seed(25)
arr1<- array(sample(c(1:40,NA),60,replace=TRUE),dim=c(5,4,3))
arr1[,,sapply(seq(dim(arr1)[3]),function(i) all(!is.na(arr1[,,i])))]
# [,1] [,2] [,3] [,4]
#[1,] 2 13 34 17
#[2,] 19 3 15 39
#[3,] 4 25 10 16
#[4,] 7 22 5 7
#[5,] 12
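arun's sapply() index does the job; an equivalent way to build the same logical index over the third margin (not from the thread) is:

arr1[, , apply(arr1, 3, function(m) !any(is.na(m)))]   # keep only slices with no NA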
Dear All,
I've an array with some missing values (NA) in between. I want to remove
that particular matrix if a missing value is detected. How can I do so?
Thank you very much.
Best regards,
Ray
This might get you started, but more data is needed to test this
# First create the data
Collars <- data.frame(collar = 1:5,
                      date = as.POSIXlt(c("01/01/2013", "02/01/2013", "04/01/2013",
                                          "04/01/2013", "07/01/2013"),
                                        format = "%m/%d/%Y"),  # four-digit years need %Y, not %y
                      data = letters[c(24:26, 1:2)])
Animals <- data.frame(anima
Hi all,
I need to do (normalized) 2-D cross-correlation in R. There is a convenient
function available in Matlab (see:
http://www.mathworks.de/de/help/images/ref/normxcorr2.html).
Is there anything comparable in R available?
Thanks,
Felix
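As far as I know there is no drop-in equivalent of normxcorr2 in base R; a rough brute-force sketch of what it computes (the correlation of a template with every patch of an image), with a made-up function name and no edge padding:

normxcorr2_naive <- function(template, image) {
  t0   <- template - mean(template)
  tden <- sqrt(sum(t0^2))
  nr <- nrow(image) - nrow(template) + 1
  nc <- ncol(image) - ncol(template) + 1
  out <- matrix(NA_real_, nr, nc)
  for (i in seq_len(nr)) {
    for (j in seq_len(nc)) {
      patch <- image[i:(i + nrow(template) - 1), j:(j + ncol(template) - 1)]
      p0 <- patch - mean(patch)
      out[i, j] <- sum(p0 * t0) / (sqrt(sum(p0^2)) * tden)
    }
  }
  out
}
# for large images an FFT-based version would be much faster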
Dear Owen,
What is your definition of "multivariate analysis"? Do you mean: A
meta-regression model with more than one predictor/moderator? In that case,
yes, metafor handles that. Usually, this is referred to as "multiple
regression" (as opposed to "simple regression" with a single predictor)
I was too quick on the Send button. Xtabs produces a table. If you want a
data.frame, it would be data.frame(xtabs(Count~Class+X, D)):
# Match John's summary table and generate Counts
> set.seed(42)
> Count <- sample(1:50, 23)
> Class <- c(rep(1, 4), rep(2, 7), 3, rep(1, 3), rep(3, 4), rep(3, 4))
Is
?expand.grid
what you are looking for?
Rgds,
Rainer
On Friday 15 March 2013 09:22:15 Amir wrote:
> Hi every one,
>
> I have two sets T1={c1,c2,..,cn} and T2={k1,k2,...,kn}.
> How can I find the sets as follow:
>
> (c1,k1), (c1,k2) ...(c1,kn) (c2,k1) (c2,k2) (c2,kn) ... (cn,kn)
>
At Fri, 15 Mar 2013 09:22:15 -0400,
Amir wrote:
> I have two sets T1={c1,c2,..,cn} and T2={k1,k2,...,kn}.
> How can I find the sets as follow:
>
> (c1,k1), (c1,k2) ...(c1,kn) (c2,k1) (c2,k2) (c2,kn) ... (cn,kn)
I think you are looking for expand.grid:
expand.grid(1:3, 10:13)
Var1 Var2
1
HI,
Try this:
T1<- paste0("c",1:5)
T2<- paste0("k",1:5)
as.vector(outer(T1,T2,paste,sep=","))
# [1] "c1,k1" "c2,k1" "c3,k1" "c4,k1" "c5,k1" "c1,k2" "c2,k2" "c3,k2" "c4,k2"
#[10] "c5,k2" "c1,k3" "c2,k3" "c3,k3" "c4,k3" "c5,k3" "c1,k4" "c2,k4" "c3,k4"
#[19] "c4,k4" "c5,k4" "c1,k5" "c2,k5" "c3,k5"
As Gabor indicates, using a start based on a good approximation is
usually helpful, and nls() will generally find solutions to problems
where there are such starts, hence the SelfStart methods. The Marquardt
approaches are more of a pit-bull approach to the original
specification. They grind aw
Hi,
You could try this for multiple intersect:
dt[Reduce(function(...) intersect(...),
          list(grep(par.fund, fund), grep(par.func, func), grep(par.obj, obj))),
   sum(amount), by = c('code', 'year')]
# code year V1
#1: 1001 2011 123528
#2: 1001 2012 97362
#3: 1002 2011 103811
#4: 1002 2012 97179
dt
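The dt definition is cut off above; assuming dt is a data.table with the columns used in the call (fund, func, obj, code, year, amount), the same filter can also be written without Reduce/intersect by combining logical vectors (not from the thread):

dt[grepl(par.fund, fund) & grepl(par.func, func) & grepl(par.obj, obj),
   sum(amount), by = c('code', 'year')]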
Wouldn't this do the same thing?
xtabs(Count~Class+X, D)
--
David L Carlson
Associate Professor of Anthropology
Texas A&M University
College Station, TX 77843-4352
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-bounce
Hello John,
I thought I attached the file. So here we go:
Class=c(1,1,1,1,1,1,1,2,2,2,2,2,2,2,3,3,
3,3,3,3,3,3,3)
X=c(0.1,0.1,0.1,0.1,0.2,0.2,0.2,0.1,0.1,
0.1,0.1,0.1,0.1,0.1,0.1,0.2,0.2,0.2,0.2,0.3,0.3,0.3,0.3)
Count=c(1,1,1,1,1,1,1,
Hi every one,
I have two sets T1={c1,c2,..,cn} and T2={k1,k2,...,kn}.
How can I find the sets as follow:
(c1,k1), (c1,k2) ...(c1,kn) (c2,k1) (c2,k2) (c2,kn) ... (cn,kn)
Thanks.
Amir
--
Amir Darehshoorzadeh
> My question: what does it mean asymmetry distribution could
> affect PCA ? and also outliers could affect factors?
It means what it says. PCA will be affected by asymmetry and outliers will
affect the principal components (sometimes loosely called 'factors'). In
particular an extreme outl
I don't see how the data in the three column table you present is enough to
produce the four column test. Should the first table actually show repeated
collar usage so that you can use the next incidence of the collar as the end
date, e.g.
1 01/01/2013
1 02/04/2013
and so on?
Some actual data
I think the way I set up my sample data without any explanation confused things
slightly. These data might make things clearer:
# Create fake data
df <- data.frame(code = c(rep(1001, 8), rep(1002, 8)),
year = rep(c(rep(2011, 4), rep(2012, 4)), 2),
fund = re
On Fri, Mar 15, 2013 at 9:45 AM, Prof J C Nash (U30A) wrote:
> Actually, it likely won't matter where you start. The Gauss-Newton direction
> is nearly always close to 90 degrees from the gradient, as seen by turning
> trace=TRUE in the package nlmrt function nlxb(), which does a safeguarded
> Mar
The first thing you are missing is the documentation -- try ?survfit.object.
fit <- survfit(Surv(time,status)~1,data)
fit$std.err will contain the standard error of the cumulative hazard or
-log(survival)
The standard error of the survival curve is approximately S(t) * std(hazard), by the delta method.
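A minimal sketch of pulling these out, using the lung data from the survival package as a stand-in for the poster's data:

library(survival)
fit <- survfit(Surv(time, status) ~ 1, data = lung)
se.cumhaz <- fit$std.err               # se of the cumulative hazard, -log S(t)
se.surv   <- fit$surv * fit$std.err    # approximate se of S(t) by the delta method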
I have the following dataframe:
Product predicted_MarketShare Predicted_MS_Percentage
A       2.827450e-02          2.8
B       4.716403e-06          0.0
C       1.741686e-01          17.4
D
Thanks a lot!
-Original Message-
From: John Kane [mailto:jrkrid...@inbox.com]
Sent: 15 March 2013 13:41
To: Blaser Nello; IOANNA; r-help@r-project.org
Subject: Re: [R] Data manipulation
Nice. That does look like it. IOANNA?
John Kane
Kingston ON Canada
> -Original Message-
>
I think this is more a question for something like Cross Validated but you
may well get a hint or two here. Unfortunately while I vaguely see what the
reviewer is getting at I certainly don't know enough to help.
John Kane
Kingston ON Canada
-Original Message-
Fro
Actually, it likely won't matter where you start. The Gauss-Newton
direction is nearly always close to 90 degrees from the gradient, as
seen by turning trace=TRUE in the package nlmrt function nlxb(), which
does a safeguarded Marquardt calculation. This can be used in place of
nls(), except you
Nice. That does look like it. IOANNA?
John Kane
Kingston ON Canada
> -Original Message-
> From: nbla...@ispm.unibe.ch
> Sent: Fri, 15 Mar 2013 14:27:03 +0100
> To: ii54...@msn.com, r-help@r-project.org
> Subject: Re: [R] Data manipulation
>
> Is this what you want to do?
>
> D2 <- expa
Thanks John for your reply.
The reviewer's comment:
asymmetric distribution could affect Principal
Component Analysis results, symmetry of distribution should be
tested. Authors should also indicate if outliers were observed and
consequently excluded because they could affect factors
My question
Hi IOANNA,
I got the data but it is missing a value in Count (length 22 vs length 23 in
the other two variables) so I stuck in an extra 1. I hope this is correct.
There also was an attachment called winmail.dat that appears to be some kind
of MicroSoft Mail note that is pure gibberish to me--I'm
Dear R users,
I am trying to use "gamm" from package "mgcv" to model results from a mesocosm
experiment. My model is of type
M1 <- gamm(Resp ~ s(Day, k=8) + s(Day, by=C, k=8) + Flow + offset(LogVol),
data=MyResp,
correlation = corAR1(form= ~ Day|Mesocosm),
f
Is this what you want to do?
D2 <- expand.grid(Class=unique(D$Class), X=unique(D$X))
D2 <- merge(D2, D, all=TRUE)
D2$Count[is.na(D2$Count)] <- 0
W <- aggregate(D2$Count, list(D2$Class, D2$X), "sum")
W
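Not in Nello's original message: the aggregate() call above names its output columns Group.1, Group.2 and x, so something like this may be worth adding:

names(W) <- c("Class", "X", "Count")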
Best,
Nello
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-bou
Dear Metafor users, I'm conducting a meta-analysis of prevalence of a particular
behaviour based on someone else's code. I've been labouring under the
impression that this:
summary(rma.1 <- rma(yi, vi, mods = cbind(approxmeanage, interviewmethodcode),
                     data = mal, method = "DL", knha = F, weighted = F,
                     intercept = T))
Well you can write it there but it won't do anything until read into some
software that can interpret it as a url. A csv file is just plain text.
John Kane
Kingston ON Canada
> -Original Message-
> From: bsmith030...@gmail.com
> Sent: Fri, 15 Mar 2013 07:53:02 -0400
> To: r-help@r-proj
Hi,
I am looking for a good tutorial on the ff package. Any suggestions?
Also, any other package would anyone recommend for dealing with data that
extends beyond the RAM would be greatly appreciated.
Thanks,
Fritz Zuhl
No idea of what sentence. R-help strips any html and only provides a text
message so all formatting has been lost. I think the question is not really an
R-help question but if you resubmit the post you need to show the sentence in
question in another way.
John Kane
Kingston ON Canada
>
dat1<- read.table(text="
Product Price Year_Month PE
A 100 201012 -2
A 98 201101 -3
A 97 201102 -2.5
B 110 201101 -1
B 100 201102 -2
B
On Fri, Mar 15, 2013 at 10:52 AM, Brian Smith wrote:
> Hi,
>
> I was wondering if it is possible to create a hyperlink in a csv file using
> R code and some package. For example, in the following code:
A csv file is a plain text file and by definition doesn't have
hyperlinks. If you want a hyperl
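Not part of Gabor's reply: if the goal is a link that is clickable once the csv is opened in Excel, one common workaround (an assumption about the use case, not something from the thread) is to write Excel's HYPERLINK formula into the cell:

links <- rep('=HYPERLINK("http://www.google.com","Click for Google")', 3)
write.table(links, "test.csv", sep = ",", row.names = FALSE, col.names = FALSE,
            qmethod = "double")
# Excel may evaluate the formula when the file is opened (version-dependent);
# other csv readers will just see the literal text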
What zero values? And are they actually zeros or are they NA's, that is, missing
values?
The code looks okay but without some sample data it is difficult to know
exactly what you are doing.
The easiest way to supply data is to use the dput() function. Example with
your file named "testfile":
Hello all,
I would appreciate your thoughts on a seemingly simple problem. I have a
database, where each row represents a single record. I want to aggregate this
database so I use the aggregate command :
D<-read.csv("C:\\Users\\test.csv")
attach(D)
by1<-factor(Class)
by2<-factor(X)
On 03/15/2013 01:40 AM, Gian Maria Niccolò Benucci wrote:
Hi again,
Thank you all for your support. I would love to have a graph in which two
variables are shown at the same time. For example, a histogram and a curve
would be the perfect choice. I tried to use twoord.plot() but I am not
sure I understand
On 15/03/2013 10:40, Jannis wrote:
Dear all,
thanks, Rolf and Jeff, for your replies. The command below runs under
Suse Linux. I guess, however, the phenomena I observed would happen
under other operating systems as well. The reason why I asked was that R
produced some error messages that did not really point me to the
On 15-03-2013, at 10:56, Tammy Ma wrote:
> Hi,
>
> I have data frame like this:
>
> Product Price Year_Month PE
> A       100   201012     -2
> A       98    201101     -3
> A       97    201102     -2.5
> B       110
Hi,
I was wondering if it is possible to create a hyperlink in a csv file using
R code and some package. For example, in the following code:
links <- cbind(rep('Click for Google',3),"google search address goes here")
## R Mailing list blocks if I put the actual web address here
write.table(links,
Try this:
> x <- read.table(text = "Product Price Year_Month PE
+ A 100 201012 -2
+ A 98 201101 -3
+ A 97 201102 -2.5
+ B 110 201101 -1
+ B 100
Hi,
I was wondering if it is possible to create a hyperlink in a csv file using
R code and some package. For example, in the following code:
links <- cbind(rep('Click for Google',3),"http://www.google.com")
write.table(links,'test.csv',sep=',',row.names=F,col.names=F)
the web address should be
Dear all,
thanks, Rolf and Jeff, for your replies. The command below runs under
Suse Linux. I guess, however, the phenomena I observed would happen
under other operating systems as well. The reason why I asked was that R
produced some error messages that did not really point me to the
direct
Could someone explain to me this reviewer's sentence below, in bold and underlined:
Authors should try to be more detailed in the description of analyses:
some of the details reported in the "Principal components analysis"
paragraph (Results) should be moved here.
Because a highly asymmetric distribution
Hi,
I have data frame like this:
Product Price Year_Month PE
A       100   201012     -2
A       98    201101     -3
A       97    201102     -2.5
B       110   201101     -1
B       100   201102
Dear R users,
The following issue has been already documented, but, if I am not
mistaken, not yet solved.
This issue appears while trying to plot arrows with "geom_segment"
(package ggplot2), with polar coordinates ("coord_polar"). The direction
of some arrows is wrong (red rectangle). Pleas
On 14/03/13 18:15, Zhuoting Wu wrote:
I have two follow-up questions:
1. If I want to reverse the heat.colors (i.e., from yellow to red
instead of red to yellow), is there a way to do that?
nbcol <- heat.colors(128)
nbcol <- nbcol[128:1]
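An equivalent one-liner, not from the original reply:
nbcol <- rev(heat.colors(128))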
2. I also created this interactive 3d scatter plo
> "eh" == elliott harrison
> on Fri, 15 Mar 2013 08:52:36 + writes:
eh> Hi,
eh> I am attempting to use phyper to test the significance
eh> of two overlapping lists. I keep getting a zero and
eh> wondered if that was determining non-significance of my
eh> overla
Thanks Michael I assumed as much but we know what that did.
Thanks again.
Elliott
-Original Message-
From: R. Michael Weylandt [mailto:michael.weyla...@gmail.com]
Sent: 15 March 2013 09:29
To: elliott harrison
Cc: r-help@r-project.org
Subject: Re: [R] phyper returning zero
On Fri, Mar
On Fri, Mar 15, 2013 at 8:52 AM, elliott harrison
wrote:
> Hi,
> I am attempting to use phyper to test the significance of two overlapping
> lists. I keep getting a zero and wondered if that was determining
> non-significance of my overlap or a p-value too small to calculate?
>
> overlap = 524
>
with(price.lookup, list(Price_Line)) is a list! Use
unique(unlist(with(price.lookup, list(Price_Line))))
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Barry King
Sent: Freitag, 15. März 2013 09:34
To: r-help@r-project.org
Subject
Thank you very much to you all, I'll play with the code and post my code once I
have tested it.
Cheers,
--
Gian
On 14 March 2013 16:27, John Kane wrote:
>
> The easiest way to supply data is to use the dput() function. Example
> with your file named "testfile":
> dput(testfile)
> Then copy
Hi,
I am attempting to use phyper to test the significance of two overlapping
lists. I keep getting a zero and wondered if that was determining
non-significance of my overlap or a p-value too small to calculate?
overlap = 524
lista = 2784
totalpop = 54675
listb = 1296
phyper(overlap, lista, tot
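The call is cut off in the archive. Not from the post itself, but one way to check whether the zero is just numerical underflow is to ask for the log of the upper-tail probability; the usual parameterisation for a list-overlap test is shown, treat it as a sketch:

phyper(overlap - 1, lista, totalpop - lista, listb,
       lower.tail = FALSE, log.p = TRUE)
# a very large negative number means the p-value is far below
# machine precision rather than exactly zero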
I need to extract labels from Excel input data to use as dimnames later on.
I can successfully read the Excel data into three matrices:
capacity <- read.csv("c:\\R\\data\\capacity.csv")
price.lookup <- read.csv("c:\\R\\data\\price lookup.csv")
sales <- read.csv("c:\\R\\data\\sales.csv")
The value