Hello,
Yesterday wasn't one of my days.
The main problem I'm seeing is that the KS statistic is meant for
continuous data, and you have count data assumed to follow a Poisson
distribution. This might explain the nonsense results you are getting
from ks.test.
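One discrete-data alternative (a sketch under assumed toy data, not the poster's actual setup; all names here are illustrative) is a chi-squared goodness-of-fit test against the fitted Poisson probabilities:

```r
## Hedged sketch: chi-squared GOF test of count data against a Poisson fit.
set.seed(1)
x <- rpois(200, 5)                    # stand-in count data
lam <- mean(x)                        # ML estimate of the Poisson mean
obs <- tabulate(x + 1, nbins = max(x) + 1)   # frequencies of 0, 1, ..., max(x)
p <- dpois(0:max(x), lam)
p[length(p)] <- p[length(p)] + ppois(max(x), lam, lower.tail = FALSE)  # fold tail in
chisq.test(obs, p = p)   # caveat: df not adjusted for having estimated lam
```

chisq.test may warn about small expected counts; in practice sparse bins would be pooled first.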
Have you considered a
Hello and thanks for your patience.
As far as I understand, the paper by Marsaglia and colleagues refers to CDF
samples (i.e. from a hypothetical distribution, e.g. a Poisson), while I have
an ECDF sample (i.e. (pseudo-)observed data, e.g. rpois(1000, 500)). In my
study, I am actually
Hello,
Inline.
At 20:09 on 05/09/19, Boo G. wrote:
Hello again.
I have tried this before but I see two problems:
1) According to the documentation I could read (including the ks.test
code), the KS statistic would be max(abs(x - y)), and if you plot this
for very low sample sizes you can
Hello again.
I have tried this before but I see two problems:
1) According to the documentation I could read (including the ks.test code),
the KS statistic would be max(abs(x - y)), and if you plot this for very low
sample sizes you can actually see that this makes sense. The results of
Hello,
I'm sorry, but apparently I missed the point of your problem.
Please do not take my previous answer seriously.
But you can use ks.test, just in a different way than what I wrote
previously.
Corrected code:
#simulation
for (i in 1:1000) {
#sample from the reference distribution
Thanks for your reply, Rui.
I don’t think that I can use ks.test directly because I have a weighted
sample (see m_2 <- m_1[(sample(nrow(m_1), size=i, prob=p_1, replace=F)),]) and
I want to account for that. That’s why I am trying to compute everything
manually.
Also, if you look at the
Hello,
I don't have the algorithms at hand but the KS statistic calculation is
more complicated than your max/abs difference.
Anyway, why not use ks.test? It's not that difficult:
set.seed(1234)
#reference distribution
d_1 <- sort(rpois(1000, 500))
p_1 <- d_1/sum(d_1)
m_1 <- data.frame(d_1,
Hello,
I am trying to perform a Kolmogorov–Smirnov test to assess the difference
between a distribution and samples of different sizes drawn with probability
proportional to size. I managed to compute the Kolmogorov–Smirnov distance
but I am lost with the p-value. I have looked into the ks.test function
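For the two-sample comparison this thread converges on, a minimal ks.test sketch (setup echoing Rui's later example, not the poster's actual code) could be:

```r
## Hedged sketch: two-sample KS test between a Poisson reference draw and
## a probability-proportional-to-size subsample of it.
set.seed(1234)
d_1 <- sort(rpois(1000, 500))      # reference sample
p_1 <- d_1 / sum(d_1)              # weights proportional to size
d_2 <- sample(d_1, size = 100, prob = p_1, replace = FALSE)
ks.test(d_2, d_1)   # D statistic and p-value; expect a ties warning for counts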
From: Heinz Tuechler <tuech...@gmx.at>
Subject: Re: [R] p values from GLM
Date: 3 April 2016 11:00:50 NZST
To: Bert Gunter <bgunter.4...@gmail.com>, Du
> On 03 Apr 2016, at 01:00 , Heinz Tuechler wrote:
>
>
> Bert Gunter wrote on 01.04.2016 23:46:
>> ... of course, whether one **should** get them is questionable...
>>
>> http://www.nature.com/news/statisticians-issue-warning-over-misuse-of-p-values-1.19503#/ref-link-1
>>
>
Bert Gunter wrote on 01.04.2016 23:46:
... of course, whether one **should** get them is questionable...
http://www.nature.com/news/statisticians-issue-warning-over-misuse-of-p-values-1.19503#/ref-link-1
This paper repeats the commonplace statement that a small p-value does
not necessarily
Maybe it's not the article itself for sale. Sometimes a company will
charge a fee to have access to its knowledge base. Not because it owns all
of the content, but because the articles, publications, etc have been
tracked down and centralized. This is also the whole idea behind paying a
company
On 4/2/2016 11:07 AM, David Winsemius wrote:
On Apr 1, 2016, at 5:01 PM, Duncan Murdoch wrote:
On 01/04/2016 6:46 PM, Bert Gunter wrote:
... of course, whether one **should** get them is questionable...
They're just statistics. How could it hurt to look at them?
> On Apr 1, 2016, at 5:01 PM, Duncan Murdoch wrote:
>
> On 01/04/2016 6:46 PM, Bert Gunter wrote:
>> ... of course, whether one **should** get them is questionable...
>
> They're just statistics. How could it hurt to look at them?
Like Rolf, I thought that this
Because they are Medusa statistics?
--
Sent from my phone. Please excuse my brevity.
On April 1, 2016 5:01:12 PM PDT, Duncan Murdoch
wrote:
>On 01/04/2016 6:46 PM, Bert Gunter wrote:
>> ... of course, whether one **should** get them is questionable...
>
>They're just
On 01/04/2016 6:46 PM, Bert Gunter wrote:
... of course, whether one **should** get them is questionable...
They're just statistics. How could it hurt to look at them?
Duncan Murdoch
http://www.nature.com/news/statisticians-issue-warning-over-misuse-of-p-values-1.19503#/ref-link-1
... of course, whether one **should** get them is questionable...
http://www.nature.com/news/statisticians-issue-warning-over-misuse-of-p-values-1.19503#/ref-link-1
Cheers,
Bert
Bert Gunter
"The trouble with having an open mind is that people keep coming along
and sticking things into it."
On 01/04/2016 6:14 PM, John Sorkin wrote:
> How can I get the p-values from a glm? I want to get the p-values so
I can add them to a custom report
>
>
> fitwean <- glm(data[, "JWean"] ~ data[, "Group"], data = data,
family = binomial(link = "logit"))
> summary(fitwean) # This lists the
How can I get the p-values from a glm? I want to get the p-values so I can add
them to a custom report
fitwean <- glm(data[, "JWean"] ~ data[, "Group"], data = data,
family = binomial(link = "logit"))
summary(fitwean) # This lists the coefficients, SEs, z and p
values, but I can't
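A minimal sketch of the usual answer (toy data standing in for the poster's data frame):

```r
## Hedged sketch: extract Pr(>|z|) from a binomial glm's coefficient table.
set.seed(1)
dat <- data.frame(JWean = rbinom(50, 1, 0.5), Group = gl(2, 25))
fitwean <- glm(JWean ~ Group, data = dat, family = binomial(link = "logit"))
summary(fitwean)$coefficients[, "Pr(>|z|)"]   # or [, 4]: named vector of p-values
```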
Hello everyone, My question is about the rptR package. I am using the
rpt.adj function (for adjusted repeatability).
My model is as follows:
rpt.adj(brifield ~ 1 +
(1|ring) + (1|year), ring, data = bellyfield, datatype = "Gaussian")
where brifield is the measurement for which I want to
Dear List,
This is one of those questions that are easier to explain with an example.
I also apologise as, strictly speaking, this is not an R question but a more
general methodological question that I would like to solve in R.
Suppose we have the following data
#Generate artificial data#
Dear Tal:
Thank you for your help.
That's what I ran:
install.packages("corpcor")
require(corpcor)
correlations=cor(mydata)
pcorrrel = cor2pcor(correlations); pcorrrel
2013/8/7 Tal Galili tal.gal...@gmail.com
A short self contained code would help us help you.
You can try using str on
I am not an expert on shrinkage estimators of partial correlations
(such as the one in corpcor), but my sense is that it is difficult to
provide a good estimate of a p-value. You could try to email the
authors of the package and ask them, but this may be more of a
statistics rather than R
Dear:
I needed to calculate partial correlations and used the package corpcor for
that purpose.
The output does not provide p-values and I was unable to find information or
posts on how to get them.
Can someone help me?
Thanks.
--
Dr. Demetrio Luis Guadagnin
Conservação e Manejo de Vida
A short self contained code would help us help you.
You can try using str on the output of the command you are using, and try
to understand where the p.value is located.
Tal
Dear All,
I performed thousands of tests and obtained p-values.
I then ran a two-sided KS test of the p-values against the uniform
distribution, and the result indicated uniformity.
So does that mean my model is wrong? Because I expected more small
p-values near 0.
This is a preliminary step before correcting the
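The uniformity check itself can be sketched like this (simulated p-values standing in for the poster's):

```r
## Hedged sketch: test whether a vector of p-values is Uniform(0, 1).
set.seed(1)
pvals <- runif(5000)              # stand-in for the observed p-values
ks.test(pvals, "punif", 0, 1)     # two-sided KS test against the uniform CDF
hist(pvals, breaks = 50)          # under a true null, the histogram is flat
```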
Your question has effectively nothing to do with R, so is off topic
here. Post on a statistics or bioinformatics site like
stats.stackexchange.com .
I'll just mention that this is a ubiquitous issue in, e.g., gene
testing (p > n), is controversial, philosophical, and complex. You may
have a good bit
On Jun 19, 2013, at 15:02 , R. Michael Weylandt wrote:
On Wed, Jun 19, 2013 at 10:27 AM, meng laomen...@163.com wrote:
Hi all:
I met a question about lmer.
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
summary(fm1)
...
Fixed effects:
Estimate Std. Error t
Hi all:
I met a question about lmer.
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
summary(fm1)
...
Fixed effects:
            Estimate Std. Error t value
(Intercept)  251.405      6.825   36.84
Days          10.467      1.546    6.77
...
My question:
Why p values of (Intercept)
Try
library(lmerTest)
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
summary(fm1)
Linear mixed model fit by REML
Formula: Reaction ~ Days + (Days | Subject)
Data: sleepstudy
  AIC  BIC logLik deviance REMLdev
 1756 1775 -871.8     1752    1744
Random effects:
Groups Name
On Wed, Jun 19, 2013 at 10:27 AM, meng laomen...@163.com wrote:
Hi all:
I met a question about lmer.
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
summary(fm1)
...
Fixed effects:
Estimate Std. Error t value
(Intercept) 251.405 6.825 36.84
Days
AIC is a different story. To do hypothesis tests on terms, use anova()
or dropterm() (as done in the book):
library(MASS)
example(polr)
dropterm(house.plr, test = "Chisq")
Single term deletions
Model:
Sat ~ Infl + Type + Cont
       Df    AIC    LRT Pr(Chi)
<none>    3495.1
Infl    2 3599.4
On 28/05/2013 06:54, David Winsemius wrote:
On May 27, 2013, at 7:59 PM, meng wrote:
Hi all:
As to the polr {MASS} function, how to find out p values of every
parameter?
From the example of R help:
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data =
housing)
On May 27, 2013, at 11:05 PM, Prof Brian Ripley wrote:
On 28/05/2013 06:54, David Winsemius wrote:
On May 27, 2013, at 7:59 PM, meng wrote:
Hi all:
As to the polr {MASS} function, how to find out p values of every
parameter?
From the example of R help:
house.plr <- polr(Sat ~ Infl
How to get p values from the result then?
At 2013-05-28 13:54:27,David Winsemius dwinsem...@comcast.net wrote:
On May 27, 2013, at 7:59 PM, meng wrote:
Hi all:
As to the polr {MASS} function, how to find out p values of every
parameter?
From the example of R help:
house.plr <-
Hi all:
As to the polr {MASS} function, how to find out p values of every parameter?
From the example of R help:
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
summary(house.plr)
How to find out the p values of house.plr?
Many thanks.
Best.
On May 27, 2013, at 7:59 PM, meng wrote:
Hi all:
As to the polr {MASS} function, how to find out p values of every
parameter?
From the example of R help:
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data =
housing)
summary(house.plr)
How to find out the p values of
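One common recipe for this recurring polr question (a Wald-style normal approximation; treating the t values as z statistics is an assumption, not an exact answer) builds p-values from the coefficient table:

```r
## Hedged sketch: approximate two-sided p-values for polr coefficients.
library(MASS)
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
ctable <- coef(summary(house.plr))                    # estimates, SEs, t values
p <- 2 * pnorm(abs(ctable[, "t value"]), lower.tail = FALSE)
cbind(ctable, "p value" = p)
```

For hypothesis tests on whole terms, the dropterm() approach quoted above is the safer route.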
I have used the correlation analysis (pearson) in the agricolae package to
analyse my data and got unexpectedly low p-values (therefore making many
more highly significant correlations in my data than I had expected). I am
wondering if the p-values given should be subtracted from 1 to give the
Date: Sat, 15 Sep 2012 02:52:38 -0700 (PDT)
To: r-help@r-project.org
Subject: [R] p-values in agricolae pearson correlation
I have used the correlation analysis (pearson) in the agricolae package
to
analyse my data and got unexpectedly low p-values (therefore making many
more highly significant
Hi everyone!
Can anyone tell me how to obtain p.values from a linear model?
Example:
mod1 <- lm(dV ~ iV1 + iV2)
Now, I can get the coefficients with mod1$coef
But how can I get p-values? ($p.values seems to work with cor.test() only)
Thank you!
[[alternative HTML version deleted]]
Hello,
What do you want to do with these p-values?
Best Regards
On 12/06/14 19:44, David Studer wrote:
Hi everyone!
Can anyone tell me how to obtain p.values from a linear model?
Example:
mod1 <- lm(dV ~ iV1 + iV2)
Now, I can get the coefficients with mod1$coef
But how can I get p-values?
Dear David,
Try
summary(mod1)$coef[,4]
Best
Ozgur
--
View this message in context:
http://r.789695.n4.nabble.com/p-values-from-lm-tp4633357p4633361.html
Sent from the R help mailing list archive at Nabble.com.
__
R-help@r-project.org mailing list
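That one-liner can be checked on a tiny reproducible model (simulated data standing in for the poster's variables):

```r
## Hedged sketch: p-values live in column 4 (Pr(>|t|)) of the lm summary table.
set.seed(1)
dV <- rnorm(30); iV1 <- rnorm(30); iV2 <- rnorm(30)
mod1 <- lm(dV ~ iV1 + iV2)
summary(mod1)$coef[, 4]                  # same as [, "Pr(>|t|)"]
```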
Hi David,
summary(res)$coefficients[, "Pr(>|z|)"] or summary(res)$coefficients[, 4]
M
Regards
On 14/06/12 12:44, David Studer wrote:
Hi everyone!
Can anyone tell me how to obtain p.values from a linear model?
Example:
mod1 <- lm(dV ~ iV1 + iV2)
Now, I can get the coefficients with mod1$coef
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of arunkumar
Sent: Thursday, December 22, 2011 9:13 PM
To: r-help@r-project.org
Subject: [R] p values in lmer
hi
How can I get p-values for the lmer function other than with pvals.fnc(),
since it takes a long time to execute?
hi
How can I get p-values for the lmer function other than with pvals.fnc(),
since it takes a long time to execute?
-
Thanks in Advance
Arun
--
View this message in context:
http://r.789695.n4.nabble.com/p-values-in-lmer-tp4227434p4227434.html
Sent from the R help mailing list archive at
On Dec 22, 2011, at 11:13 PM, arunkumar wrote:
hi
How can I get p-values for the lmer function other than with pvals.fnc(),
since it takes
a long time to execute?
http://psy-ed.wikidot.com/glmm
http://glmm.wikidot.com/faq
--
David Winsemius, MD
West Hartford, CT
1) The p values in the printout are a Wald test. The Wald, score, and
likelihood ratio tests are asymptotically equivalent, but may differ
somewhat in finite samples. (The Wald and score are both Taylor series
approximations to the LR). If you want to do an LR test, fit the two
models and use
Hi,
I'm interested in building a Cox PH model for survival modeling, using 2
covariates (x1 and x2). x1 represents a 'baseline' covariate, whereas x2
represents a 'new' covariate, and my goal is to figure out where x2 adds
significant predictive information over x1.
Ideally, I could get a
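The likelihood-ratio route recommended in the reply above can be sketched with the survival package's lung data standing in for the poster's x1/x2:

```r
## Hedged sketch: LR test of whether a new covariate adds predictive
## information to a baseline Cox model.
library(survival)
d <- na.omit(lung[, c("time", "status", "age", "ph.ecog")])  # common rows for both fits
fit1 <- coxph(Surv(time, status) ~ age, data = d)            # baseline (x1)
fit2 <- coxph(Surv(time, status) ~ age + ph.ecog, data = d)  # adds x2
anova(fit1, fit2)   # LR chi-square and p-value for the added covariate
```

Subsetting to complete cases first matters: the two models must be fit to the same observations for the LR test to be valid.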
To: r-h...@stat.math.ethz.ch
Subject: Re: [R] p values greater than 1 from lme4
RTSlider rob.t.slider at gmail.com writes:
Hello,
I'm running linear regressions using the following script where I have
separated out species using the IDtotsInLn identifier
x <- read.csv('tbl02TOTSInLn_ENV.csv
-Original Message- From: r-help-boun...@r-project.org
[mailto:r-help-boun...@r-project.org] On Behalf Of Ben Bolker Sent:
06 September 2011 02:58 To: r-h...@stat.math.ethz.ch Subject: Re:
[R] p values greater than 1 from lme4
RTSlider rob.t.slider at gmail.com writes:
Hello
Hello,
I'm running linear regressions using the following script where I have
separated out species using the IDtotsInLn identifier
x <- read.csv('tbl02TOTSInLn_ENV.csv', header=T)
x
attach (x)
library(lme4)
RTSlider rob.t.slider at gmail.com writes:
Hello,
I'm running linear regressions using the following script where I have
separated out species using the IDtotsInLn identifier
x <- read.csv('tbl02TOTSInLn_ENV.csv', header=T)
x
attach (x)
Hi Ben,
I've attached the data and Tinn-R file I've been using to run it,
thank you for the fast response. Please let me know if you see
something odd, I have no idea where to begin looking
-Rob
On Mon, Sep 5, 2011 at 5:30 PM, bbolker [via R]
ml-node+3792187-1531262488-213...@n4.nabble.com wrote:
Hi ,
I know this question has been asked twice in the past but to my knowledge,
it still hasn't been solved.
I am doing a zero-inflated binomial model using the VGAM package. I need to
obtain p-values for the t values in the vglm output. Code is as follows
On Oct 15, 2010, at 9:21 AM, Öhagen Patrik wrote:
Dear List,
In each iteration of a simulation study, I would like to save the
p-value generated by coxph. I fail to see how to address the p-value.
Do I have to calculate it myself from the Wald test statistic?
No. Look at
Dear List,
In each iteration of a simulation study, I would like to save the p-value
generated by coxph. I fail to see how to address the p-value. Do I have to
calculate it myself from the Wald test statistic?
Cheers, Paddy
On Oct 15, 2010, at 9:21 AM, Öhagen Patrik wrote:
Dear List,
In each iteration of a simulation study, I would like to save the
p-value generated by coxph. I fail to see how to address the p-value.
Do I have to calculate it myself from the Wald test statistic?
No. And the most important
Hi,
if you look at the first image (Image1) you see that there are 2 main
clusters 7 and 8
I wanted to use pvclust to calculate a p-value indicating whether these
clusters are due to chance
or are statistically significant. Unfortunately pvclust does not provide a
p-value for the first
branch (7 and 8).
So
Hi,
if you look at the first image (Image1) you see that there are 2 main
clusters 7 and 8
I wanted to use pvclust to calculate a p-value indicating whether these
clusters are due to chance
or are statistically significant. Unfortunately pvclust does not provide a
p-value for the first
branch (7 and 8).
So
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of syrvn
Sent: Tuesday, August 10, 2010 6:45 AM
To: r-help@r-project.org
Subject: [R] p-values with pvclust
Hi,
if you look at the first image (Image1) you see that there are 2 main
clusters 7 and 8
I wanted
Id  cat1  location  item_values  p-values  sequence
a1111 3002737 0.196504377 0.011
a1121 3017821 0.196504377 0.052
a1131 3027730 0.196504377 0.023
a1141 3036220 0.196504377 0.044
a1151 3053984
Hi,
I am new to clustering and was wondering why pvclust using maximum
as distance measure nearly always results in p-values above 95%.
I wrote an example programme which demonstrates this effect. I
uploaded a PDF showing the results
Here is the code which produces the PDF file:
Hi,
I am new to clustering and was wondering why pvclust using maximum
as distance measure nearly always results in p-values above 95%.
I wrote an example programme which demonstrates this effect. I
uploaded a PDF showing the results
Here is the code which produces the PDF file:
Sent: Wed, May 19, 2010 3:31:26 PM
Subject: Re: [R] p-values < 2.2e-16 not reported
Dear all,
thanks for your feedback so far. With the help of a colleague I
think I found the solution to my problem:
pt(10,100,lower=FALSE)
[1] 4.950844e-17
IS *NOT* EQUAL TO
1-pt(10,100,lower=TRUE)
[1] 0
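The general pattern behind this fix: ask the distribution function for the upper tail directly (and, for still smaller tails, for its log) instead of subtracting from 1:

```r
## Restating the thread's finding, plus the log-scale extension.
pt(10, 100, lower.tail = FALSE)   # accurate tiny upper-tail area
1 - pt(10, 100)                   # underflows to 0: 1 - (1 - 5e-17) is lost
pt(40, 100, lower.tail = FALSE, log.p = TRUE)  # log of the p-value, no underflow
```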
On 2010-05-20 08:52, Shi, Tao wrote:
Will,
I'm wondering if you have any
insights after looking at the cor.test source code. It seems to be fine to me, as the p
value is either calculated by your first method or a
.C code.
...Tao
Dear Tao,
I think the described problem of p-values
Dear all,
how can I get the exact p-value of a statistical test like cor.test() if
the p-value is below the default machine epsilon value of
.Machine$double.eps = 2.220446e-16?
At the moment smaller p-values are reported as p-value < 2.2e-16.
.Machine$double.eps <- 1E-100 does not solve this
On Wed, May 19, 2010 at 10:53 AM, Will Eagle will.ea...@gmx.net wrote:
Dear all,
how can I get the exact p-value of a statistical test like cor.test() if the
p-value is below the default machine epsilon value of .Machine$double.eps =
2.220446e-16?
At the moment smaller p-values are
AM
To: r-help@r-project.org
Subject: [R] p-values < 2.2e-16 not reported
Dear all,
how can I get the exact p-value of a statistical test like cor.test() if
the p-value is below the default machine epsilon value of
.Machine$double.eps = 2.220446e-16?
At the moment smaller p-values are reported
Dear all,
thanks for your feedback so far. With the help of a colleague I think I
found the solution to my problem:
pt(10,100,lower=FALSE)
[1] 4.950844e-17
IS *NOT* EQUAL TO
1-pt(10,100,lower=TRUE)
[1] 0
This means that R is capable of providing p-values < 2.2e-16; however,
if the value
, 2010 3:50 PM
To: Greg Snow; 'Bak Kuss'; murdoch.dun...@gmail.com;
jorism...@gmail.com
Cc: R-help@r-project.org
Subject: RE: [R] P values
Inline below.
-- Bert
Bert Gunter
Genentech Nonclinical Statistics
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help
Subject: Re: [R] P values
Thank you for your replies.
As I said (wrote) before, 'I am no statistician'.
But I think I know what Random Variables are (not).
Random variables are not random, neither are they variable.
[It sounds better in French: Une variable aléatoire n'est pas variable,
et
To: r-help@r-project.org
Subject: Re: [R] P values
Bak,
...
Small p-values indicate a hypothesis and data that are not very
consistent and for small enough p-values we would rather disbelieve the
hypothesis (though it may still be theoretically possible).
This is false.
It is only true when
-project.org] On
Behalf Of Greg Snow
Sent: Tuesday, May 11, 2010 2:37 PM
To: Bak Kuss; murdoch.dun...@gmail.com; jorism...@gmail.com
Cc: R-help@r-project.org
Subject: Re: [R] P values
Bak,
...
Small p-values indicate a hypothesis and data that are not very
consistent and for small enough p-values
Thank you for your replies.
As I said (wrote) before, 'I am no statistician'.
But I think I know what Random Variables are (not).
Random variables are not random, neither are they variable.
[It sounds better in French: Une variable aléatoire n'est pas variable,
et n'a rien d'aléatoire.]
See
Time to rescue Random Variables before they drown!
On 09-May-10 16:53:02, Bak Kuss wrote:
Thank you for your replies.
As I said (wrote) before, 'I am no statistician'.
But I think I know what Random Variables are (not).
Random variables are not random, neither are they variable.
[It
On Sun, May 9, 2010 at 6:53 PM, Bak Kuss bakk...@gmail.com wrote:
Thank you for your replies.
As I said (wrote) before, 'I am no statistician'.
But I think I know what Random Variables are (not).
Random variables are not random, neither are they variable.
[It sounds better in French: Une
Robert A LaBudde wrote:
At 01:40 PM 5/6/2010, Joris Meys wrote:
On Thu, May 6, 2010 at 6:09 PM, Greg Snow greg.s...@imail.org wrote:
Because if you use the sample standard deviation then it is a t test, not a
z test.
I'm doubting that seriously...
You calculate normalized Z-values by
On Sat, May 8, 2010 at 7:02 PM, Bak Kuss bakk...@gmail.com wrote:
Just wondering.
The smallest the p-value, the closer to 'reality' (the more accurate)
the model is supposed to (not) be (?).
How realistic is it to be that (un-) real?
That's a common misconception. A p-value expresses
On 08/05/2010 9:14 PM, Joris Meys wrote:
On Sat, May 8, 2010 at 7:02 PM, Bak Kuss bakk...@gmail.com wrote:
Just wondering.
The smaller the p-value, the closer to 'reality' (the more accurate)
the model is supposed to (not) be (?).
How realistic is it to be that (un-) real?
On May 8, 2010, at 9:38 PM, Duncan Murdoch wrote:
On 08/05/2010 9:14 PM, Joris Meys wrote:
On Sat, May 8, 2010 at 7:02 PM, Bak Kuss bakk...@gmail.com wrote:
Just wondering.
The smaller the p-value, the closer to 'reality' (the more
accurate)
the model is supposed to (not) be (?).
Dear R-Users,
This list is observed by many great statisticians and non-statisticians.
I just want to add this valuable link to this great discussion.
http://www.stat.duke.edu/~berger/p-values.html
Thanks and Best Regards,
S.
On Sat, May 8, 2010 at
Robert A LaBudde wrote:
At 01:40 PM 5/6/2010, Joris Meys wrote:
On Thu, May 6, 2010 at 6:09 PM, Greg Snow greg.s...@imail.org wrote:
Because if you use the sample standard deviation then it is a t test not a
z test.
I'm doubting that seriously...
You calculate normalized
At 07:10 AM 5/7/2010, Duncan Murdoch wrote:
Robert A LaBudde wrote:
At 01:40 PM 5/6/2010, Joris Meys wrote:
On Thu, May 6, 2010 at 6:09 PM, Greg Snow greg.s...@imail.org wrote:
Because if you use the sample standard deviation then it is a t test not a
z test.
I'm doubting that
561-352-9699
http://www.StatisticalEngineering.com
-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Robert A LaBudde
Sent: Friday, May 07, 2010 12:29 PM
To: Duncan Murdoch
Cc: r-help@r-project.org; level
Subject: Re: [R] P values
]
On
Behalf Of Robert A LaBudde
Sent: Friday, May 07, 2010 12:29 PM
To: Duncan Murdoch
Cc: r-help@r-project.org; level
Subject: Re: [R] P values
At 07:10 AM 5/7/2010, Duncan Murdoch wrote:
Robert A LaBudde wrote:
At 01:40 PM 5/6/2010, Joris Meys wrote:
On Thu, May 6, 2010 at 6:09 PM
Robert A LaBudde wrote:
At 07:10 AM 5/7/2010, Duncan Murdoch wrote:
Robert A LaBudde wrote:
At 01:40 PM 5/6/2010, Joris Meys wrote:
On Thu, May 6, 2010 at 6:09 PM, Greg Snow greg.s...@imail.org wrote:
Because if you use the sample standard deviation then it is a t
Why
s = 1
##
s = sd(A) #?
-----Original Message-----
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of level
Sent: Wednesday, 5 May 2010 22:41
To: r-help@r-project.org
Subject: [R] P values
How do you calculate p-values for a z test?
so far
-
project.org] On Behalf Of Thomas Roth
Sent: Thursday, May 06, 2010 1:58 AM
To: 'level'; r-help@r-project.org
Subject: Re: [R] P values
Why
s = 1
##
s = sd(A) #?
-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
Im
Auftrag von
-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On Behalf Of level
Sent: Wednesday, May 05, 2010 2:41 PM
To: r-help@r-project.org
Subject: [R] P values
How do you calculate p-values for a z test?
So far I've done this:
A = read.table("cw3_data.txt")
xbar
To: 'level'; r-help@r-project.org
Subject: Re: [R] P values
Why
s = 1
##
s = sd(A) #?
-Ursprüngliche Nachricht-
Von: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
Im
Auftrag von level
Gesendet: Mittwoch, 5. Mai 2010 22:41
An: r-help@r
Sent: Thursday, May 06, 2010 1:58 AM
To: 'level'; r-help@r-project.org
Subject: Re: [R] P values
Why
s = 1
##
s = sd(A) #?
-----Original Message-----
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On
Behalf Of level
Sent: Wednesday
. Snow Ph.D.
Statistical Data Center
Intermountain Healthcare
greg.s...@imail.org
801.408.8111
From: Joris Meys [mailto:jorism...@gmail.com]
Sent: Thursday, May 06, 2010 12:26 PM
To: Greg Snow
Cc: Thomas Roth; level; r-help@r-project.org
Subject: Re: [R] P values
Correction, I misunderstood you
At 01:40 PM 5/6/2010, Joris Meys wrote:
On Thu, May 6, 2010 at 6:09 PM, Greg Snow greg.s...@imail.org wrote:
Because if you use the sample standard deviation then it is a t test not a
z test.
I'm doubting that seriously...
You calculate normalized Z-values by substracting the sample mean
How do you calculate p-values for a z test?
So far I've done this:
A = read.table("cw3_data.txt")
xbar = mean(A)
s = 1
n = 20
mu = 0
z.test = (xbar-mu)/(s/sqrt(n))
p.value = pnorm(abs(z.test))
error = qnorm(0.99)*s/sqrt(n)
left = xbar - error
right = xbar + error
and have got values off of
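A hedged correction sketch (simulated data standing in for cw3_data.txt; note the replies' point that using sd(A) makes this a t test rather than a z test):

```r
## Two-sided p-value for a z statistic: the area in BOTH tails.
set.seed(1)
A <- rnorm(20, mean = 0.5)          # stand-in for the data file
xbar <- mean(A); s <- sd(A); n <- length(A); mu <- 0
z <- (xbar - mu) / (s / sqrt(n))
p.value <- 2 * pnorm(-abs(z))       # not pnorm(abs(z)), which returns 1 - p/2
p.value
```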
Does anyone know how p-values can be generated if tsboot (stationary
bootstrap) for time series is performed?
That would be of great help. Thanks a lot for your comments.
Markus
On 01.03.2010 09:59, Markus Troendle wrote:
Does anyone know how p-values can be generated if tsboot (stationary
bootstrap) for time series is performed?
Well, under H0 we could generate n p-values simply by
runif(n, 0, 1)
but if you want to apply some specific test, it might make sense to
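One way to make a bootstrap p-value concrete (a sketch with an AR(1) series and the sample mean as an assumed statistic; this is one standard recipe, centering the replicates to mimic the null, not the only valid one):

```r
## Hedged sketch: bootstrap p-value from tsboot (stationary bootstrap).
library(boot)
set.seed(1)
x <- arima.sim(list(ar = 0.5), n = 200)
b <- tsboot(x, function(ts) mean(ts), R = 999, l = 20, sim = "geom")
## Center the replicates to approximate the null (mean == 0) distribution,
## then count how often they are at least as extreme as the observed statistic.
p <- (1 + sum(abs(b$t - mean(b$t)) >= abs(b$t0))) / (b$R + 1)
p
```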
Hi,
I'm a beginner using R and I'm modeling a time series with ARIMA.
I'm looking for a way to determine the p-values of the coefficients of my model.
Does the arima function return these values, or is there a way to determine
them easily?
Thanks for your answer
Myriam
Hi Myriam,
I'll take a stab at it, though I can't offer as elegant a solution as
the more experienced R folks might deliver.
I believe that the ARIMA function provides both point estimates and
their standard errors for the coefficients. You can use these as you
might a mean and standard
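That suggestion can be sketched as a Wald-style table (the built-in lh series stands in for the poster's data; the normal approximation is an assumption):

```r
## Hedged sketch: approximate p-values for arima() coefficients.
fit <- arima(lh, order = c(1, 0, 0))     # lh: a built-in example series
se <- sqrt(diag(fit$var.coef))           # standard errors from the fit
z <- fit$coef / se                       # Wald z statistics
p <- 2 * pnorm(-abs(z))                  # two-sided normal-approximation p-values
cbind(coef = fit$coef, se = se, z = z, p = p)
```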
On 23/06/2009, at 11:38 AM, m.gha...@yahoo.fr wrote:
Hi,
I'm a beginner using R and I'm modeling a time series with ARIMA.
I'm looking for a way to determine the p-values of the coefficients
of my model.
Does ARIMA function return these values? or is there a way to
determine them
On Jun 6, 2009, at 4:13 AM, Emmanuel Charpentier wrote:
Dear David,
Le vendredi 05 juin 2009 à 16:18 -0400, David Winsemius a écrit :
On Jun 5, 2009, at 3:15 PM, Steven Matthew Anderson wrote:
Anyone know how to get p-values for the t-values from the
coefficients produced in vglm?
Attached