388419,(6:1.611149584,7:1.611149848):1.556474893,8:3.167624477):4.130280196,9:7.297904013):1.497063399,10:8.794967413):7.19682079,(11:2.539095678,12:2.539096008):13.45269085):12.42436025);
Dr. Ted Stankowich
Associate Professor
Department of Biological Sciences
California State University Long Beach
Thanks - a previous response resolved the issue and I'm off and running with
the analyses.
-Original Message-
From: David Winsemius [mailto:dwinsem...@comcast.net]
Sent: Thursday, June 4, 2020 5:02 PM
To: Ted Stankowich
Cc: Rui Barradas ; William Dunlap ;
r-help@r-project.org
Subject
This worked! Thank you!
-Original Message-
From: Rui Barradas [mailto:ruipbarra...@sapo.pt]
Sent: Thursday, June 4, 2020 2:49 PM
To: Ted Stankowich ; William Dunlap
Cc: r-help@r-project.org
Subject: Re: [R] na.omit not omitting rows
ne 4, 2020 12:39 PM
To: Ted Stankowich
Cc: r-help@r-project.org
Subject: Re: [R] na.omit not omitting rows
CAUTION: This email was sent from an external source. Use caution when
replying, opening links or attachments.
Does droplevels() help?
> d <- data.frame(size = factor(c("S",
"Alouatta_macconnelli_ATELIDAE_PRIMATES"
"Alouatta_nigerrima_ATELIDAE_PRIMATES" "Ateles_fusciceps_ATELIDAE_PRIMATES"
"Callicebus_baptista_PITHECIIDAE_PRIMATES" ...
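A minimal sketch of why droplevels() helps here (the data frame is made up, since the original d from the thread is truncated above): na.omit() removes the rows, but the factor keeps every level it ever had.

```r
# A factor remembers its full level set even after rows are removed.
d <- data.frame(size = factor(c("S", "M", "L", "L")),
                mass = c(1.2, NA, 3.4, 5.6))
d2 <- na.omit(d)                 # the "M" row is gone...
levels(d2$size)                  # ...but "M" is still a level
d2$size <- droplevels(d2$size)   # drop levels with no remaining rows
levels(d2$size)                  # now only "L" and "S"
```

Modelling functions that loop over factor levels (as in the phylogenetic analyses discussed here) then no longer see the empty level.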
Dr. Ted Stankowich
Associate Professor
Department of Biological Sciences
California State University Long Beach
Long Beach, CA 908
the test is not valid.
You say "I'm trying to use ks.test in order to compare two curve".
When I execute
plot(a)
plot(b)
on your data, I see (approximately) in each case a rise from a
medium value (~2 or ~3) to a higher value (~6 or ~10) followed
by a dec
, then Prob[X > x1] = 0.
Hence if x0 is the minimum value such that Prob[X <= x0] = 1,
then X "can reach" x0. But for any x1 > x0, Prob[x0 < X <= x1] = 0.
Therefore, since X cannot be greater than x0, X *cannot reach* x1!
Best wishes,
Ted.
On Tue, 2018-10-23 at 12:06 +0100, Hame
Sorry -- stupid typos in my definition below!
See at ===*** below.
On Tue, 2018-10-23 at 11:41 +0100, Ted Harding wrote:
Before the ticket finally enters the waste bin, I think it is
necessary to explicitly explain what is meant by the "domain"
of a random variable. This is not (though
,1], the domain of X is Q.
Then for x <= 0, Prob[X <= x] = 0; for 0 <= x <= 1, Prob[X <= x] = x;
for x >= 1, Prob[X <= x] = 1. These define the CDF. The set of possible
values of X is 1-dimensional, and is not the same as the domain of X,
which is 3-dimensional.
Hoping this
values into two halves, the median
is not available, hence NA.
Best wishes to all,
Ted.
On Wed, 2018-08-22 at 11:24 -0400, Marc Schwartz via R-help wrote:
> Hi,
>
> It might even be worthwhile to review this recent thread on R-Devel:
>
> https://stat.ethz.ch/pipermail/r-devel/2018-Ju
Pietro,
Please post this to r-help@r-project.org
not to r-help-ow...@r-project.org
which is a mailing list concerned with list management, and
does not deal with questions regarding the use of R.
Best wishes,
Ted.
On Sat, 2018-07-14 at 13:04 +, Pietro Fabbro via R-help wrote:
> I will
[1] NA
is not consistent with the above reasoning.
However, in my R version 2.14.0 (2011-10-31):
sum(NaN,NA)
[1] NA
sum(NA,NaN)
[1] NA
which **is** consistent! Hmmm...
Best wishes to all,
Ted.
On Wed, 2018-07-04 at 12:06 +0100, Barry Rowlingson wrote:
> I'm having deja-vu of a similar disc
On Tue, Jul 3, 2018 at 9:25 AM, J C Nash wrote:
>
> > . . . Now, to add to the controversy, how do you set a computer on fire?
> >
> > JN
Perhaps by exploring the context of this thread,
where new values strike a match with old values???
Ted
s not the same issue as (one of my prime hates) saying
"the data is stored in the dataframe ... ". "Data" is a
plural noun (singular "datum"), and I would insist on
"the data are stored ... ". The French use "une donnee" and
"les donnees";
nding of Numbers", covering the
functions ceiling(), floor(), trunc(), round(), signif().
Well worth reading!
Best wishes,
Ted.
On Thu, 2018-05-31 at 08:58 +0200, Martin Maechler wrote:
> >>>>> Ted Harding
> >>>>> on Thu, 31 May 2018 07:10:32
28 1.236 0.215 1.804 2.194
reward   0.402 1.101 0.288 1.208 0.890
feedback 0.283 0.662    NA    NA    NA
goal     0.237 0.474    NA    NA    NA
Best wishes to all,
Ted.
On Thu, 2018-05-31 at 15:30 +1000, Jim Lemon wrote:
> Hi Joshua,
> Because there are no values in column ddd less th
Apologies for disturbance! Just checking that I can
get through to r-help.
Ted.
__
R-help@r-project.org mailing list -- To UNSUBSCRIBE and more, see
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
t; "@".
Once they have the address then anything can happen!
Best wishes,
Ted (eagerly awaiting attempted seduction ... ).
On Wed, 2018-04-18 at 10:36 +, Fowler, Mark wrote:
> Seems it must be the R-list. A horde of ‘solicitation’ emails began arriving
> about 27 minutes after
= i+1 ; print(i)
}
# [1] 3
# [1] 4
# [1] 5
# [1] 6
# Error in while (x[i] <= 5) { : missing value where TRUE/FALSE needed
So everything is fine so long as i <= 5 (i.e. x[i] <= 5),
but then the loop sets i = 6, and then:
i
# [1] 6
x[i]
# [1] NA
x[i] <= 5
# [1] NA
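A self-contained reconstruction of the failing loop (the vector x and the starting index are assumptions, since the start of the original snippet is truncated):

```r
# Reconstruction: any 5-element vector reproduces the failure.
x <- 1:5
i <- 1
out <- tryCatch(
  while (x[i] <= 5) {   # the 6th test evaluates x[6], which is NA
    i <- i + 1
    print(i)
  },
  error = function(e) conditionMessage(e)
)
out   # "missing value where TRUE/FALSE needed"
i     # 6: the loop has run off the end of x
```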
Helpful?
Best
t nothing. Compare with:
x <- "testing"
k <- nchar(x)
for (i in 1:k) {
y <- substr(x, i, i) ### was: substr(x, i, 1)
print(y)
}
[1] "t"
[1] "e"
[1] "s"
[1] "t"
[1] "i"
[1] "n"
[1] "g"
Hoping
Suzen, thank you very much for your very useful information (I will try to
understand it)!
And my sincere gratitude to the moderator!
>"Suzen, Mehmet" < msu...@gmail.com >:
>I also suggest you Hadley's optimized package for interoperating xls
>files with R:
>https://github.com/tidyverse/readxl
"Data set flchain available in the survival package". How can I get it (from
R) as Excel file? Thanks!
[[alternative HTML version deleted]]
Many thanks, Jim!!!
>Jim Lemon < drjimle...@gmail.com >:
>Have a look at axis.mult in the plotrix package.
>Jim
>>iPad via R-help < r-help@r-project.org > wrote:
>> How to multiplying y-axis ticks value by 100 (without put the % symbol next
>> to the number) here:
>> plot (CI.overall,
> > Yes. R uses standard 32-bit double precision.
>
>
> Well, for large values of 32... such as 64.
Hmmm ... Peter, as one of your compatriots (guess who) once solemnly
said to me:
2 plus 2 is never equal to 5 -- not even for large values of 2.
Best wishes,
Ted.
that FALSE & NA = FALSE.
On the other hand, if with the "missing" interpretation of "NA"
we don't even know that it is a logical, then it might be fair
enough to say FALSE & NA = NA.
Ted.
[Additional thought]:
Testing to see what would happen if the NA were not logical,
s normally "almost" trivial feature can, for such a simple
calculation, lead to chaos or catastrophe (in the literal technical
sense).
For more detail, including an extension of the above, look at the
original posting in the R-help archives for Dec 22, 2013:
From: (Ted Harding) <ted.hard.
hen either have
a leading 0 or not. In that case, I think Jim's solution is safer!
Best wishes,
Ted.
On 07-Feb-2017 16:02:18 Bert Gunter wrote:
> No need for sprintf(). Simply:
>
>> paste0("DQ0",seq.int(60054,60060))
>
> [1] "DQ060054" "DQ060055" &
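For the general case where the numeric part may change width, the sprintf() route (presumably the "safer" solution referenced above) pads automatically:

```r
# %06d zero-pads every number to 6 digits, so the format keeps
# working even if the sequence crosses a width boundary.
ids <- sprintf("DQ%06d", seq.int(60054, 60060))
ids[1]                            # "DQ060054", same as paste0("DQ0", ...)
sprintf("DQ%06d", c(99, 100000))  # "DQ000099" "DQ100000"
```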
,freq=TRUE, col='red', breaks=0.5+(0:6))
or
hist(y,freq=TRUE, col='red', breaks=0.25+(0:12)/2)
Hoping this helps!
Best wishes,
Ted.
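The effect of offsetting the breaks by half a bin width, so that each integer value sits at the centre of its own bar, can be checked without plotting (the data vector is an assumption; the original definition is truncated below):

```r
# Assumed example data: the value k occurs 7-k times, k = 1..6.
y <- rep(1:6, times = 6:1)
# Breaks at 0.5, 1.5, ..., 6.5 give one bin per integer value.
h <- hist(y, breaks = 0.5 + (0:6), plot = FALSE)
h$counts  # 6 5 4 3 2 1: one bar per integer
h$mids    # 1 2 3 4 5 6: bars centred on the integers
```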
On 22-Dec-2016 16:36:34 William Dunlap via R-help wrote:
> Looking at the return value of hist will show you what is happening:
>
>> x <- rep(1:6
X[r] <= y, which
would then be O(log2(n)).
Perhaps not altogether straightforward to program, but straightforward
in concept!
Apologies for misunderstanding.
Ted.
On 05-Jun-2016 18:13:15 Bert Gunter wrote:
> Nope, Ted. I asked for a O(log(n)) solution, not an O(n) one.
>
> I will chec
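Base R's findInterval() already implements the O(log n) binary search being discussed: for sorted X it returns, for each y, the largest index r with X[r] <= y.

```r
X <- c(1, 3, 3, 7, 10)        # must be sorted non-decreasingly
findInterval(5, X)            # 3: X[3] = 3 is the last value <= 5
findInterval(0, X)            # 0: no element of X is <= 0
findInterval(c(3, 10, 99), X) # vectorised: 3 5 5
```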
s at Y[2]
Easy to make such a function!
Best wishes to all,
Ted.
On 05-Jun-2016 17:44:29 Neal H. Walfield wrote:
> On Sun, 05 Jun 2016 19:34:38 +0200,
> Bert Gunter wrote:
>> This help thread suggested a question to me:
>>
>> Is there a function in some package that efficientl
package installed
> seamlessly. It also loaded seamlessly.
>
> So I don't know why the computer gods are picking on you.
>
[***]
> Note that I am not working on a Mac, but rather running Linux (as do all
> civilized human beings! :-) )
Might this be yet another cand
Saludos José!
Could you please give a summary of the relevant parts of TPP
that might affect the use of R? I have looked up TPP on Wikipedia
without beginning to understand what it might imply for the use of R.
Best wishes,
Ted.
On 04-Feb-2016 14:43:29 José Bustos wrote:
> Hi everyone,
>
project.org
just as it always has been! So no change that *I* can perceive at the
R-help end.
Hoping this is useful,
Ted.
On 04-Feb-2016 16:33:29 S Ellison wrote:
> Apologies if I've missed a post, but have the default treatment of posts and
> reply-to changed on R-Help of late?
>
> I ask b
My feelings exactly! (And since quite some time ago).
Ted.
On 25-Jan-2016 12:23:16 Fowler, Mark wrote:
> I'm glad to see the issue of negative feedback addressed. I can especially
> relate to the 'cringe' feeling when reading some authoritarian backhand to a
> new user. We do see
gnored
(at least by R).
And then one has a variable which is a factor with 3 levels, all
of which can (as above) be meaningful, and "NA" would not be
ignored.
Hoping this helps to clarify! (And, Val, does the above somehow
correspond to your objectives).
Best wishes to a
p
Towards the bottom of this page is a section "Subscribing to R-help".
Follow the instructions in this section, and it should work!
Best wishes,
Ted.
-----
E-Mail: (Ted Harding) <ted.hard...@wlandres.net>
Date: 14-Oct-2015 Time: 19:
[R] Beta distribution approximate to Normal distribution
>
> Hi,
> I need to generate 1000 numbers from N(u, a^2), however I don't
> want to include 0 and negative values. How can I use beta distribution
> approximate to N(u, a^2) in R.
>
] 1.44
times the data used by R. So maybe there's a nasty lurking somewhere
in the spreadsheet? (Excel is notorious for planting things invisibly
in its spreadsheets which lead to messed-up results for no apparent
reason ... ).
Hoping this helps,
Ted
intentions are not being over-ridden by operator
precedence rules.
Some people object to code clutter from parentheses that could
be more simply replaced (e.g. var -4 instead of var(-4)),
but parentheses ensure that it's right and also make it clear
when one reads it.
Best wishes to all,
Ted
Sorry, a typo in my reply below. See at ###.
On 12-Jan-2015 11:12:43 Ted Harding wrote:
On 12-Jan-2015 10:32:41 Erik B Svensson wrote:
Hello
I've got a problem I don't know how to solve. I have got a dataset that
contains age intervals (age groups) of people and the number of persons
the = is
satisfied by also.
Hoping this helps!
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 12-Jan-2015 Time: 11:12:39
This message was sent by XFMail
-17
# [5] -1.734723e-17 3.816392e-17 9.367507e-17 2.046974e-16
# [9] 4.267420e-16 -4.614364e-16 -1.349615e-15 -3.125972e-15
Hoping this helps!
Ted.
On 11-Jan-2015 08:29:26 Troels Ring wrote:
R version 3.1.1 (2014-07-10) -- Sock it to Me
Copyright (C) 2014 The R Foundation for Statistical
I should have added an extra line to the code below, to complete
the picture. Here it is (see below line ##.
Ted.
On 11-Jan-2015 08:48:06 Ted Harding wrote:
Troels, this is due to the usual tiny difference between numbers
as computed by R and the numbers that you think they are!
tt - n1
## [1] 0 0 0 0 0 0 0
## But, of course:
1000*x0 - n1
## [1] 0.00e+00 0.00e+00 0.00e+00 0.00e+00
## [5] 0.00e+00 0.00e+00 -1.136868e-13
Or am I missing something else in what Mike Miller is seeking to do?
Ted.
On 01-Jan-2015 19:58:02 Mike Miller wrote:
I'd
... ).
So your data should look like:
 V1 V2      V3 Survival Event
ann 13  WTHomo        4     1
ben 20      NA        5     1
tom 40 Variant        6     1
Hoping this helps,
Ted.
On 19
,
Ted.
On 19-Dec-2014 11:17:27 aoife doherty wrote:
Many thanks, I appreciate the response.
When I convert the missing values to NA and run the cox model as described
in previous post, the cox model seems to remove all of the rows with a
missing value (as the number of rows n in the cox output
it is displaying).
And of course many linux users install 'acroread' (Acrobat
Reader), though some object!
Hoping this helps,
Ted.
On 09-Dec-2014 20:47:06 Richard M. Heiberger wrote:
the last one is wrong. That is the one for which I don't know the
right answer on linux.
'xdvi' displays dvi
4.102431).
Ted.
On 30-Sep-2014 18:20:39 Duncan Murdoch wrote:
On 30/09/2014 2:11 PM, Andre wrote:
Hi Duncan,
No, that's correct. Actually, I have data set below;
Then it seems Excel is worse than I would have expected. I confirmed
R's value in two other pieces of software,
OpenOffice
109  NA  father
109  NA  mother
That's the data. Now a little quiz question: Can you guess the
identity of the person with sample.ID = 01 ?
Best wishes to all,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 17-Aug
On 12-Aug-2014 22:22:13 Ted Harding wrote:
On 12-Aug-2014 21:41:52 Rolf Turner wrote:
On 13/08/14 07:57, Ron Michael wrote:
Hi,
I would need to get a clarification on a quite fundamental statistics
property, hope expeRts here would not mind if I post that here.
I learnt that variance
)/(dim(Data_Normalized)[1]-1)
and compare the result with
cor(Data)
And why? Look at
?sd
and note that:
Details:
Like 'var' this uses denominator n - 1.
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 12
was n, apparently not being aware
that R uses (n-1).
Just a few thoughts ...
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 12-Aug-2014 Time: 23:22:09
be able to solve the equations for a and b.
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 05-Aug-2014 Time: 11:46:52
: Number of elements to permute.
so, starting with
x <- c("A","B","C","D","E")
library(e1071)
P <- permutations(length(x))
then, for say the 27th of these 120 permutations of x,
x[P[27,]]
will return it.
Ted.
On 25-Jun-2014 20:38:45 Cade, Brian wrote:
It is called sample(,replace=F), where
-files-folders-show-hide.html
[NB: These are the results of a google search. I am no expert on
Windows myself ... ]
Hoping this helps,
Ted.
On 17-Jun-2014 12:48:54 Hiyoshi, Ayako wrote:
Dear Martyn and Professor Ripley,
Thank you so much for your help. I used Window's large file search
Maybe I am missing the point -- but what is wrong with line 3 of:
m = rbind(c(6,4,2), c(3,2,1))
v = c(3,2,1)
m %*% diag(1/v)
#      [,1] [,2] [,3]
# [1,]    2    2    2
# [2,]    1    1    1
Ted.
On 14-May-2014 15:03:36 Frede Aakmann Tøgersen wrote:
Have a look at ?sweep
Br. Frede
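The two approaches agree; sweep() divides each column of m by the corresponding element of v without forming a diagonal matrix first (a sketch using the matrices from the post above):

```r
m <- rbind(c(6, 4, 2), c(3, 2, 1))
v <- c(3, 2, 1)
a <- m %*% diag(1/v)      # builds a 3x3 diagonal matrix, then multiplies
b <- sweep(m, 2, v, "/")  # divides column j by v[j] directly
a                         # rows of 2s and 1s
b                         # same result, no intermediate matrix
```

For large matrices sweep() avoids allocating the n x n diagonal matrix, which is the practical point of the suggestion.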
0.000 0.081
(though in fact the times are somewhat variable in both cases,
so I'm not sure of the value of the relationship).
Best wishes,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 06-May-2014 Time: 19:41:13
line for this calculation
with 'bc' (with result) is:
$ bc -l
[...]
168988580159 * 36662978
6195624596620653502
quit
which agrees with my horse-drawn working.
Best wishes to all,
Ted.
On Sun, May 4, 2014 at 10:44 PM, ARTENTOR Diego Tentor
diegotento...@gmail.com wrote:
Trying algorithm
.
Best wishes,
Ted.
On 04-May-2014 17:10:00 Gabor Grothendieck wrote:
Checking this with the bc R package (https://code.google.com/p/r-bc/),
the Ryacas package (CRAN), the gmp package (CRAN) and the Windows 8.1
calculator all four give the same result:
library(bc)
bc(168988580159
9 10 11 12 13
14 15 16 17 18 19 20
21 22 23 24 25 26 27
28 29 30
and now just count down the 3rd column (as before).
Maybe this helps ...
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 07-Apr-2014 Time: 19:29:41
)
[...]
If, as he implies, the acc variable in data is a factor,
then lm() will not enjoy fitting an lm where the dependent
variables (response) is a factor!
Just a shot in the dark ...
Ted.
On 30-Mar-2014 18:46:27 Bert Gunter wrote:
1. Post in plain text, not HTML.
2. Read ?lm and note the data
are in no particular order as a whole; and in some datasets the
different separate parts of the boundary do not exactly match up
at the points where they should exactly join.
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 23
)}
}
rounddown <- function(x){
  if ((x - floor(x)) == 1/2) {
    floor(x)
  } else { round(x) }
}
Also have a look at the help page
?round
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 20-Mar-2014 Time: 20:04:20
)
Benno Pütz
Another formulation, which breaks it into steps and may therefore
be easier to adopt to similar but different cases, is
aa0 <- gsub("^[0-9]+ ", "", aa)
aa0
# [1] "(472)" "(445)" "(431)" "(431)" "(415)" "(405)" "(1)"
as.numeric(gsub("[()]", "", aa0))
# [1] 472 445 431 431 415 405   1
Ted
to others ...
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 20-Feb-2014 Time: 12:00:07
punion <- function(p){1 - prod(1-p)}
should do it!
Ted.
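punion() gives P(at least one of several independent events) as 1 minus the product of the complements; a quick check of the one-liner above:

```r
punion <- function(p) { 1 - prod(1 - p) }
punion(c(0.5, 0.5))       # 0.75 = 1 - 0.5 * 0.5
punion(c(0.1, 0.2, 0.3))  # 0.496 = 1 - 0.9 * 0.8 * 0.7
```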
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 18-Feb-2014 Time: 23:51:31
)
x0 <- 65.44945
plot(x + x0, dgamma(x, shape=2, scale=5.390275),
     main="Gamma", type='l')
will produce such a plot. However, I wonder if you have correctly
expressed the problem!
Ted.
This generates a distribution with origin equal zero, but I want the
origin to be x0
How can I handle
interesting is sitting in my disk, I can edit it if
I wish, I can make local copies, etc. etc. etc. etc. Anything which is
not interesting gets deleted (though I can always dig into R-help
archives if need be).
Best wishes,
Ted.
On 03-Feb-2014 21:36:21 Rolf Turner wrote:
For what it's worth, I
/posting
to R-help is
r-help-ow...@r-project.org
Ted.
thx
[[alternative HTML version deleted]]
Don't post in html, please.
Rui Barradas
position (down to the line number),
and various measures of memory use.
Duncan Murdoch
Also have a look at
?trace
which you may be able to use for what you want.
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 03-Jan-2014 Time: 00:14
its head!
All it has to do is multiply by 2 -- and it gets it cumulatively wrong!
R just doesn't add up ...
Season's Greetings to all -- and may your calculations always
be accurate -- to within machine precision ...
Ted.
-
E-Mail: (Ted Harding
there)!
But, before anyone takes my posting *too* seriously, let me say that
it was written tongue-in-cheek (or whatever the keyboard analogue of
that may be). I'm certainly not blaming R.
Have fun anyway!
Ted.
On 22-Dec-2013 17:35:56 Bert Gunter wrote:
Yes.
See also Feigenbaum's constant and chaos theory
: +X000U.UU000U0UU0UUU000U00U000UU0UUP+
U: +X0001.1100010110111000100100011011P+
The final result probably needs tidying up in accordance with
the needs of subsequent uses!
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding
,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 14-Dec-2013 Time: 10:54:00
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 17-Nov-2013 Time: 23:55:48
,]    1    3    3
# [7,]    2    2    2
# [9,]    2    3    3
#[10,]    3    3    3
There may be a simpler way!
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 07-Nov-2013 Time: 17:04:50
instead of 1.53?
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 16-Oct-2013 Time: 16:12:56
copies of R executing simulations, and your
original R command-line would be available.
Just a suggestion (which may have missed the essential point of
your query, but worth a try ... ).
I have no idea how to achieve a similar effect in Windows ...
Ted
consolation to R:
all.equal(0,sin(pi))
# [1] TRUE
So it depends on what you mean by different from. Computers
have their own fuzzy concept of this ... Babak has too fussy
a concept.
Ted.
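The point generalises: `==` tests exact bit-level equality, while all.equal() compares within a numeric tolerance (about 1.5e-8 by default):

```r
sin(pi)                        # about 1.22e-16, not exactly 0
sin(pi) == 0                   # FALSE: exact comparison
isTRUE(all.equal(0, sin(pi)))  # TRUE: equal within tolerance
```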
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 26
lurking somewhere in the depths of this
function which can be set so that the scales for all the variables
X1,X2,X3,X4,X5 appear both above and below columns 1,2,3,4,5;
and both to the left and to the right of rows 1,2,3,4,5?
With thanks,
Ted.
-
E
On 21-Aug-2013 19:08:29 David Winsemius wrote:
On Aug 21, 2013, at 10:30 AM, (Ted Harding) wrote:
Greetings all.
I suspect this question has already been asked. Apologies
for not having traced it ...
In the default pairs plot produced by the function pairs(),
the coordinate scales
of x and then apply
it to whatever multiple of the unit you happen to be interested in
as a change (along with the reasons for that interest).
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 12-Jun-2013 Time: 17:14:00
message to R-help,
one of the moderators will approve it (though quite possible not
immediately).
Hoping this helps,
Ted (one of the moderators)
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 01-Jun-2013 Time: 20:11:00
] [,4] [,5]
# [1,] 22 298 151
# [2,] 30 16 2329
# [3,] 10 31 243 17
# [4,] 114 25 32 18
# [5,] 265 12 33 19
# [6,] 27 34 20 136
# [7,] 35 28 147 21
which looks right!
Ted
by a method which was prompted by looking at the data
in the first place.
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 09-May-2013 Time: 09:35:05
:
V1 = c(A1,A2,A3,...)
V2 = c(B1,B2,B3,...)
Suggestions?
With thanks,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 25-Apr-2013 Time: 11:16:46
Thanks, Jorge, that seems to work beautifully!
(Now to try to understand why ... but that's for later).
Ted.
On 25-Apr-2013 10:21:29 Jorge I Velez wrote:
Dear Dr. Harding,
Try
sapply(L, "[", 1)
sapply(L, "[", 2)
HTH,
Jorge.-
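The "why" being deferred above: `"["` is itself a function, so sapply() applies it to each list element with the extra argument as the index (a small sketch with assumed data, since the original L is not shown):

```r
L <- list(c(10, 20), c(30, 40), c(50, 60))  # assumed example list
sapply(L, "[", 1)  # first element of each vector: 10 30 50
sapply(L, "[", 2)  # second element of each vector: 20 40 60
```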
On Thu, Apr 25, 2013 at 8:16 PM, Ted Harding ted.hard
...
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 18-Apr-2013 Time: 10:06:43
1545 2085 2255 2465
Does this help to explain it?
Ted.
Please help me to understand all this!
Thanks,
-Sergio.
to either a row or column matrix
to make the two arguments conformable. If both are vectors it
will return the inner product (as a matrix).
Usage:
x %*% y
[etc.]
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 11-Apr
*100 == 29
# [1] FALSE
we have:
round(0.29*100) == 29
# [1] TRUE
Hoping this helps,
Ted.
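Printing with more digits shows why the rounding is needed: 0.29 has no exact binary representation, so 0.29*100 lands just below 29.

```r
print(0.29 * 100, digits = 17)  # slightly below 29
0.29 * 100 == 29                # FALSE: exact comparison fails
round(0.29 * 100) == 29         # TRUE after rounding
```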
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 09-Apr-2013 Time: 17:56:33
empirically).
With thanks,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 01-Apr-2013 Time: 21:37:17
On 01-Apr-2013 21:26:07 Robert Baer wrote:
On 4/1/2013 4:08 PM, Peter Ehlers wrote:
On 2013-04-01 13:37, Ted Harding wrote:
Greetings All.
This is a somewhat generic query (I'm really asking on behalf
of a friend who uses R on Windows, whereas I'm on Linux, but
the same phenomenon appears
percent confidence interval:
# 0.4325543 0.5591068
# sample estimates:
# p
# 0.4957627
So it doesn't do the requested continuity correction in [A] because
there is no need to. But in [B1] it makes a difference (compare
with [B2]), so it does it.
Hoping this helps,
Ted
function which could offer similar viewing capability without
the risk of data change?
With thanks,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 26-Mar-2013 Time: 10:08:58
Sorry, I meant data.entry(), not edit.data() (the latter due
to mental cross-wiring with edit.data.frame()).
I think that Nello Blaser's suggestion of View may be what I
seek (when I can persuade it to find the font it seeks ... )!
With thanks, Barry.
Ted.
On 26-Mar-2013 10:20:59 Barry
Thanks! ?View does indeed state "The object is then viewed
in a spreadsheet-like data viewer, a read-only version of
'data.entry'", which is what I was looking for!
Ted.
On 26-Mar-2013 10:23:59 Blaser Nello wrote:
Try ?View()
-Original Message-
From: r-help-boun...@r-project.org
normal distribution with 10 dimensions, and, for a single vector
(X1,...,X10) drawn from this distribution, (X(1), ..., X(10))
is a vector consisting of these same values (X1,...,X10), but
in increasing order.
Is that what you mean?
Hoping this helps,
Ted
at least a second moment, hence excluding, for
example, the Cauchy distribution).
That's about as far as one can go with your question!
Hoping it helps, however.
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 03-Mar-2013 Time: 17:12:50
^0.5
# [1] 1.490116e-08
.Machine$double.eps
# [1] 2.220446e-16
(0.1 + 0.05) - 0.15
# [1] 2.775558e-17
Hoping this helps,
Ted.
-
E-Mail: (Ted Harding) ted.hard...@wlandres.net
Date: 30-Jan-2013 Time: 23:22:53