les wrote:
>
>
> > On Jun 6, 2019, at 2:04 PM, Richard O'Keefe wrote:
> >
> > How can expanding tildes anywhere but the beginning of a file name NOT be
> > considered a bug?
> >
> >
>
> I think that that IS what libreadline is doing if one allows a whitespace
You can find the names of the columns of a dataframe using
colnames(my.df)
A dataframe is a value just as much as a number is, and as such,
doesn't _have_ a name. However, when you call a function in R,
the arguments are not evaluated, and their forms can be recovered,
just as "plot" does. In
PS: lm records a copy of the call in its result, but has no other use
for any name the data frame may have had.
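A minimal sketch of how a function can recover the form of its argument (roughly what plot does via substitute and deparse; the name show_form is made up for illustration):

```r
# Recover the unevaluated form of an argument, as plot() does for axis labels
show_form <- function(x) deparse(substitute(x))

my.df <- data.frame(a = 1:3, b = 4:6)
show_form(my.df)   # yields the caller's expression, "my.df"
```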
On Sun, 2 Jun 2019 at 14:45, Richard O'Keefe wrote:
> You can find the names of the columns of a dataframe using
> colnames(my.df)
> A dataframe is a value just as much as
How can expanding tildes anywhere but the beginning of a file name NOT be
considered a bug?
On Thu, 6 Jun 2019 at 23:04, Ivan Krylov wrote:
> On Wed, 5 Jun 2019 18:07:15 +0200
> Frank Schwidom wrote:
>
> > +> path.expand("a ~ b")
> > [1] "a /home/user b"
>
> > How can I switch off any file
Nobody else has asked the obvious question: why are the data squashed
together
like that in the first place? why not modify the process that generates the
data so that it does not do that? Jamming things together like that is not
common practice with CSV files, so what does the CSV file look
You have: a square logical matrix M representing a binary relation.
You want: a similar matrix representing the least (fewest true cases)
transitive relation extending what M represents.
It sounds as though you are looking for the TRANSITIVE CLOSURE.
You will find that in the 'relations' package.
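If you would rather not pull in a package, Warshall's algorithm is a few lines of matrix work (a sketch, not the 'relations' implementation):

```r
# Transitive closure of a binary relation given as a square logical matrix.
# Warshall's algorithm: for each k, add every path that goes through k.
transitive.closure <- function(M) {
  for (k in seq_len(nrow(M)))
    M <- M | outer(M[, k], M[k, ], `&`)
  M
}

M <- matrix(FALSE, 3, 3)
M[1, 2] <- M[2, 3] <- TRUE     # 1 -> 2 -> 3
C <- transitive.closure(M)     # now also 1 -> 3
```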
This has nothing to do with your problem, but given the heavy use of "="
to bind keyword parameters in R, I find the use of "=" for assignment as
well confusing. It makes code harder to read than it needs to be.
The historic " <- " assignment makes the distinction obvious.
On Wed, 26 Jun 2019
How about just
df$time[match(paste(df$a, df$b, df$c), c(
"co mb o1",
..
"co mb oN"))]
On Fri, 14 Jun 2019 at 08:22, Tina Chatterjee
wrote:
> Hello everyone!
> I have the following dataframe(df).
>
> a<-c("a1","a2","a2","a1","a1","a1")
> b<-c("b1","b1","b1","b1","b1","b2")
>
You did not say what your doubt about R was.
PL2.rasch has some class.
> class(PL2.rasch)
[1] 'Grofnigtz' # or whatever
The summary function is really just a dispatcher.
> summary.Grofnigtz
... a listing comes out here ...
Or you could look in the source code of whatever package you are using.
ination values directly, I just
> want the simple formulas to calculate item difficulty and item
> discrimination.
>
> Also how they have calculated theta(ability) and scores at the backend of
> the code.
>
I have read your message four times in the last couple of days,
but I still have very little idea what you want.
Let's try some things I've gleaned.
You have a matrix with 9 rows and 2 columns, and there is a 2
somewhere in column 1.
> m <- matrix(1:18, nrow=9, ncol=2)
> m[c(4,7,8),1] <- 2
> m
Why not set all the new columns to dummy values to get the order you
want and then set them to their final values in the order that works
for that?
On Thu, 4 Jul 2019 at 00:12, Kevin Thorpe wrote:
>
> > On Jul 3, 2019, at 3:15 AM, Sebastien Bihorel <
> sebastien.biho...@cognigencorp.com>
Expectation: ifelse will use the same "repeat vectors to match the longest"
rule that other vectorised functions do. So
a <- 1:5
b <- c(2,3)
ifelse(a < 3, 1, b)
=> ifelse(T T F F F <<5>>, 1 <<1>>, 2 3 <<2>>)
=> ifelse(T T F F F <<5>>, 1 1 1 1 1 <<5>>, 2 3 2 3 2 <<5>>)
=> 1 1 2 3 2
and that is
The answer here is that in "ifelse(a < 3, ..)" you ALWAYS expect "a" to
be
a vector because there would be no point in using ifelse if it weren't.
If you believe that "a" is or ought to be a single number, you write
x <- if (a < 3) 1 else 2
The whole point of ifelse is to vectorise.
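The recycling expectation above is easy to check at the prompt:

```r
a <- 1:5
b <- c(2, 3)
r <- ifelse(a < 3, 1, b)   # b is recycled to length 5: 2 3 2 3 2
r                          # 1 1 2 3 2
```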
On
to be a quality-of-implementation bug.
On Thu, 11 Jul 2019 at 04:14, Dénes Tóth wrote:
>
>
> On 7/10/19 5:54 PM, Richard O'Keefe wrote:
> > Expectation: ifelse will use the same "repeat vectors to match the
> longest"
> > rule that other vectorised f
The obvious question is "what do you mean, FORMATTED AS a matrix?"
Once you have read an object into R, you have no information about how it
was formatted.
Another question is "what do you mean, MATRIX"?
Do you mean the kind of R object specifically recognised by is.matrix,
or do you mean
Does this help?
https://www.r-bloggers.com/computing-sample-size-for-variance-estimation/
On Wed, 3 Jul 2019 at 10:23, Thomas Subia via R-help
wrote:
> Colleagues,
> Can anyone suggest a package or code which might help me calculate the
> minimum sample size required to estimate the population
(1) m[,1] is the first column of matrix (or dataframe) m.
(2) The first row of matrix or dataframe m is m[1,]
(3) To remove the first row of matrix or dataframe m,
do m <- m[-1,]
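All three rules in one runnable check:

```r
m <- matrix(1:6, nrow = 3, ncol = 2)
m[, 1]        # first column: 1 2 3
m[1, ]        # first row: 1 4
m <- m[-1, ]  # remove the first row; 2 rows remain
```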
On Wed, 3 Jul 2019 at 08:59, Nicola Cecchino wrote:
> Hello,
>
> I am simply trying to remove the [,1] row from
ix];
r[!ix]<-y[!ix]; r})
>user system elapsed
> 0.082 0.053 0.135
>
> -pd
>
>
> > On 12 Jul 2019, at 15:02 , Richard O'Keefe wrote:
> >
> > "ifelse is very slow"? Benchmark time.
> >> x <- runif(100)
> >> y <- runif(
roblem.
>
> [1] https://cran.r-project.org/bin/linux/ubuntu/README.html
>
> On July 14, 2019 4:55:25 PM CDT, Richard O'Keefe wrote:
> >Four-core AMD E2-7110 running Ubuntu 18.04 LTS.
> >The R version is the latest in the repository:
> >r-base/bionic,bionic,now
Are you aware of https://www.tidytextmining.com/
On Mon, 1 Jul 2019 at 16:57, Mehdi Dadkhah wrote:
> Thank you!!
> Have a nice day!
> With best regards,
>
> On Mon, Jul 1, 2019 at 6:57 AM Abby Spurdle wrote:
>
> >
> > > In parts of these reports, people may state their
> > > reasons for do not
> I am looking to understand why the keyword function can take a logical
argument
> (eg: x<4) and use that later inside the function's definition for logical
evaluations
The "function" keyword does not take a logical argument.
Let me show you some parallels:
f <- function (x, y) {x+y} #
Here's a tip for the original poster.
> ?numeric
and then follow the link it suggests
> ?double
which says amongst other things
All R platforms are required to work with values conforming to the
IEC 60559 (also known as IEEE 754) standard. This basically works
with a precision of
A little note on quoting in regular expressions.
I find writing \\. when I want a quoted . somewhat confusing,
so I would use the pattern "_w_.*[.]csv$".
Better still, if you want to match file names,
there is a function glob2rx that converts shell ("glob")
patterns into regular expressions.
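For example (the glob here is an assumption about the file names in question):

```r
p <- glob2rx("*_w_*.csv")  # translate a shell glob into a regular expression
grepl(p, "run_w_3.csv")    # TRUE
grepl(p, "run_w_3.txt")    # FALSE
```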
One of my grandfathers was from Croatia. Guess what the word "slave" is
derived
from? That's right, Slavs. This goes back to the 9th century. And then
of course
my grandfather's people were enslaved by the Ottoman empire, which was only
defeated
a little over a hundred years ago. My other
Splitting a data set into a part to be used for fitting a model
(the training set) and a part to be used for evaluating the
quality of the model on new cases (the test set) has been good
practice for a long time. If the architecture of the model is
to be learned as well as parameters, a three-way
2(N-1)/N = 2 - 2/N.
So one way to get exactly that mean is to make all the numbers
2 except for two of them which are 1.
N < 2 : can't be done.
N = 2 : only [1,1] does the job.
N = 3 : the sum of the three numbers must be 4, so none of them
can be 3, so [1,1,2] [1,2,1] [2,1,1] are the
I don't know why I thought you wanted a *random* sequence.
The 'rep' function can do more than you realise.
generate_k <- function (N, n3) rep(1:3, c(n3+2, N-2-n3*2, n3))
On Thu, 1 Aug 2019 at 22:48, Lorenzo Isella
wrote:
> Yes, you are right (and yours is one of the possible cases).
> I
No, read.table() *isn't* about printing.
If you have data stored as text in a tabular format,
you use read.table, read.csv, read.csv2, read.delim, or read.delim2
to read it. This returns a new data frame. Example:
> x <- read.table(textConnection(c(
+ "A B",
+ "1 2",
+ "3 4")), TRUE)
> x
A B
1
If "Fardadj was expecting R to recognise the comma as the decimal"
then it might be worth mentioning the 'dec = "."' argument of
read.table and its friends.
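For instance, to read comma-as-decimal data (hypothetical two-line input):

```r
x <- read.table(text = "a;b\n1,5;2,25",
                header = TRUE, sep = ";", dec = ",")
x$a + x$b   # read as proper numbers, not strings
```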
On Sun, 21 Jul 2019 at 12:48, Jeff Newmiller
wrote:
> It is possible that part of the original problem was that Fardadj was
> expecting R
I'm having a little trouble believing what I'm seeing.
To the best of my knowledge,
sample.info <- data.frame( + spl=paste('A', 1:8, sep=''), +
stat=rep(c('cancer' , 'healthy'), each=4))
is not legal R syntax, and R 3.6.1 agrees with me
Error: unexpected '=' in "x <- data.frame(+ spl="
Then I see
If calling a separate chunk of GPL code forced your code to be GPL,
nobody would be able to write proprietary code on Linux. For example,
Intel would not be able to sell their C/C++ compiler without offering
source code, which they don't.
When you are coupling two programs in this way, one
Ah, *now* we're getting somewhere. There is something that *can* be
done that's genuinely helpful.
From the R(1) manual page:
-q, --quiet
Don't print startup message
--silent
Same as --quiet
--slave
Make R run as quietly as
human being forced to
> perform work under duress and considered nothing more than a machine, say a
> dishwasher or a tractor. And in some regions, this echoes on and is
> offensive and hurtful to some.
>
> A new user, wanting to reduce output from R, would probably reach for “-q”
> or “—qui
Actually, R's scope rules are seriously weird.
I set out to write an R compiler, wow, >20 years ago.
Figured out how to handle optional and keyword parameters efficiently,
figured out a lot of other things, but choked on the scope rules.
Consider
> x <- 1
> f <- function () {
+ a <- x
+ x <-
t doesn't work with "with", and then of course there are "active
bindings" nowadays,
see ?bindenv.
On Fri, 27 Sep 2019 at 02:59, Duncan Murdoch wrote:
>
> On 26/09/2019 9:44 a.m., Richard O'Keefe wrote:
> > Actually, R's scope rules are seriously weird.
> > I se
Not being a jerk is a good thing.
Unthinking political correctness is not the same thing at all.
The point has already been made that the relationship between
a "master" process or cylinder and a "slave" one is intrinsically
a dominance relation where the "master" tells the "slave" what to
do. No
Is this what you're after?
> df <- data.frame(
+ Date = as.Date(c("2018-03-29", "2018-03-29", "2018-03-29",
+ "2018-03-30", "2018-03-30", "2018- ..." ...
[TRUNCATED]
> df$count <- cumsum(c(TRUE, diff(df$Date) > 0))
> df
Date count
1 2018-03-29 1
2
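Self-contained, the cumsum/diff trick above looks like this (dates invented along the lines of the truncated example):

```r
d <- as.Date(c("2018-03-29", "2018-03-29", "2018-03-30", "2018-04-02"))
count <- cumsum(c(TRUE, diff(d) > 0))  # bumps by one at each new date
count                                  # 1 1 2 3
```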
Since this has already been answered, I'll just mention one point that was
not addressed.
> d=c(1,2,3,"-","dnr","post",10)
This is rather odd.
> str(d)
chr [1:7] "1" "2" "3" "-" "dnr" "post" "10"
You can create a vector of logical values, or a vector of numbers, or a
vector of strings,
but if
This looks vaguely like something from exercism.
Let's approach it logically.
xa xb xc ya yb zc
We see two patterns here:
A: x x x y y z
B: a b c a b c
If only we had these two character vectors, we could use
paste(A, B, sep = "")
to get the desired result. So now we have reduced the
problem
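With those two vectors in hand (using the six-element example above):

```r
A <- c("x", "x", "x", "y", "y", "z")
B <- c("a", "b", "c", "a", "b", "c")
paste(A, B, sep = "")  # "xa" "xb" "xc" "ya" "yb" "zc"
```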
This particular task is not a problem about R.
It is a problem in combinatorics.
Start with the obvious brute force algorithm
(1) Let S be the union of all the sets
(2) For each K in 0 .. |S|
(3) Enumerate all |S| choose K subsets C of S
(4) If C satisfies the condition, report it and stop
As a C implementation of merge sort, that is the very reverse of impressive.
I would not translate *that* code into anything.
There is a fundamental difference between arrays in C and arrays in R,
and it is the same as the difference between Python and R.
You are MUCH better to start from
The obvious question is "why?"
If you just want to sort stuff, ?sort and ?order tell you about the
sorting methods available in R.
If you want to translate this specific algorithm into R for some reason,
(a) if you don't know enough about array processing in R to do this yourself,
how are you
Pearson's original paper uses both \chi and \chi^2 frequently but
never spells out how to pronounce the latter.
Try another question: when talking about \sigma^2 do you say
"sigma-square" (which sounds rather odd) or "sigma-squared" (which
sounds more natural)? If you say sigma-square, say
Can we do this very simply?
My understanding is that you have a column where all the elements are
zero except for perhaps a single one.
Consider an example 0 0 1 0 0 where you want -2 -1 0 1 2. This is 1 2
3 4 5 - 3.
> v <- c(0,0,1,0,0)
> w <- which(v == 1)
> a <- seq(along=v) - if (length(w) ==
I think the problem may lie in your understanding of what "==" does with NA
and/or what "[]" does with NA.
> x <- c(NA, "Yes")
> x == "Yes"
[1] NA TRUE
Since you say you DON'T want the rows with "Yes", you just want
x[is.na(x)]
or in your case
t11 <-
https://www.linkedin.com/pulse/r-shiny-application-supply-chain-network-design-shrinidhee-shevade
On Thu, 20 Feb 2020 at 16:15, Jeff Reichman wrote:
>
> R-Help Forum
>
>
>
> Anyone ever perform supply chain optimization, operations optimization or
> sales optimization in R? If so what packages
Others have already commented on the "no homework" policy.
I'd like to make a different point.
When I was doing my MSc many years ago, a friend of mine was
really struggling with statistics. He complained to me that when
studying the textbooks and looking at examples, he could never
figure out
In your example, these keys occur
1 countries
1 maximum_im
1 minimum_im
61 name
66 pk -- many of these pertain to countries, some do not
1 taxonomy_gem
It is not clear what you mean by 'the name of the countries'.
There appear to be 61 countries each with two
> sqrt(c(1,-2,3))
[1] 1.00 NaN 1.732051
Warning message:
In sqrt(c(1, -2, 3)) : NaNs produced
You might want to put
if (any(VS < 0)) stop("some VS are negative")
just after the definition of VS.
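The behaviour being guarded against, and the guard itself (VS here is hypothetical data):

```r
VS <- c(1, 4, 9)                               # hypothetical data
if (any(VS < 0)) stop("some VS are negative")  # fail early, loudly
sqrt(VS)                                       # 1 2 3, no NaN warnings
```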
On Tue, 21 Jan 2020 at 02:01, Atul Saini wrote:
>
> Hello R,
> I am attaching
I think you had better start by defining what you mean by "similar".
Examples are good, but not enough.
On Fri, 27 Dec 2019 at 06:54, Thomas Subia wrote:
>
> Colleagues,
>
> I have two locations where my data resides.
> One folder is for data taken under treatment A
> One folder is for data
The specific problem you are trying to solve is so constrained that
you do not need a
general purpose method.
You start with a string that contains characters drawn from a *subset*
of ASCII with
at most 64 elements. Accordingly, all you need is a table mapping characters to
6-character strings.
I'm very confused by the phrase "string alienation".
You mention two problems:
(1) remove " from a string
gsub('"', '', vector.of.strings)
will do that (plain sub would remove only the first " in each string). See
?grep
for details.
(2) split a string at occurrences of /
strsplit(vector.of.strings, "/")
will do that.
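Both in one runnable sketch (using gsub, since every quote has to go):

```r
v <- c('"a"/"b"', '"c"')
clean <- gsub('"', '', v)   # all double quotes removed
strsplit(clean, "/")        # list(c("a", "b"), "c")
```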
You might want to start by re-expressing the "amounts"
variables to
total amount * (relative fish 1, relative fish 2, invertebrates)
and then using the Isometric Log-ratio transformation to
convert the compositional part to orthonormal coordinates.
You now say that you "want to get rid of" the rows
where V13 is 0.
d1 <- d[d$V13 != 0,]
returns you a new data frame d1 containing all the
rows of d where V13 is not 0.
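A tiny demonstration (the other column name is invented):

```r
d <- data.frame(id = 1:4, V13 = c(0, 5, 7, 0))
d1 <- d[d$V13 != 0, ]   # keep only rows where V13 is non-zero
d1$V13                  # 5 7
```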
On Tue, 21 Apr 2020 at 15:53, Helen Sawaya wrote:
> Thank you all for your input.
>
> This is an example of one data file (I
Your ifelse expression looks fine. What goes wrong with it?
On Tue, 5 May 2020 at 05:16, Ana Marija wrote:
>
> Hello,
>
> I have a data frame like this:
>
> > head(b)
>FID IID FLASER PLASER
> 1: fam1000 G1000 1 1
> 2: fam1001 G1001 1 1
> 3: fam1003 G1003 1
I cannot tell from your message whether
(a) your project is a statistical consulting one and you want to
use R to do the calculations and graphics for it, but will not
give your client R itself, or
(b) your project will deliver software to the client, and that
software will include a
t. Given scale of the coordinate numbers, would that be a national
> grid system employed in New Zealand?
>
> J. W. Dougherty
>
> On Mon, 11 May 2020 13:56:49 +1200
> "Richard O'Keefe" wrote:
>
> > Hey, I know that volcano! It's walking distance from the Interm
A closure is a function plus an environment.
That's it. This is a sixty-year-old thing in programming languages.
A closure is a dynamic value representing an instance of a function in
a particular context which it can refer to. When a program in Algol
60 passed a procedure P to a procedure Q,
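In R terms, a minimal closure (the classic counter; names made up):

```r
make.counter <- function() {
  n <- 0             # lives in make.counter's environment
  function() {       # this closure captures that environment
    n <<- n + 1
    n
  }
}
count <- make.counter()
count()  # 1
count()  # 2
```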
That example is NOT an example of "messing around with environments."
On Thu, 7 May 2020 at 15:36, Mark Leeds wrote:
>
> Hi Abby: I agree with you because below is a perfect example of where not
> understanding environments causes a somewhat
> mysterious problem. Chuck Berry explains it in a
By "mess around with" I mean environment(f) <- ...
That is for _very_ advanced players.
Never assume that someone meant something stupid, make them prove it.
On Thu, 7 May 2020 at 15:28, Abby Spurdle wrote:
>
> > If you want to mess around with the environment of a
> > function, then you need
and it was not my intention to talk about the various
>> solutions. Only that it can be helpful if one understands the notion of
>> environments in R.
>>
>>
>> On Thu, May 7, 2020 at 12:42 AM Richard O'Keefe wrote:
>
Hey, I know that volcano! It's walking distance from the Intermediate
school I attended.
To you it's a plot; to me it's a place.
So I offer you four scenarios.
1. You think of it as a place you know and have been.
In that case the "right" orientation is the one that best matches
what you are
What do you mean "ANSI"?
Do you mean ASCII? In that case there is nothing to be done.
Do you mean some member of the ISO 8859 family of 8-bit character sets?
Do you mean some Microsoft-specific code page, such as CP-1252?
(Microsoft CP-437 and CP-1252 are often called "ANSI", but if they have any connection
What do you *mean* "when you want to use the kernels".
WHICH kernels?
Use to do WHAT?
In your browser, visit cran.r-project.org
then select "Packages" from the list on the left.
Then pick the alphabetic list.
Now search for 'kernel'.
You will find dozens of matches.
On Wed, 14 Oct 2020 at 05:15,
I do not understand your question.
Are you talking about "functional data analysis", the statistical
analysis of data where some of the covariates are (samples from)
continuous functions? There are books and tutorials about doing
that in R.
Are you talking about "functional data structures", as
There are & and | operators in the R language.
There is an | operator in regular expressions.
There is NOT any & operator in regular expressions.
grep("ConfoMap", mydata, value=TRUE)
looks for elements of mydata containing the literal
string 'ConfoMap'.
> foo <- c("a","b","cab","back")
>
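To require BOTH of two patterns, combine two greps with R's own set operations:

```r
foo <- c("a", "b", "cab", "back")
grep("a", foo, value = TRUE)                 # "a" "cab" "back"
both <- intersect(grep("a", foo, value = TRUE),
                  grep("b", foo, value = TRUE))
both                                         # "cab" "back"
```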
In fairness to Raija Hallam, I've met masters and doctoral students whose
supervisors hadn't a clue. (Heck, I once worked at a University where the
staff evaluation process involved taking the means of 5-point ordinal
variables...) Two of these cases stick in my mind: one where the student
had
(1) Using 'C == TRUE' (when you know C is logical)
is equivalent to just plain C, only obscure.
Similarly, 'C == FALSE' is more confusing than !C.
(2) Consider B[C]. The rows of C have 2, 1, 1, 2, 1 TRUE
entries, so the result here *cannot* be a rectangular array.
And whatever
the real pleasure comes from things you weren't looking for but recognise
as just what you needed.
On Sat, 23 May 2020 at 12:34, Jeff Newmiller
wrote:
> You are bound to be disappointed if you invert the purpose of the list.
> This is marketing... think of it as a sale... stores rarely put
This can be done very simply because vectors in R can have
named elements, and can be indexed by strings.
> stage <- c("1" = 1, "1a" = 1.3, "1b" = 1.5, "1c" = 1.7,
+"2" = 2, "2a" = 2.3, "2b" = 2.5, "2c" = 2.7,
+"3" = 3, "3a" = 3.3, "3b" = 3.5, "3c" = 3.7)
> testdata <-
that
it uses a fundamental operation of the language very directly.
Anyone using R would do well to *master* what indexing can do
for you.
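The named-vector lookup in miniature (stage codes taken from the fragment above):

```r
stage <- c("1" = 1, "1a" = 1.3, "1b" = 1.5, "2" = 2, "2a" = 2.3)
x <- c("2a", "1", "1b")
stage[x]          # 2.3 1.0 1.5, looked up by name
unname(stage[x])  # the same values with the names stripped
```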
On Sat, 11 Jul 2020 at 17:16, Eric Berger wrote:
> xn <- as.numeric(sub("c",".7",sub("b",".5",sub("
Just as a matter of curiosity, what are some of the programs
that have already been vetted, what methods were used, and
how long did the vetting take?
As the R guidance points out, R was not designed for
creating or updating medical records, so it should be
treated the same way as say LibreOffice
tave?
>
> Bert Gunter
>
> "The trouble with having an open mind is that people keep coming along and
> sticking things into it."
> -- Opus (aka Berkeley Breathed in his "Bloom County" comic strip )
>
>
> On Thu, Jun 18, 2020 at 6:31 PM Richard O'Keefe wrot
I would add to this that in an important data set I was working with,
most of the dates were dd/mm/yy but some of them were mm/dd/yy and
that led to the realisation that I couldn't *tell* for about 40% of
the dates which they were. If they were all one or the other, no
worries, but when you have
> load("WVS.RData", verbose=TRUE)
Loading objects:
final.ord
> str(final.ord)
num [1:82992, 1:74] 2 2 1 2 2 1 2 1 2 2 ...
- attr(*, "dimnames")=List of 2
..$ : NULL
..$ : chr [1:74] "v12" "v13" "v14" "v15" ...
When using load() with unfamiliar data,
verbose=TRUE is your friend.
Note that
The first response has to be "how did the spaces get there
in the first place?" Can you fix the process that creates
the data? If the process sometimes generates one extra
space, are you sure it never generates two?
But let's treat this purely as a regular expression
problem, where if there is
The spaces may not have been VISIBLE in the Word document,
but that does not mean that there wasn't anything THERE.
- What happens if you open the document in Word and
save it as plain text?
- What happens if you open the document in Word and
save it as RTF, then read that using read_rtf?
-
To be honest, I would do this one of two ways.
(1) Use ?decimate from library(signal),
decimating by a factor of three.
(2) Convert the variable to an (n/3)*3 matrix using
as.matrix then use rowMeans or apply.
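The matrix route in miniature, decimating by a factor of three via rowMeans:

```r
x <- 1:12
m <- matrix(x, ncol = 3, byrow = TRUE)  # each row is one group of 3
rowMeans(m)                             # 2 5 8 11
```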
On Thu, 3 Dec 2020 at 06:55, Stefano Sofia
wrote:
> Dear list users,
> I
More accurately, in x[i] where x and i are simple vectors,
i may be a mix of positive integers and zeros
where the zeros contribute nothing to the result
or it may be a MIX of negative integers and zeros
where the zeros contribute nothing to the result
and -k means "do not include element
I'm running Ubuntu 18.04 LTS and
r-base/bionic-cran35,now 3.6.3-1bionic all [installed]
3.6.3 is also the latest version in the repository.
On Fri, 30 Oct 2020 at 12:21, Marc Schwartz via R-help
wrote:
>
> > On Oct 29, 2020, at 6:35 PM, H wrote:
> >
> > On 10/29/2020 01:49 PM, Marc Schwartz
The source being a URL is not important.
The important things are
- what the structure of the JSON data is
- what the MEANING of the JSON data is
- what that meaning says about what SHOULD appear in the data
from in these cases.
Arguably this isn't even an R question at all. It's a question
"Body Mass Index" is a rather bizarre thing:
body.mass.in.kg / height.in.m^2
I have never been able to find any biological
or physical meaning for this. Yet clinicians
are solemnly advised to measure the weight to
the nearest 0.1kg and the height to the
nearest 0.1cm.
How do you propose to
I wrote:
> > By the time you get the data from the USGS, you are already far past the
> > point
> > where what the instruments can write is important.
Rich Shepard replied:
> The data are important because they show what's happened in that period of
> record. Don't physicians take a medical
It is a general "feature" of TeX that documents with tables of
contents, indices,
bibliographies, and so on, have to be "iterated to convergence". A couple of
PhD theses came out of Stanford; the problem is in that which page one thing
goes on depends on where other things went, which depends on
Why would you need a package for this?
> samples.per.day <- 12*24
That's 12 5-minute intervals per hour and 24 hours per day.
Generate some fake data.
> x <- rnorm(samples.per.day * 365)
> length(x)
[1] 105120
Reshape the fake data into a matrix where each row represents one
24-hour period.
>
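Carrying the sketch through with the same fake data:

```r
samples.per.day <- 12 * 24                     # 12 5-minute samples x 24 hours
set.seed(42)                                   # fake data, reproducibly
x <- rnorm(samples.per.day * 365)
m <- matrix(x, ncol = samples.per.day, byrow = TRUE)
daily.means <- rowMeans(m)                     # one summary per day
length(daily.means)                            # 365
```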
the point that you probably should not
be doing any of this.
On Tue, 31 Aug 2021 at 00:42, Rich Shepard wrote:
>
> On Mon, 30 Aug 2021, Richard O'Keefe wrote:
>
> > Why would you need a package for this?
> >> samples.per.day <- 12*24
> >
> > That's 12 5-minute
y to
analyse river flow data *on its own*.
On Mon, 30 Aug 2021 at 14:47, Jeff Newmiller wrote:
>
> IMO assuming periodicity is a bad practice for this. Missing timestamps
> happen too, and there is no reason to build a broken analysis process.
>
> On August 29, 2021 7:09:0
, 31 Aug 2021 at 11:34, Rich Shepard wrote:
>
> On Tue, 31 Aug 2021, Richard O'Keefe wrote:
>
> > I made up fake data in order to avoid showing untested code. It's not part
> > of the process I was recommending. I expect data recorded every N minutes
> > to use NA when
Your question is ambiguous.
One reading is
n <- length(table$Data)
m <- n %/% 3
s <- sample(1:n, n)
X <- table$Data[s[1:m]]
Y <- table$Data[s[(m+1):(2*m)]]
Z <- table$Data[s[(m*2+1):(3*m)]]
On Fri, 3 Sept 2021 at 13:31, AbouEl-Makarim Aboueissa
wrote:
>
> Dear All:
>
> How to
Let's simplify this to consider a single vector, such as
x <- c(1,1,1,2,2,3,3,3,3,4,5,5,5)
in which equal elements are in contiguous blocks.
> diff(x)
[1] 0 0 1 0 1 0 0 0 1 1 0 0
Of course, there could be gaps, or the sequence might be descending
instead of ascending. So
> diff(x) != 0
We are
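Marking the start of each block of equal values:

```r
x <- c(1, 1, 1, 2, 2, 3, 3, 3, 3, 4, 5, 5, 5)
starts <- c(TRUE, diff(x) != 0)  # TRUE wherever a new block begins
which(starts)                    # 1 4 6 10 11
```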
According to Wikipedia, this is the definition of Whipple's index:
"The index score is obtained by summing the number of persons in the
age range 23 and 62 inclusive, who report ages ending in 0 and 5,
dividing that sum by the total population between ages 23 and 62 years
inclusive, and
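A direct transcription of the ratio in that definition (hedged: the quoted definition is cut off above; the conventional index then scales this ratio, commonly by 5 or by 500, which is not shown here):

```r
# Proportion of 23..62-year-olds reporting ages ending in 0 or 5
whipple.ratio <- function(age) {
  in.range <- age >= 23 & age <= 62
  sum(age[in.range] %% 5 == 0) / sum(in.range)
}
whipple.ratio(c(25, 30, 31, 44))  # 0.5
```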
If you want to look at each digit, you should take a step back and
think about what the
Whipple index is actually doing. Basically, the model underlying the
Whipple index is
that Pr(age = xy) = Pr(age = x*)Pr(age = *y) if there is no age
heaping. Or rather,
since the age is restricted to 23..62
> x <- c(1,2,3) # a vector of numbers, such as snowfallsum
> (cx <- cumsum(x)) # a vector of cumulative sums.
1 3 6
> i <- 1 # The starting point.
> j <- 2 # The ending point.
> cx[j] - cx[i-1] # sum of x[i] + ... + x[j]
ERROR!
> cx <- c(0, cx) # Oops, we need this step.
> cx[j+1] - cx[i]
So
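Putting the corrected steps together:

```r
x <- c(1, 2, 3)
cx <- c(0, cumsum(x))   # pad with 0 so the i = 1 case works
i <- 1; j <- 2
cx[j + 1] - cx[i]       # sum of x[i] .. x[j], here 3
```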
Colour me confused.
if (...) { ... } else { ... }
is a control structure. It requires the test to evaluate to a single
logical value,
then it evaluates one choice completely and the other not at all.
It is special syntax.
ifelse(..., ..., ...) is not a control structure. It is not special
Do you really want the minimum?
It sounds as though your model is a*N(x1,s1) + (1-a)*N(x2,s2) where
you use mixtools to estimate
the parameters. Finding the derivative of that is fairly
straightforward calculus, and solving for the
derivative being zero gives you extrema (you want the one between
method more than
> function analysis...
>
> On Thu, Oct 14, 2021 at 9:06 AM Richard O'Keefe wrote:
> >
> > Do you really want the minimum?
> > It sounds as though your model is a*N(x1,s1) + (1-a)*N(x2,s2) where
> > you use mixtools to estimate
> > the paramete
It *sounds* as though you are trying to impute missing data.
There are better approaches than just plugging in means.
You might want to look into CALIBERrfimpute or missForest.
On Tue, 19 Oct 2021 at 01:39, Admire Tarisirayi Chirume
wrote:
>
> Good day colleagues. Below is a csv file attached
On a $150 second-hand laptop with a 0.9GB R library,
and a single-user installation of R so only one place to look
LIBRARY=$HOME/R/x86_64-pc-linux-gnu-library/4.0
cd $LIBRARY
echo "kbytes package"
du -sk * | sort -k1n
took 150 msec to report the disc space needed for every package.
That'
On Sun,
You want to read this:
http://adv-r.had.co.nz/Exceptions-Debugging.html
It describes all the ways that R can report a problem
and all the ways you can catch such a report while still in R.
Let me heartily recommend the whole site, or better yet, the book