It's not clear whether your numbers are tab- or space-separated; I will
assume space-separated. My low-tech (and not R) solution would be to
dump the output into a text file (call it data.in), then run a sed
command to first strip the two initial spaces from each line, then
replace initial spaces with
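The command itself is cut off above, so the following is only a guess at its shape: a minimal sed cleanup that strips leading spaces and squeezes the remaining separators (the sample lines are stand-ins for the model output).

```shell
# Stand-in for the model output: two leading spaces per line, as described
printf '  9 164 78 0 0 32.8 0.148 45\n  4 134 72 0 0 23.8 0.277 60\n' > data.in
# Strip leading spaces, then squeeze runs of spaces down to a single space
sed -E 's/^ +//; s/ +/ /g' data.in > data.clean
cat data.clean
```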
What would be the sophisticated R method for reading the data shown below
into a list? The data is output from a numerical model. Pasting the
second block of example R commands (at the end of the message) results in a
failure ("Error in scan...line 2 did not have 6 elements"). I no doubt
could
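The scan error arises because read.table-style functions insist on every row having the same number of fields. One sketch of a workaround for ragged rows (using rows borrowed from the data shown further down in this digest; for a real file, replace `txt` with `readLines("yourfile")`) is to split the lines yourself:

```r
# Rows of unequal length, as a numerical model might produce
txt <- c("9 164 78 0 0 32.8 0.148 45",
         "4 134 72 0 0 23.8 0.277 60",
         "5 166 72 19")
# One list element per row; elements may have different lengths,
# which is exactly what scan()/read.table() refuse to accept
dat <- lapply(strsplit(trimws(txt), "[[:space:]]+"), as.numeric)
lengths(dat)  # 8 8 4
```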
Given these two kinds of data set, where data set 2 was obtained through the
weighted Euclidean formula, can we estimate the weight parameter for each
variable in R using the steepest descent method? Thanks very much.
data set 1:
9 164 78 0 0 32.8 0.148 45
4 134 72 0 0 23.8 0.277 60
5 166 72 19
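The post is truncated, but a generic sketch of the idea could be plain steepest descent on a least-squares loss. Everything below is invented for illustration: `X` plays the role of data set 1, `D` the pairwise weighted Euclidean distances of data set 2, and the true weights are of course unknown in the real problem.

```r
# Toy stand-in for the real data
set.seed(1)
X <- matrix(rnorm(20), 5, 4)
w_true <- c(1, 2, 0.5, 1.5)
# Weighted Euclidean distances: sqrt(sum_k w_k * (x_ik - x_jk)^2)
wdist <- function(X, w) as.matrix(dist(sweep(X, 2, sqrt(w), `*`)))
D <- wdist(X, w_true)

# Least-squares loss in the candidate weights (kept non-negative)
loss <- function(w) sum((wdist(X, pmax(w, 0)) - D)^2)

# Central-difference numerical gradient, base R only
grad <- function(w, eps = 1e-6) {
  sapply(seq_along(w), function(k) {
    e <- replace(numeric(length(w)), k, eps)
    (loss(w + e) - loss(w - e)) / (2 * eps)
  })
}

# Steepest descent with step halving so the loss never increases
w <- rep(1, ncol(X))
step <- 0.01
for (it in 1:500) {
  g <- grad(w)
  while (loss(w - step * g) > loss(w)) step <- step / 2
  w <- w - step * g
}
round(w, 2)  # should move toward w_true
```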
Thanks for the reply.
> I suppose you can set up a contrast matrix that would make the intercept
> equal to the overall mean, but the definition would depend on the group sizes.
Any suggestion on how to set up one?
Erlis
__
R-help@r-project.org
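A sketch of one such construction (the group sizes n = c(3, 2, 2) are borrowed from the unbalanced-design post elsewhere in this digest, so treat them as an assumption): choose contrast columns orthogonal to the vector of group sizes, which forces the intercept to be the size-weighted overall mean.

```r
n <- c(3, 2, 2)            # assumed group sizes
K <- length(n)
C <- matrix(0, K, K - 1)
for (j in seq_len(K - 1)) {
  C[j, j] <- 1
  C[K, j] <- -n[j] / n[K]  # each column satisfies sum(n * C[, j]) == 0
}
x <- factor(rep(seq_len(K), times = n))
contrasts(x) <- C
# In lm(y ~ x) with these contrasts, the intercept estimate equals mean(y),
# the observation-weighted overall mean, not the mean of the group means
```

Because every column of C is orthogonal to n, averaging the cell means with weights n/N wipes out all the contrast coefficients and leaves only the intercept.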
I don't seem able to get my code to execute in SAS. Specifically, only the
first line in the region seems to be transmitted. Could the fact that it is in
a file that does not have a .sas extension be a factor? Also, I have problems
exiting. BTW, http://ess.r-project.org/Manual/news.html doesn't
On Thu, 10 Nov 2016, danilo.car...@uniparthenope.it wrote:
Thank you for your hints; now the goodness-of-fit test provides me good
results, but surprisingly for me the three-component model turns out to be
worse than the two-component one (indeed, I focused on the three-component
mixture because the two-component one exhibits a low p-value).
> On 10 Nov 2016, at 16:30 , erlis ruli wrote:
>
> Here comes the question. Is it possible to modify Helmert contrasts in order
> to use weighted means instead of means of means?
I think not. Not in general. I suppose you can set up a contrast matrix that
would make the intercept equal to the overall mean, but the definition would
depend on the group sizes.
Hello,
Just read the help page ?paste, in particular the argument collapse.
x <- LETTERS[1:9]
paste(x, collapse = "")  # "ABCDEFGHI"
Hope this helps,
Rui Barradas
On 10-11-2016 17:14, Ferri Leberl wrote:
Dear All,
If I have a vector V consisting of 9 strings — how can I paste them into a
single string without programming a loop?
Dear All,
If I have a vector V consisting of 9 strings — how can I paste them into a
single string without programming a loop?
Thank you in advance!
Hi all!
Suppose that we have a response y and an unbalanced treatment x with three
levels or groups.
The treatment is unbalanced by design: the first group has 3 replications
and the other two have 2 replications each.
For instance, in R the data might look like this:
y =
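The message is cut off at `y =`; a hypothetical version of such data (all response values invented) with the stated 3/2/2 replication pattern could be:

```r
# Invented responses; group 1 has 3 replications, groups 2 and 3 have 2 each
y <- c(4.1, 3.8, 4.5, 5.2, 5.0, 6.1, 6.3)
x <- factor(rep(c("g1", "g2", "g3"), times = c(3, 2, 2)))
table(x)  # g1: 3, g2: 2, g3: 2
```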
I think you answered your own question. For loops are not a boogeyman... poor
memory management is.
Algorithms that are sensitive to evaluation sequence are often not very
re-usable, and certainly not parallelizable. If you have a specific algorithm
in mind, there may be some advice we can
You are mistaken. apply() is *not* vectorized. It is a disguised loop.
For true vectorization at the C level, the answer must be no, as the
whole point is to treat the argument as a whole object and hide the
iterative details.
However, as you indicated, you can always manually randomize the
nBuyMat <- data.frame(matrix(rnorm(28), 7, 4))
nBuyMat
nBuy <- nrow(nBuyMat)
sample(1:nBuy, nBuy, replace = FALSE)
sample(1:nBuy)
sample(nBuy)  # these three calls are equivalent ways to permute 1:nBuy
?sample
apply(nBuyMat[sample(1:nBuy, nBuy, replace = FALSE), ], 1, sum)
apply(nBuyMat[sample(nBuy), ], 1, sum)
The defaults for
Is there a way to use vectorization where the elements are evaluated in a
random order?
For instance, if the code is to be run on each row of a matrix with nBuy rows,
the following will do the job
for (b in sample(1:nBuy, nBuy, replace = FALSE)) {
  # ... process row b of nBuyMat ...
}
but
apply(nBuyMat, 1, function(x) ...)  # some per-row function
will be