Just to add my two cents to this: I've had similar issues with an R
package some time ago, which kept crashing somewhat unpredictably in the
Solaris tests.
Debugging was hard because it only happened on Solaris, but in the end it
turned out to be due to serious bugs in the code that only
>
> I don't understand. --
>
> 7%%2=1
> 9%%2=1
> 11%%2=1
>
> Why aren't these numbers printing?
>
> num<-0
> for (i in 1:100){
> num<-num+i
> if (num%%2 != 0)
> print(num)
> }
Your code tests the numbers
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, …
and correctly prints the odd ones among them.
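The accumulation in the loop can be seen more directly in vectorised form; a minimal sketch of the same computation (first ten terms only):

```r
# The loop accumulates 1, 1+2, 1+2+3, ...: the triangular numbers
num <- cumsum(1:10)
num                 # 1 3 6 10 15 21 28 36 45 55
num[num %% 2 != 0]  # only the odd ones: 1 3 15 21 45 55
```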
> On 30 Nov 2020, at 10:41, Steven Yen wrote:
>
> Thanks. I know, my point was on why I get something printed by simply doing
> line 1 below and at other occasions had to do line 2.
>
> me.probit(obj)
That means the return value of me.probit() has been marked as invisible, so it
won't be printed automatically at the console.
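A minimal sketch of the invisible() mechanism (f_vis and f_inv are made-up examples, not me.probit itself):

```r
f_vis <- function() 42             # return value auto-prints at the prompt
f_inv <- function() invisible(42)  # same value, but not auto-printed
x <- f_inv()    # the value is still there for assignment ...
print(f_inv())  # ... and an explicit print() forces the display
```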
> On 10 Aug 2020, at 18:36, Bert Gunter wrote:
>
> But this appears to be imprecise (it confused me, anyway). The usual sense
> of "matching" in regex's is "match the pattern somewhere in the string
> going forward." But in the perl lookahead construct it apparently must
> **exactly** match
fisher.test() computes exact confidence intervals for the odds ratio.
> On 6 Jul 2020, at 15:01, Luigi Marongiu wrote:
>
> Is there a simple function from some package that can also add a
> p-value to this test? Or how can I calculate the p-value on my own?
> On 28 Apr 2020, at 05:59, Yakov Goldberg wrote:
>
> I have not misspelled it.
> That's how repo is added for Ubuntu.
> The problem is: there is no Release file in that folder.
But there is:
https://cloud.r-project.org/bin/linux/ubuntu/bionic-cran35/Release
So the problem must be
> On 27 Aug 2017, at 18:18, Omar André Gonzáles Díaz
> wrote:
>
> 3.- If I make the 2 first letter optional with:
>
> ecommerce$sku <-
> gsub("(.*)([a-zA-Z]?{2}[0-9]{2}[a-zA-Z]{1,2}[0-9]{2,4})(.*)", "\\2",
> ecommerce$producto)
>
> "49MU6300" is captured, but again
> On 23 Aug 2017, at 07:45, Rolf Turner wrote:
>
> My reading of ?regex led me to believe that
>
>gsub("[:alpha:]","",x)
>
> should give the result that I want.
That's looking for any of the characters a, l, p, h, : .
What you meant to say was gsub("[[:alpha:]]", "", x), with the POSIX class
inside an outer pair of brackets.
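To illustrate the difference between the bare class name and the properly bracketed POSIX class (x here is just a made-up test string):

```r
x <- "abc123:def"
gsub("[:alpha:]", "", x)    # bracket set {a, l, p, h, ":"}, per the explanation above
gsub("[[:alpha:]]", "", x)  # all letters removed, leaving "123:"
```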
> On 4 May 2017, at 20:13, Murat Tasan wrote:
>
> The only semi-efficient method I've found around this is to `apply` across
> rows (more accurately through blocks of rows coerced into dense
> sub-matrices of P), but I'd like to try to remove the looping logic from my
>
> On 30 Mar 2017, at 23:37, Kankana Shukla wrote:
>
> I have searched for examples using R and Python together, and rpy2 seems
> like the way to go, but is there another (easier) way to do it?
Rpy2 would seem to be a very easy and convenient solution. What do you need
> On 30 Mar 2017, at 11:51, Eshi Vaz wrote:
>
> When trying to compute a Fisher's exact test using the fisher.test function
> from the gmodels package,
The problem seems to be with a different fisher.test() function from the
gmodels package, not with
> On 4 Nov 2016, at 17:35, peter dalgaard wrote:
>
> Notice though, that Bert loses (or _should_ lose) for larger values of N,
> since that method involves O(N^3) operations whereas the other two are
> O(N^2). I am a bit surprised that sweep() is so inefficient even at
> On 8 Sep 2016, at 16:25, David L Carlson wrote:
>
> Sampling without replacement treats the sample as the population for the
> purposes of estimating the outcomes at smaller sample sizes. Sampling with
> replacement (the same as bootstrapping) treats the sample as one
> On 7 Sep 2016, at 00:07, Nick Pardikes wrote:
>
> Is there any way to use rarecurve to resample a community (row) with
> replacement the same number of times for all 50 communities? With
> replacement is important because the communities differ greatly in their
> size
Dear Gonçalo,
thanks for the additional information – I think I get now what you're trying to
do.
> On 27 Jun 2016, at 06:35, Gonçalo Ferraz wrote:
>
> probabilities in lpvec should be <=1, but it is not. The sum is something on
> the order of 1.48e-13.”
> It is
Why do you want to do this? Why not simply use Fisher's exact test?
N <- 2178
N1 <- 165
N2 <- 331
J <- 97
ct <- rbind(c(J, N1-J), c(N2-J, N-N1-N2+J))
fisher.test(ct)
Background explanation:
- Your formula computes the log hypergeometric probability for a contingency
table as ct above, but
> On 13 Jan 2016, at 02:50, tomdhar...@gmail.com wrote:
>
> So my question is: How can the rows of a large sparse matrix be
> efficiently scaled?
If you're not picky about the particular storage format, the "wordspace" package
http://wordspace.r-forge.r-project.org/
has an efficient
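If you prefer to stay with the standard Matrix classes instead, row scaling can be done efficiently by multiplying with a sparse diagonal matrix; a sketch (not the wordspace implementation) with a small hand-made example:

```r
library(Matrix)  # recommended package, ships with R
M <- sparseMatrix(i = c(1, 1, 2, 3), j = c(1, 2, 2, 3),
                  x = c(3, 4, 2, 5), dims = c(3, 3))
norms <- sqrt(rowSums(M^2))                # Euclidean norm of each row: 5, 2, 5
M_scaled <- Diagonal(x = 1 / norms) %*% M  # each row now has unit norm
```

The product stays sparse throughout, so no dense intermediate is created.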
> On 24 Nov 2015, at 13:32, Duncan Murdoch wrote:
>
>> Perhaps it would make sense always to use the well-known standard XQuartz
>> paths on Mac and only consider other locations if explicitly asked for by
>> the user?
>
> If rgl is using non-standard features in
> On 23 Nov 2015, at 11:50, Duncan Murdoch wrote:
>
> The OSX binary version of rgl on CRAN is ancient. You'll need to reinstall
> it from source for a current one.
Since you bring up this point: any chance of getting Mac binaries from CRAN
again? Rgl is a
I don't think Benjamin should use the zipfR package just for
these functions [and even the zipfR package help page on these
can be read as saying so .. ]
Exactly. They are simply there because it's much easier to write and read code
with wrappers that parametrize the incomplete Beta and
If your null hypothesis is that the probability of a success is 0.6, i.e. H0:
p=0.6, then those
(a) Let's also assume we have an H1 that there are more than 6
successes
(b) Now let's assume we have an H1 that there are fewer than 6
successes
(1). My understanding would be that, if we
On 8 Dec 2014, at 21:21, apeshifter ch_k...@gmx.de wrote:
The last relic of the afore-mentioned for-loop that goes through all the
word pairs and tries to calculate some statistics on them is the following
line of code:
typefreq.after1[i] <- length(unique(word2[which(word1==word1[i])]))
(where
You probably told R to write out the file as a single long line with fields
separated alternately by 380 TABs and one newline; that's what the ncol
argument does (write is just a small wrapper around cat()).
cat() doesn't print lines that are longer than 2 GiB, so it will insert an
extra \n
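A small sketch of how write()'s ncolumns argument controls the line breaks (the temp file is just for illustration):

```r
m <- matrix(1:12, nrow = 3)                # 3 x 4 matrix
f <- tempfile()
write(t(m), file = f, ncolumns = ncol(m))  # transpose: write() emits column-major
readLines(f)                               # three lines of four fields each
```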
On 24 Apr 2014, at 23:56, Greg Snow 538...@gmail.com wrote:
library(Matrix)
adjM <- Matrix(0, nrow=10, ncol=10)
locs <- cbind( sample(1:10), sample(1:10) )
vals <- rnorm(10)
adjM[ locs ] <- vals
... and once you've got your data in this format, why not construct the sparse
matrix
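For instance, the triplet form can go straight into sparseMatrix(); a sketch reusing the same locs/vals setup (set.seed is added only to make the random example reproducible):

```r
library(Matrix)
set.seed(42)
locs <- cbind(sample(1:10), sample(1:10))
vals <- rnorm(10)
adjM <- sparseMatrix(i = locs[, 1], j = locs[, 2], x = vals, dims = c(10, 10))
all(adjM[locs] == vals)  # same entries, without going through a dense matrix
```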
Your report sounds somewhat similar to problems I encountered with OpenBLAS on
Ubuntu Linux (which is a maintained version of GotoBLAS; I couldn't get the
latter to compile properly).
OpenBLAS uses OpenMP for parallelization. Once linked into R, other
OpenMP-based code would only use a single
Sounds like you want a 95% binomial confidence interval:
binom.test(N, P)
will compute this for you, and you can get the bounds directly with
binom.test(N, P)$conf.int
Actually, binom.test computes a two-sided confidence interval, which
corresponds roughly to 2.5 and 97.5
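A concrete sketch with made-up numbers (7 successes in 20 trials):

```r
bt <- binom.test(7, 20)  # two-sided test of H0: p = 0.5 by default
bt$conf.int              # exact (Clopper-Pearson) 95% interval
binom.test(7, 20, conf.level = 0.90)$conf.int  # other levels work too
```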
On 3 Oct 2013, at 22:39, Monaghan, David dmonag...@gc.cuny.edu wrote:
I was wondering, has anyone has encountered an R package that performs random
projection/random mapping? RP is a procedure that is akin to Principal
Components Analysis in that it accomplishes dimensionality reduction,
On 8 Apr 2013, at 23:21, Andy Cooper andy_coope...@yahoo.co.uk wrote:
So, no one has direct experience running irlba on a data matrix as large as
500,000 x 1,000 or larger?
I haven't used irlba in production code, but ran a few benchmarks on much
smaller matrices. My impression was (also
sh: de: command not found
ERREUR : loading failed for ‘i386’, ‘x86_64’
* removing ‘/Users/marcgirondot/Documents/Espace de travail
R/Phenology/Source fit/Phenology Package/phenology.Rcheck/phenology’
Looks to me like it might be an issue with whitespace in the directory path.
In
On 12 Oct 2012, at 09:46, Purna chander wrote:
4) scenario4:
x <- read.table("query.vec")
v <- read.table("query.vec2")
v <- as.matrix(v)
d <- dist(rbind(v, x), method="manhattan")
m <- as.matrix(d)
m2 <- m[1:nrow(v), (nrow(v)+1):nrow(x)]
print(m2[1, 1:10])
time taken for running the code:
real 0m0.445s
It so happens I have been looking at very similar changes, as well as
adding multi-threading support for dist(); these should make it into
R-devel later this summer.
That's good to hear! I was thinking about giving OpenMP a try in my package,
but am not sure whether it's worth the overhead
I'm working on analyzing a large data set; let's assume that
dim(Data)=c(1000,8700). I want to calculate the canberra distance
between the columns of this matrix, and using a toy example ('test' is
a matrix filled with random numbers 0-1):
system.time(d <- as.matrix(dist(t(test), method =
Oh, what is this world coming to when you can't count on laziness to
be lazy. ;) I should probably stop reading about Haskell and their
lazy way of doing things.
Haskell would still have to check an infinite number of potential files on
your hard disk, because it can't know when it's seen
Is anyone aware of a fast way of doing fisher's exact test for a series of 2
x 2 tables in R? The fisher.test is really slow if n1=1000 and n2 = 1000.
If you don't require exact two-sided p-values (determined according to a
likelihood criterion as in fisher.test), you can use the vectorised
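The sentence is cut off above; presumably the vectorised hypergeometric functions were meant. One-sided p-values for many 2 x 2 tables can be computed in a single phyper() call, e.g. (cell counts are made up):

```r
# Cell counts (a, b, cc, d) for two hypothetical 2 x 2 tables
a <- c(10, 3); b <- c(2, 8); cc <- c(4, 7); d <- c(20, 30)
# One-sided ("greater") Fisher p-values, vectorised over all tables:
p <- phyper(a - 1, a + b, cc + d, a + cc, lower.tail = FALSE)
# Cross-check the first table against fisher.test():
fisher.test(matrix(c(10, 4, 2, 20), 2), alternative = "greater")$p.value
```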
), but vecLib BLAS beats CUDA by a
factor of 2.
Kudos to the gputools developers: despite what the README says, the package
compiles out of the box on Mac OS X 10.6, 64-bit R 2.12.1, with CUDA release
3.2. Thanks for this convenient package!
Best regards,
Stefan Evert
[ stefan.ev...@uos.de
If I understood you correctly, you have this matrix of indicator variables for
occurrences of terms in documents:
A <- matrix(c(1,1,0,0,1,1,1,0,1,1,1,0,0,0,1), nrow=3, byrow=TRUE,
dimnames=list(paste("doc",1:3), paste("term",1:5)))
A
and want to determine co-occurrence counts for pairs of
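The co-occurrence counts then drop out of a single matrix product; with A as above, crossprod(A) gives the term-by-term counts, with document frequencies on the diagonal:

```r
A <- matrix(c(1,1,0,0,1, 1,1,0,1,1, 1,0,0,0,1), nrow = 3, byrow = TRUE,
            dimnames = list(paste("doc", 1:3), paste("term", 1:5)))
C <- crossprod(A)      # t(A) %*% A: C[i, j] = no. of docs containing terms i and j
C["term 1", "term 2"]  # terms 1 and 2 co-occur in 2 documents
```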
Hi!
Using R, I plotted a log-log plot of the frequencies in the Brown Corpus
using
plot(sort(file.tfl$f, decreasing=TRUE), xlab="rank", ylab="frequency",
log="xy")
However, I would also like to add lines showing the curves for a Zipfian
distribution and for Zipf-Mandelbrot.
It's fairly
svn rev 52157
language R
version.string R version 2.11.1 (2010-05-31)
but have been looking at the current R-devel source code, so I suspect my
problem won't just go away with the next release.
On 24 Aug 2010, at 02:20, Ben Bolker wrote:
Please, is there an R function /package that allows for 3D stairway plots
like the attached one ?
In addition, how can I overlay a parametric grid plot??
Not exactly, that I know of, but maybe you can adapt
library(rgl)
demo(hist3d)
to
On 30 Jul 2010, at 19:22, Ian Bentley wrote:
I've got two persp plots with Identical X and Y's, and I'd like to plot them
on the same graph, so that it is obvious where one plot is above the other.
I can't find any mention of this anywhere. Do I need to use wireframe?
You can do it with
On 16 Dec 2009, at 21:40, Ravi Varadhan wrote:
?pbeta
And the zipfR package wraps these in terms of the usual terminology
for incomplete/regularised upper/lower Beta functions (see ?Rbeta
there), for people like me who can't get their head around the
equivalence between the Beta
Indeed, it seems that the author of zipfR has neither been aware
that the (scaled / aka regularized) incomplete gamma (and beta,
for that matter!) functions have been part of R all along.
...
... well , inspecting his code reveals he did know it.
But why then on earth provide all the
I would be very grateful if you could help me with:
Given the regularized gamma function
Reg = int_0^r x^(k-1) e^(-x) dx / int_0^Inf x^(k-1) e^(-x) dx, 0 < r < Inf
(which is the ratio of the incomplete gamma function to the gamma
function), does anyone know of a package in R that would
Sure, badly written R code does not perform as well as well written
python code or C code. On the other hand badly written python code
does not perform as well as well written R code.
What happens when you try one of these :
sum <- sum( 1:N )
R runs out of memory and crashes. :-) I didn't
My hunch is that Python and R run at about the same speed, and both
use C libraries for speedups (Python primarily via the numpy package).
That's not necessarily true. There can be enormous differences
between interpreted languages, and R appears to be a particularly slow
one (which
I would like to have the output sorted in descending order by height
or frequency.
But when I do the following:
rev(table(xx))
xx
T G C A
15 10 12 13
Err, I guess you meant to write
sort(table(xx))
here?
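For descending order, sort() takes a decreasing argument; a sketch with made-up base counts matching the table above:

```r
xx <- rep(c("T", "G", "C", "A"), times = c(15, 10, 12, 13))
tab <- sort(table(xx), decreasing = TRUE)
tab  # T (15) first, G (10) last
```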
Cheers,
Stefan
Agresti, Alan (1992). A Survey of Exact Inference for Contingency Tables.
Statistical Science 7(1), 131-153. (Source: JSTOR)
On 23 Aug 2009, at 20:26, Uwe Ligges wrote:
Since it looks like nobody answered so far:
Your code is not reproducible, we do not have rfc, y, zVals nor
NoCols.
It's much easier to reproduce: just type in the first example from the
image help page
x <- y <- seq(-4*pi, 4*pi, len=27)
r <- sqrt(outer(x^2, y^2, "+"))
image(z = z <- cos(r^2)*exp(-r/6), col=gray((0:32)/32))
then
(see http://www.rforge.net/Rserve/)
.
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org
the little r comes from littler [1]. it doesn't claim to be larger
than
it is.
[1] http://dirk.eddelbuettel.com/code/littler.html
Unless you're working on a Mac or Windows (if I'm not mistaken about
case handling there), where it will just overwrite the standard R
interpreter. :-(
Dear Maria,
this is quite probably my fault, in some way. The UCS software has
been abandoned a bit recently, as I'm planning to rewrite it into a
pure R package. On the other hand, I still use the software
occasionally, so it should work with recent R versions.
I am trying to install
On 26 Feb 2009, at 14:14, Max Kuhn wrote:
Do you know about any good reference that discusses kappa for
classification and maybe CI for kappa???
You might also want to take a look at this survey article on kappa and
its alternatives:
Artstein, Ron and Poesio, Massimo (2008). Survey
can you please stop repeating this nonsense? I don't think anybody
ever claimed that vectors can be considered lists.
yes, it is nonsense. yes, there is one person who repeatedly made
this
claim. please read the archives; specifically, [1]. note this
statement:
Note that any
I ran into a similar issue with a simple benchmark the other day,
where a plain loop in Lua was faster than vectorised code in R ...
hmm, would you be saying that r's vectorised performance is overhyped?
or is it just that non-vectorised code in r is slow?
What I meant, I guess, was
system.time(f.rep(10, 100))
user system elapsed
4.109 0.028 4.172
system.time(f.pat(10, 100))
user system elapsed
1.580 0.134 1.739
PS: Don't feed trolls who say that Lua is better than R
to know which of the numbers are large in order to
do this right ...
when reading R's fullrefman.pdf (available from
http://cran.r-project.org/doc/manuals/fullrefman.pdf) in Mac OS X's
preview.app (version 4.1, on Mac OS 10.5.x), if i try to do a keyword
search within the document, the indexing step freezes about 2/3 the
way through the progress bar. this
it
even smaller if the fitted function is much larger than the actual y
values, so all differences are negative).
You probably wanted to minimise the squared errors:
sum((y - b/(2*pi*a^2*gamma(2/b))*exp(-(x/a)^b))^2)
our zipfR package available from CRAN. The tutorial and
background materials at
http://zipfR.R-Forge.R-project.org/
should help you to get started and will also explain how to calculate
the coefficients of Zipf's law from estimates of the model parameters.
Well, I suppose he might get away with it as long as said *Brian
Ripley* doesn't read this list ... ;-)
Sorry, couldn't resist,
Stefan
On 28 Dec 2008, at 21:07, stephen sefick wrote:
Is that an appropriate request?
On Sun, Dec 28, 2008 at 2:14 PM, Marcus Vinicius mvi...@gmail.com
wrote:
Hi Stefan! :-)
From tools where negative lookbehind can involve variable lengths,
one
would think this would work:
grep("(?!(?:\\1|^))(.)\\1{1,}$", vec, perl=TRUE)
But then R doesn't like it that much ...
It's really the PCRE library that doesn't like your regexp, not R.
The problem is
But is there a one-line grep thingy to do this?
Can't think of a one-liner, but a three-line solution you can easily
enough wrap in a small function:
vec <- c("", "baaa", "bbaa", "bbba", "baamm", "aa")
idx.1 <- grep("(.)\\1$", vec)
idx.2 <- grep("^(.)\\1*$", vec)
vec[setdiff(idx.1, idx.2)]
Oops, my bad,
I'm sorry but I don't quite understand what not running solve() in
this process means. I updated the code and it does show that the results
from clusterApply() are identical with the results from lapply(). Could
you please explain more about this?
The point is that a parallel processing framework
How about this?
as.numeric(factor(unlist(strsplit(ECX, "")), levels=LETTERS))
(x) == 0
do the trick? Or am I missing something?
The relevant command-line options are:
pstoimg -type png -depth 24 -antialias -scale 2 plot.eps
Use the -scale option to generate the desired bitmap size.
in thinking that R
only works with double precision?)
According to the nVidia Web site, the Tesla architecture is _ten
times_ slower for double-precision operations than for single-
precision, which makes it seem far less amazing than at first sight.
On 20 Oct 2008, at 22:57, (Ted Harding) wrote:
I'm wondering if there's a compact way to achieve the
following. The dream is that one could write
rep(c(0,1),times=c(3,4,5,6))
which would produce
# [1] 0 0 0 1 1 1 1 0 0 0 0 0 1 1 1 1 1 1
in effect recycling x through 'times'.
rep2 <-
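The definition was cut off above; a sketch of one way such a rep2() could work (recycle x to the length of times, then let rep() do the expansion):

```r
rep2 <- function(x, times) {
  # recycle x along 'times', then expand each element times[i] times
  rep(rep(x, length.out = length(times)), times = times)
}
rep2(c(0, 1), times = c(3, 4, 5, 6))
# [1] 0 0 0 1 1 1 1 0 0 0 0 0 1 1 1 1 1 1
```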
will usually mess up your bounding box and revert to a
standard page size.
into the same trap more than once, and
painfully, myself (often in connection with MySQL).
Best wishes, and with a tiny grain of salt,
user system elapsed
0.508 0.552 1.054
I suppose that's the fastest you can get because of R's copy-on-write
semantics (if I understand the R internals correctly, it's always a
bit magical to me ...)
barplot(M, beside=TRUE, legend = TRUE)
And you may want to consider using the 'cut' function. In your case,
something like
veg_mean <- cut(veg_mean, breaks=c(0,.1,1,2,5,10,25,50,75,95,100),
right=FALSE)
should do the trick (see ?cut for more options).
On 28 Jul 2008, at 19:52, Henrik Bengtsson wrote:
Use ''