On 05/02/2014 05:44, Dinesh wrote:
Hi,
I am trying to build R-2.15.3 and get the following error in
./configure --with-readline --enable-R-shlib
snip
checking for dummy main to link with Fortran 77 libraries... unknown
configure: error: in `/home/dinesh/R-build/R-2.15.3':
configure: error:
On Wed, Feb 5, 2014 at 6:32 AM, Liviu Andronic landronim...@gmail.com wrote:
So in the end my proposal is not necessarily for r-help to go to SE,
but more for R to have its own QA forum/wiki for helping R users.
This could perfectly take the form of setting up its own open-source
And the winner is ... (drum roll) ... DAVID WINSEMIUS!!!
Thank you hugely David. You have completely solved my problem.
The last bit with format.hexmode() so as to get 00A3 rather than
just a3 was actually unnecessary; I could've lived with a3. But it
was a nice bit of polish.
Hi David,
In CSV RFC 4180 format, if a ' or " character is present it is escaped,
so a CSV parser can distinguish it properly.
I will try read.fwf, because with readLines I am facing the same issue.
Thanks Regards,
D V Kiran Kumar.
On Wed, Feb 5, 2014 at 3:14 AM, David
On 14-02-04 11:56 PM, David Winsemius wrote:
On Feb 4, 2014, at 4:57 PM, Rolf Turner wrote:
If I have a character such as £ stored in a object called xxx, how can I obtain the
hex code representation of this character? In this case I know that the hex code is \u00A3, but
if I didn't, how
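For reference, a minimal sketch (not necessarily David's actual solution) of recovering a character's code point as zero-padded hex, using utf8ToInt() plus sprintf() or format.hexmode():

```r
x <- "£"
sprintf("%04X", utf8ToInt(x))                 # zero-padded, upper case: "00A3"
format(as.hexmode(utf8ToInt(x)), width = 4)   # via format.hexmode: "00a3"
```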
On Feb 03, 2014; 7:24am, hadley wickham wrote:
Or you're not running a postgres db on your local machine that
accepts a connection with username Administrator and no password? I
doubt that's the error you would see if RPostgreSQL hadn't found
libpq.
I have learned enough since this
Dear recipient
I am using the r-program to do calculations with my data.
I would like to ask you how to calculate the expected value with R
using this data? I only get an error message in response. Attached is
my data.
Thank you for your kind answer!
Henrik Alanko
Faculty of statistics
Greg,
The deviance being chi^2 distributed on the residual degrees of freedom
works in some cases (mostly where the response itself can be reasonably
approximated as Gaussian), but rather poorly in others (notably low
count data). This is what you are seeing in your simulations - in the
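Simon's point can be checked directly; a small simulation sketch (made-up low-count Poisson data, not Greg's original simulation):

```r
# Mean residual deviance for low-count Poisson data sits well below the
# residual degrees of freedom, so the chi-squared approximation fails.
set.seed(1)
n <- 50
devs <- replicate(500, {
  y <- rpois(n, lambda = 0.2)               # low counts
  deviance(glm(y ~ 1, family = poisson))
})
mean(devs)   # far below the residual df of n - 1 = 49
```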
Hi Kiran,
Please post a reproducible example, either by pasting a sample of
comma-separated values into your message, or posting a .csv file
somewhere we can download it. Without an example all we can do is guess
what your problem might be.
Best,
Ista
On Wed, Feb 5, 2014 at 5:10 AM, Venkata
On 05-02-2014, at 10:46, Henrik Alanko henrik.ala...@helsinki.fi wrote:
Dear recipient
I am using the r-program to do calculations with my data.
I would like to ask you how to calculate the expected value with R using this
data? I only get an error message in response. Attached is my
I am having some difficulties with effects plots from the effects package.
Your problem is probably not xlim, which the effects plots should accept; it's
more likely that you're trying to use base graphics to add symbols to a lattice
plot - effects uses lattice, not base graphics. The
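The distinction can be illustrated with a plain lattice plot (a sketch; the effects package is not loaded here, but it draws lattice plots, so the same applies):

```r
library(lattice)
p <- xyplot(mpg ~ wt, data = mtcars)
print(p)                       # a lattice plot: base points()/text() won't land on it
trellis.focus("panel", 1, 1)   # the lattice way to add to a panel
panel.points(3, 20, pch = 19)
trellis.unfocus()
```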
Hi
try getwd()
Is your file in this directory?
Then you can change directory by menu item file/change directory
or you can try file.choose() for interactive selection of files.
you can also check loaded objects by
ls()
Regards
Petr
-Original Message-
From:
On 05/02/2014 1:32 AM, Liviu Andronic wrote:
Dear all,
On Sun, Feb 2, 2014 at 10:49 PM, Liviu Andronic landronim...@gmail.com wrote:
It seems that StackOverflow is officially proposing user-generated
content for download/mirroring:
Thanks Peter
On Wed, Jan 15, 2014 at 7:23 PM, peter dalgaard pda...@gmail.com wrote:
The data have no resemblance to what prop.trend.test expects (counts and
totals, optionally group scores).
The prop.trend.test() function tests for trend in proportions. Were you
attempting a rank
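For reference, the shape of input prop.trend.test() expects: event counts and totals per ordered group (the numbers below are made up):

```r
events <- c(10, 15, 25)           # successes in each ordered group
totals <- c(50, 50, 50)           # group sizes
prop.trend.test(events, totals)   # default group scores are seq_along(events)
```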
Hi Greg,
Yes, this sounds right - with quasipoisson gam uses `extended
quasi-likelihood' (see McCullagh and Nelder's GLM book) to allow
estimation of the scale parameter along with the smoothing parameters
via (RE)ML, and it could well be that this gives a biased scale estimate
with low
thanks Simon
also, it appears at least with ML that the default scale estimate with
quasipoisson (i.e. using dev) is the scale which minimises the ML value of
the fitted model. So it is the best model but doesn't actually give the
correct mean-variance relation. Is that right?
thanks again
Greg
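A rough sketch of the setup under discussion, with simulated overdispersed counts (not Greg's data): a quasipoisson gam() fit in which the scale parameter is estimated along with the smoothing parameters.

```r
library(mgcv)
set.seed(1)
n <- 200
x <- runif(n)
y <- rnbinom(n, mu = exp(1 + x), size = 2)            # overdispersed counts
m <- gam(y ~ s(x), family = quasipoisson, method = "REML")
m$scale                                               # estimated scale parameter
```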
Completing the reverse engineering effort is the principal barrier to fully
incorporating the sas7bdat file format. Of course, SAS may change the
format specification at any time, and without our knowledge. The sas7bdat
package is a repository for the results of our (myself, Clint Cummins, and
On 05/02/2014 12:56, Greg Dropkin wrote:
thanks Simon
also, it appears at least with ML that the default scale estimate with
quasipoisson (i.e. using dev) is the scale which minimises the ML value of
the fitted model. So it is the best model but doesn't actually give the
correct mean-variance
Hi all,
I have performed a binomial test to verify if the number of males in a study is
significantly different from a null hypothesis (say, H0:p of being a male= 0.5).
For instance:
binom.test(10, 30, p = 0.5, alternative = "two.sided", conf.level = 0.95)
Exact binomial test
data: 10 and 30
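Completing the truncated call for reference (same numbers as above):

```r
bt <- binom.test(10, 30, p = 0.5, alternative = "two.sided", conf.level = 0.95)
bt$p.value   # about 0.099, so not significant at the 5% level
```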
Hi,
Try:
data.frame(table(dat$Date))
A.K.
Hi guys, I have some 20,000 observations like this:
Date
2014-01-01 00:00:00
2014-01-02 11:00:00
2014-01-02 22:00:00
2014-01-03 03:00:00
I want to perform an hourly count (e.g. the frequency occur at
each hour for a specific
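One way to get counts per hour of day from such timestamps, a sketch using only the four sample rows from the post (column name Date as posted):

```r
dat <- data.frame(Date = c("2014-01-01 00:00:00", "2014-01-02 11:00:00",
                           "2014-01-02 22:00:00", "2014-01-03 03:00:00"))
hours <- format(as.POSIXct(dat$Date), "%H")   # extract the hour of day
table(hours)                                  # frequency at each hour
```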
Suppose I'm creating a function that sets default ylab and xlab behaviors:
plotx = function(x, y, ...){
plot(x, y, ylab = "", xlab = "", ...)
}
The problem is, on occasion, I actually want to override the defaults
in my function.
Make xlab and ylab arguments to your function.
plotx2 <-
Suppose I'm creating a function that sets default ylab and xlab behaviors:
plotx = function(x, y, ...){
plot(x, y, ylab = "", xlab = "", ...)
}
The problem is, on occasion, I actually want to override the defaults
in my function. I would like to do the following:
plotx(1:100, 1:100, xlab = "I Don't
That's a good idea, but I'm hoping there's another way. The actual
function that I'm using sets LOTS of default behaviors and I try to
minimize the number of arguments I have.
On Wed, Feb 5, 2014 at 10:19 AM, William Dunlap wdun...@tibco.com wrote:
Suppose I'm creating a function that sets
On Feb 5, 2014, at 10:05 AM, Simone misen...@hotmail.com wrote:
Hi all,
I have performed a binomial test to verify if the number of males in a study
is significantly different from a null hypothesis (say, H0:p of being a male=
0.5).
For instance:
binom.test(10, 30, p=0.5,
Hi,
Try:
plotx <- function(x, y, ...){
labels <- list(xlab = "x", ylab = "y")
args <- modifyList(labels, list(x = x, y = y, ...))
do.call(plot, args)
}
plotx(1:100, 1:100, xlab = "I Don't Work!")
A.K.
On Wednesday, February 5, 2014 11:14 AM, Dustin Fife fife.dus...@gmail.com
wrote:
Suppose I'm creating a function
Perfect. Thanks!
On Wed, Feb 5, 2014 at 10:35 AM, arun smartpink...@yahoo.com wrote:
Hi,
Try:
plotx <- function(x, y, ...){
labels <- list(xlab = "x", ylab = "y")
args <- modifyList(labels, list(x = x, y = y, ...))
do.call(plot, args)
}
plotx(1:100, 1:100, xlab = "I Don't Work!")
A.K.
On Wednesday,
I'm not sure if I understand exactly what you mean, but if you want
separate panels, one above the other, with a common time span but each
with their own Y scale, then the function tfplot() in package tfplot
does this. There are examples in the User Guide at
On Feb 5, 2014, at 12:53 AM, Rolf Turner wrote:
And the winner is ... (drum roll) ... DAVID WINSEMIUS!!!
Thank you hugely David. You have completely solved my problem.
The last bit with format.hexmode() so as to get 00A3 rather than
just a3 was actually unnecessary; I could've
Hi ,
I read your post and followed your instructions but still couldn't install
RMySQL by getting the message: installation of package 'RMySQL' had
non-zero exit status
Can you please walk me through the process from the installation part (version,
etc) to all the paths that need to be
hi Simon
yes, I also got the right shape of the mean-variance relation but the
wrong estimate of the parameter.
thanks very much
Greg
Hi Greg,
Yes, this sounds right - with quasipoisson gam uses `extended
quasi-likelihood' (see McCullagh and Nelder's GLM book) to allow
estimation of the
Thank you David.
I tried the table() function with two columns
table(R_format$Client.Mnemonic,R_format$Tasks)
and got something like the one I have attached in this word file
Task_Summary_for_Clients.docx
http://r.789695.n4.nabble.com/file/n4684797/Task_Summary_for_Clients.docx
I need to
Dear Prof Ripley
yes, but if the estimate is biased it's good to know what the bias is.
The problem illustrated in the simulations has nothing to do with ML,
though, as the default fitting method in mgcv when scale is unknown is
GCV and that is what was used, by default, here.
The point about
On Feb 5, 2014, at 4:17 AM, S Ellison wrote:
I am having some difficulties with effects plots from the effects package.
Your problem is probably not xlim, which the effects plots should accept;
it's more likely that you're trying to use base graphics to add symbols to a
lattice plot -
On Feb 5, 2014, at 2:10 AM, Venkata Kirankumar wrote:
Hi David,
In CSV RFC 4180 format, if a ' or " character is present it is escaped,
so a CSV parser can distinguish it properly.
I will try read.fwf, because with readLines I am facing the same issue.
Thanks
Hi,
Try ?duplicated()
apply(x, 2, function(x) {x[duplicated(x)] <- ""; x})
A.K.
Hi all,
I have a dataset of around a thousand column and a few thousands
of rows. I'm trying to get all the possible combinations (without
repetition) of the data columns and process them in parallel. Here's a
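A sketch of enumerating column pairs without repetition via combn() (toy data; the poster's thousand-column data set is not shown). Each pair could be farmed out with parallel::mclapply instead of apply:

```r
x <- data.frame(a = 1:3, b = 4:6, c = 7:9)
pairs <- combn(ncol(x), 2)     # one column per pair of column indices
# process each pair; the per-pair sum stands in for the real computation
res <- apply(pairs, 2, function(ix) sum(x[[ix[1]]], x[[ix[2]]]))
res
```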
I don't think you answered the OP's query, although I confess that I
am not so sure I understand it either (see below). In any case, I
believe the R-level loop (i.e. apply()) is unnecessary. There is a
unique() (and a duplicated()) method for data frames, so simply
unique(x)
returns a data frame
Hello,
Please try including panel=panel.levelplot.raster as an option of levelplot:
svg("SVG.svg")
levelplot(prt, main = "SVG", xlab = NULL, ylab = NULL,
col.regions = rgb.palette(800), cuts = 100, at = seq(0, 1.0, 0.01),
panel = panel.levelplot.raster)
dev.off()
It may help you.
Regards,
Pascal
On 6
Dear all,
My data is :
a <- c(0.9721, 0.9722, 0.9730, 0.9723, 0.0, 0.0, 0.0, 0.9706, 0.9698, 0.0, 0.9710, 0.9699)
I want to replace zeros with average of before and after values of them. But
sometimes there is one zero sometimes more than one. What is the most elegant
way to do this ?
Thanks a lot
Hello,
If you mean replacing 0 by the average of non-zero values, I guess one way is:
a[a == 0] <- mean(a[a != 0])
Maybe some senior user might correct it.
Regards,
Pascal
On 6 February 2014 12:05, ce zadi...@excite.com wrote:
Dear all,
My data is :
a <-
You seem to be treating zeroes as unknown values. Perhaps you should consider
setting them to NA and using the na.approx function from the zoo package.
---
Jeff NewmillerThe . . Go
In fact I don't want to replace with average of whole series, just before and
after values something like this :
for (i in seq(from = 2, to = length(a) - 1, by = 1))
{
if ( a[i] == 0 ) a[i] = ( a[i+1] + a[i-1] ) / 2
}
but I can't handle gracefully repeated zeros or first and last values.
Jeff
Yes , indeed this is what I am looking for :
a[ a == 0.0 ] = NA
na.approx(a,na.rm=FALSE)
Thanks a lot.
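For those without zoo installed, a base-R sketch equivalent to the na.approx() approach, using approx() for the linear interpolation:

```r
a <- c(0.9721, 0.9722, 0.9730, 0.9723, 0, 0, 0, 0.9706, 0.9698, 0, 0.9710, 0.9699)
a[a == 0] <- NA                 # treat zeros as missing
ok <- !is.na(a)
a[!ok] <- approx(which(ok), a[ok], xout = which(!ok))$y   # linear interpolation
round(a, 5)
```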
-Original Message-
From: Jeff Newmiller [jdnew...@dcn.davis.ca.us]
Date: 02/05/2014 10:35 PM
To: ce zadi...@excite.com, r-help@r-project.org
Subject: Re: [R] replacing zeros with
HI,
May be this helps:
a1 <- a
indx <- which(c(0, diff(a != 0)) < 0)
indx1 <- which(c(0, diff(a != 0)) > 0)
a[a == 0] <- rep(rowMeans(cbind(a[indx - 1], a[indx1])), indx1 - indx)
a
# [1] 0.97210 0.97220 0.97300 0.97230 0.97145 0.97145 0.97145 0.97060 0.96980
#[10] 0.97040 0.97100 0.96990
#Another option is
Dear Prof Therneau,
I have explored a bit more on this issue - I found that for this specific
problem,
one can get the older convergent behavior by just setting toler.chol to
4e-12. This is only a little larger than 2x the current default,
.Machine$double.eps^0.75 = 1.8e-12.
Since you
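A sketch of the setting being discussed: passing a larger Cholesky tolerance to coxph() via coxph.control() (4e-12 as in the message; the data here are survival's built-in lung set, not the original problem):

```r
library(survival)
fit <- coxph(Surv(time, status) ~ age, data = lung,
             control = coxph.control(toler.chol = 4e-12))
coef(fit)
```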
Hi,
Emails
mal...@gmail.com
mah...@gmail.com
ravi_...@yahoo.com
lavk@rediff.com
I need to split first name, last name, and domain (only gmail, not
gmail.com), and also the 123 in the last name, so please help.
The output should be:
Emails  f.name  l.name  domain
Hello all, can help clarify something?
According to R's lm() doc:
Non-NULL weights can be used to indicate that different observations
have different variances (with the values in weights being inversely
*proportional* to the variances); or equivalently, when the elements
of weights are
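The quoted doc text can be checked with a small simulation (made-up data): generating errors whose variance is inversely proportional to the weights, then fitting with those weights, recovers the true coefficients:

```r
set.seed(1)
x <- 1:100
w <- runif(100, 0.5, 2)
y <- 2 + 3 * x + rnorm(100, sd = 1 / sqrt(w))   # Var(e_i) proportional to 1/w_i
fit <- lm(y ~ x, weights = w)
coef(fit)                                       # close to c(2, 3)
```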
Hi,
May be this helps:
dat <- read.table(text = "Emails
mal...@gmail.com
mah...@gmail.com
ravi_...@yahoo.com
lavk@rediff.com", sep = ",", header = TRUE, stringsAsFactors = FALSE)
Hi,
If you have .edu, .gov etc.
dat <- structure(list(Emails = c("mal...@gmail.com", "mah...@gmail.com",
"ravi_...@yahoo.com", "lavk@ufl.edu")), .Names = "Emails", class =
"data.frame", row.names = c(NA,
-4L))
res <-