[R] Weighted Average application on Summary Dataset

2010-06-12 Thread RaoulD

Hi,

I have 2 huge datasets - May and Jun - a minuscule sample of one is given
below. I am trying to do 2 things with these datasets. I need to verify whether
the weighted average of variable A for a Reason in Jun is the same as or
different from that for May. To do this I am first computing the weighted
average for each SubReason using a function I wrote.

Where I need help is in applying the function to both datasets to arrive at
weighted averages for each SubReason. Then I would like to know the best way
to compare the weighted average for a SubReason across the 2 datasets, so that
I can state whether there is a difference - a t-test, ANOVA? I would greatly
appreciate any help! The function I wrote for the weighted average computation
is given below the dataset.

One of the datasets:

Reason  SubReason  A     N
A       SR1        1115  29
B       SR2        734   24
B       SR2        1054  31
A       SR1        600   43
A       SR3        1033  60
A       SR1        1163  30
B       SR4        732   43
B       SR4        988   70
A       SR3        569   25
B       SR4        1073  65

Output I require (WA_A is the weighted average of A, using N as the weights):

Reason  SubReason  WA_A         N (sum of N)
A       SR1        912.0098     102
A       SR3        896.5294118   85
B       SR2        914.3636364   55
B       SR4        957.1966292  178

# Function to calculate the weighted average of A, weighted by N
WA <- function(A, N) {
  sp_A  <- c(A %*% N)    # weighted sum (dot product of values and weights)
  sum_N <- sum(N)        # total weight
  WA    <- sp_A / sum_N  # weighted average
  return(WA)
}
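
Applying it per SubReason is then a grouped summary. A minimal sketch, assuming
the two data frames are named may and jun and have the columns shown above
(base R's weighted.mean() gives the same result as WA()):

# weighted average of A by Reason/SubReason, plus total N, for one month's data
wa_by_sub <- function(d) {
  grp <- split(d, list(d$Reason, d$SubReason), drop = TRUE)
  do.call(rbind, lapply(grp, function(g)
    data.frame(Reason    = g$Reason[1],
               SubReason = g$SubReason[1],
               WA_A      = WA(g$A, g$N),   # same as weighted.mean(g$A, g$N)
               N         = sum(g$N))))
}

wa_may <- wa_by_sub(may)
wa_jun <- wa_by_sub(jun)
merge(wa_may, wa_jun, by = c("Reason", "SubReason"), suffixes = c(".may", ".jun"))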

Thanks in advance!
Raoul




-- 
View this message in context: 
http://r.789695.n4.nabble.com/Weighted-Average-application-on-Summary-Dataset-tp2253239p2253239.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] HOW to install RSQLite database

2010-06-12 Thread vijaysheegi

Yes, I am asking how to install the RSQLite package on Windows. Please help with this.
Regards
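
A minimal sketch of the usual route on Windows, assuming R can reach a CRAN
mirror (the instructions quoted below are for building from source on
Unix-like systems):

install.packages("RSQLite")   # fetch and install the binary package from CRAN
library(RSQLite)

# quick test: create a local SQLite database file and query it
con <- dbConnect(SQLite(), dbname = "test.db")
dbWriteTable(con, "mtcars", mtcars)
dbGetQuery(con, "SELECT COUNT(*) FROM mtcars")
dbDisconnect(con)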

On 6/11/10, david.jessop [via R] <
ml-node+2251498-1601505055-288...@n4.nabble.com>
wrote:
>
> Are you asking how to install the RSQLite package or how to create a
> SQLite database?  The two are somewhat distinct questions. RSQLite is
> just a package of functions for R to be able to access data in an SQLite
> database. There isn't a separate SQLite program - just a library that is
> compiled into RSQLite.
>
> Regards
>
> David
>
>
> -Original Message-
> From: [hidden 
> email][mailto:[hidden
> email] ]
> On Behalf Of vijaysheegi
> Sent: 10 June 2010 16:22
> To: [hidden email] 
> Subject: [R] HOW to install RSQLite database
>
>
> Please let me know where I have to type the commands below so that the RSQLite
> package gets installed. Please let me know the solution. Thanks in advance
>
>
>
>
>
> RSQLite -- Embedding the SQLite engine in R
>
> (The RSQLite package includes a recent copy of the SQLite distribution
> from http://www.sqlite.org .)
>
> Installation
> 
>
> There are 3 alternatives for installation:
>
> 1. Simple installation:
>
>   R CMD INSTALL RSQLite-.tar.gz
>
>the installation automatically detects whether SQLite is
>available in any of your system directories;  if it's not
>available, it installs the SQLite engine and the R-SQLite
>interface under the package directory $R_PACKAGE_DIR/sqlite.
>
> 2. If you have SQLite installed in a non-system directory (e.g,
>in $HOME/sqlite),
>
>a) You can use
>
>   export PKG_LIBS="-L$HOME/sqlite/lib -lsqlite"
>   export PKG_CPPFLAGS="-I$HOME/sqlite/include"
>
>   R CMD INSTALL RSQLite-.tar.gz
>
>b) or you can use the --with-sqlite-dir configuration argument
>
>   R CMD INSTALL --configure-args=--with-sqlite-dir=$HOME/sqlite \
> RSQLite-.tar.gz
>
> 3. If you don't have SQLite but would rather install the version we
> provide
>into a directory different than the RSQLite package, for instance,
>$HOME/app/sqlite, use
>
>   R CMD INSTALL --configure-args=--enable-sqlite=$HOME/app/sqlite \
> RSQLite-.tar.gz
>
> Usage
> -
>
> Note that if you use an *existing* SQLite library that resides in a
> non-system directory (e.g., other than /lib, /usr/lib, /usr/local/lib)
> you may need to include it in your LD_LIBRARY_PATH prior to invoking R.
>
> For instance
>
> export LD_LIBRARY_PATH=$HOME/sqlite/lib:$LD_LIBRARY_PATH
> R
> > library(help=RSQLite)
> > library(RSQLite)
>
> (if you use the --enable-sqlite=DIR configuration argument, the SQLite
> library is statically linked to the RSQLite R package, and you need not
> worry about setting LD_LIBRARY_PATH.)
>
>
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/HOW-to-install-RSQLite-database-tp2250604p
> 2250604.html
> Sent from the R help mailing list archive at Nabble.com.
>
> [[alternative HTML version deleted]]
>
> __
> [hidden email] mailing 
> list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
> Issued by UBS AG or affiliates to professional investors...{{dropped:30}}
>
> __
> [hidden email] mailing 
> list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>
> --
> View message @
> http://r.789695.n4.nabble.com/HOW-to-install-RSQLite-database-tp2250604p2251498.html
>
>
>


-- 
vijayamahantesh

-- 
View this message in context: 
http://r.789695.n4.nabble.com/HOW-to-install-RSQLite-database-tp2250604p2253249.html
Sent from the R help mailing list archive at Nabble.com.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to output text to sink from data frame line by line without column names

2010-06-12 Thread Nevil Amos
OK, I see how to remove the line numbers ([1] etc.) by using cat instead of
print, but I cannot work out how to remove the column names from the data
frame output.
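
A minimal sketch of the kind of thing that may work here, writing the comment
line, the selected column names, and then the data rows straight to the file
instead of going through sink()/print():

cat("I will be including text of various sorts in this file\n", file = "test.txt")
cat(colnames(mydf), sep = "\n", file = "test.txt", append = TRUE)
# data rows only: no quotes, no row names, no column headers
write.table(mydf[, 2:ncol(mydf)], file = "test.txt", append = TRUE,
            quote = FALSE, row.names = FALSE, col.names = FALSE)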


On 13/06/2010 4:21 PM, Nevil Amos wrote:
I want to output a text file assembled from various sources within R
(actually as a genepop file).

The output should be formatted as:
   line 1        "text comment"
   lines 2:n     selected column names from the data frame
   line n+1 on   lines of selected columns from the data frame, one row at a time

I have the following code, but cannot see how to remove the line
numbers and omit column names from the line-by-line data frame output


col1<-c(2,45,67)
col2<-c("a","B","C")
col3<-c(234,44,566)
mydf<-as.data.frame(cbind(col1,col2,col3))
n<-ncol(mydf)
nr<-nrow(mydf)
sink("test.txt")

print("I will be including text of various sorts in this file so 
cannot use print table or similar command")

for (i in 1:n){
print(colnames(mydf[i]),quote=F) }
for (j in 1:nr){
print(mydf[j,c(2:n)],quote=F,row.names=F)}
sink()

The test.txt contains:

[1] "I will be including text of various sorts in this file so cannot 
use print table or similar command"

[1] col1
[1] col2
[1] col3
 col2 col3
a  234
 col2 col3
B   44
 col2 col3
C  566

what I would like in test.txt  is:

"I will be including text of various sorts in this file so cannot use 
print table or similar command"

col1
col2
col3
 a  234
 B   44
 C  566

Many thanks

Nevil Amos


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to output text to sink without initial line number [1], and data frame line by line without column names

2010-06-12 Thread Nevil Amos
I want to output a text file assembled from various sources within R
(actually as a genepop file).

The output should be formatted as:
   line 1        "text comment"
   lines 2:n     selected column names from the data frame
   line n+1 on   lines of selected columns from the data frame, one row at a time

I have the following code, but cannot see how to remove the line
numbers and omit column names from the line-by-line data frame output


col1<-c(2,45,67)
col2<-c("a","B","C")
col3<-c(234,44,566)
mydf<-as.data.frame(cbind(col1,col2,col3))
n<-ncol(mydf)
nr<-nrow(mydf)
sink("test.txt")

print("I will be including text of various sorts in this file so cannot 
use print table or similar command")

for (i in 1:n){
print(colnames(mydf[i]),quote=F) }
for (j in 1:nr){
print(mydf[j,c(2:n)],quote=F,row.names=F)}
sink()

The test.txt contains:

[1] "I will be including text of various sorts in this file so cannot 
use print table or similar command"

[1] col1
[1] col2
[1] col3
 col2 col3
a  234
 col2 col3
B   44
 col2 col3
C  566

what I would like in test.txt  is:

"I will be including text of various sorts in this file so cannot use 
print table or similar command"

col1
col2
col3
 a  234
 B   44
 C  566

Many thanks

Nevil Amos

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help with R

2010-06-12 Thread Berend Hasselman


li li-13 wrote:
> 
> Hi all,
>I want to solve the following equation for x with rho <- 0.5
> 
>pnorm(-x)*pnorm((rho*dnorm(x)/pnorm(x)-x)/sqrt(1-rho^2))==0.05
> 
>Is there a function in R to do this?
> 

Or if you wish to try different values of rho

f <- function(x, rho) {
 pnorm(-x)*pnorm((rho*dnorm(x)/pnorm(x)-x)/sqrt(1-rho^2))-0.05 
}

uniroot(f,c(-3,3),rho=.5)
uniroot(f,c(-3,3),rho=.3)

/Berend

-- 
View this message in context: 
http://r.789695.n4.nabble.com/help-with-R-tp2253133p2253243.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Scope and sapply

2010-06-12 Thread Worik R
I was careless.

Here is a better example of what I am trying to do, using the '<<-' you
offered.

?"<<-"

That was exactly what I needed, thank you.

cheers
Worik



N <- 10
## x simulate a return series
x <- runif(N)-.5

## Build an array of cumulative returns of a portfolio starting with $1 as
it changes over time
y <- rep(0, length(x))
y[1] <- 1+1*x[1]
for(i in 2:N){
  y[i] <- y[i-1]+y[i-1]*x[i]
}

## y is that return series.  Use
test.1 <- function(r.in){
  v <- rep(0, length(r.in))
  foo <- function(i, r){
if(i == 1){
  s <- 1
}else{
  s <<- v[i-1]
}
v[i] <<- s + s*r[i]
return(v[i])
  }
  return(sapply(1:length(r.in), foo, r.in))
}

z <- test.1(x)
y
z

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] using latticeExtra plotting confidence intervals

2010-06-12 Thread Joe P King
I want to plot 95% confidence bands using segplot, but with groups. For
example, if I have males and females within different racial groups, I want
each racial group in a different panel. The minor code below is completely
made up but gets at what I want: four random samples and four confidence
intervals. I know how to get A & B into one panel and C & D into another, but
how do I get the x-axis to label them properly and treat them as two
categories? I am not sure what to put on the left side of the formula. This is
the example code:

 

library(lattice)
library(latticeExtra)

sample1 <- rnorm(100, 10, 2)
sample2 <- rnorm(100, 50, 3)
sample3 <- rnorm(100, 20, 2)
sample4 <- rnorm(100, 40, 1)

mu1 <- mean(sample1)
ci.upper1 <- mu1 + 2*2
ci.lower1 <- mu1 - 2*2
mu2 <- mean(sample2)
ci.upper2 <- mu2 + 2*3
ci.lower2 <- mu2 - 2*3
mu3 <- mean(sample3)
ci.upper3 <- mu3 + 2*2
ci.lower3 <- mu3 - 2*2
mu4 <- mean(sample4)
ci.upper4 <- mu4 + 2*1
ci.lower4 <- mu4 - 2*1

categories <- c("A", "B")

mu <- cbind(mu1, mu2, mu3, mu4)
ci.upper <- cbind(ci.upper1, ci.upper2, ci.upper3, ci.upper4)
ci.lower <- cbind(ci.lower1, ci.lower2, ci.lower3, ci.lower4)

segplot(mu ~ ci.upper + ci.lower | categories, centers = mu, horizontal = FALSE)

 

I also tried this:

seq1 <- seq(1, 4, 1)
segplot(seq1 ~ ci.upper + ci.lower | categories, centers = mu, horizontal = FALSE)

but it also gives a poor x-axis. I know this is probably an elementary
problem, but any help would be greatly appreciated.

 

Here's my data structure; sorry for bombarding you with all the code.

structure(c(9.85647167881417, 50.1856561919426, 19.8477661576365,
39.8575819498655, 13.8564716788142, 56.1856561919426, 23.8477661576365,
41.8575819498655, 5.85647167881417, 44.1856561919426, 15.8477661576365,
37.8575819498655), .Dim = c(1L, 12L), .Dimnames = list(NULL,
c("mu1", "mu2", "mu3", "mu4", "ci.upper1", "ci.upper2", "ci.upper3",
"ci.upper4", "ci.lower1", "ci.lower2", "ci.lower3", "ci.lower4")))
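
One possible direction, sketched under the assumption that segplot() takes its
axis labels from the factor levels when the left-hand side of the formula is a
factor; the data frame and the names group and panel below are made up for
illustration:

ci <- data.frame(
  group = factor(c("A", "B", "A", "B")),              # within-panel category
  panel = factor(c("Cat1", "Cat1", "Cat2", "Cat2")),  # conditioning variable
  mu    = c(mu1, mu2, mu3, mu4),
  lower = c(ci.lower1, ci.lower2, ci.lower3, ci.lower4),
  upper = c(ci.upper1, ci.upper2, ci.upper3, ci.upper4))

segplot(group ~ lower + upper | panel, data = ci, centers = mu,
        horizontal = FALSE, draw.bands = FALSE)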

---

Joe King, M.A.

Ph.D. Student 

University of Washington - Seattle

Office: 404A

Miller Hall

206-913-2912  

  j...@joepking.com

---

"Never throughout history has a man who lived a life of ease left a name
worth remembering." --Theodore Roosevelt

 


[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Sim function

2010-06-12 Thread Dennis Murphy
Hi:

The book has an accompanying R package called arm. Within that package is a
function
named sim, which could be what you're looking for...

HTH,
Dennis
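
A minimal sketch, assuming the arm package is installed from CRAN and that
sim() is applied to a fitted model object as in the book:

# install.packages("arm")   # once, from CRAN
library(arm)

fit  <- lm(mpg ~ wt + hp, data = mtcars)   # any fitted lm/glm will do
sims <- sim(fit, n.sims = 1000)            # simulated draws of the coefficients and sigma
str(sims)                                  # inspect the simulations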

On Sat, Jun 12, 2010 at 6:02 PM, Noah Silverman wrote:

> I'm reading Gelman's book "Data Analysis Using Regression and
> Multilevel/Hierarchical Models"
>
> In Chapter 7 (and later), he makes frequent reference to a function named
> "sim".
>
> I can't find the function anywhere, not in my standard R install, or in any
> of the packages.
>
> Does anyone have a reference for this?
>
> Thanks!
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Break in the y-axis

2010-06-12 Thread beloitstudent

Hello all, 

I have been having trouble getting a break in my y-axis. All of my data
points are up around 100-200, but the graph has to start at zero, so I would
like to remove all the white space using a break symbol. I have been able to
get the break and labels to be correct; however, I can't seem to get the data
to match the axis anymore. I must be using axis.break() in plotrix
incorrectly, but I cannot see where my issue is. This is what I have so far.

##
library(plotrix)

par(mar=c(6,8,4,4))
###Data
Saline <- structure(list(Time = c(-20L, 0, 30L, 45L, 60L, 80L,
110L,140L,200L, 260L, 320L), Average = 
c(119.250,118.750,117.500,132.75,151.875,159.75,142.75,160,168,167.125,143),SEM=c(2.211,2.569,2.665,5.435146394,6.208741369,8.363550657,8.51349469,14.30284687,15.93865792,16.76541326,13.796)),
.Names = c("Time (min)", "Arterial Plasma Glucose (µg/mL)", "SEM"), class =
"data.frame", row.names = c("1", "2","3", "4", "5", "6", "7", "8", "9",
"10", "11"))

Ex <- structure(list(Time = c(-20L, 0, 30L, 45L, 60L, 80L, 110L,140L,200L,
260L, 320L), Average =
c(117.500,117.625,117.375,134.5,166.25,173.5,164.25,162.5,160.375,150.25,139.875),SEM
=
c(1.484614978,1.748906364,1.761,5.613395058,9.642063459,9.493284415,8.220804866,8.967059901,11.91626825,11.27169111,10.92915498)),
.Names = c("Time (min)", "Arterial Plasma Glucose (µg/mL)", "SEM"), class =
"data.frame", row.names = c("1", "2", "3", "4", "5", "6", "7", "8", "9",
"10", "11"))

# plotted data with error bars
plotCI(x=Saline [,1],y=Saline [,2], uiw=Saline [,3], err="y",
pt.bg=par("bg"),pch=19, cex=2.5 ,gap=0, sfrac=0.005,
xlim=c(-20,340),xaxp=c(-20,320,12), xlab="Time (min)",
ylim=c(0,200),yaxp=c(0,200,10), ylab="Arterial Plasma\nGlucose (µg/mL)",
las=1, axes=FALSE, font.lab=2.2,cex.lab=1.6)   

plotCI(x=Ex [,1],y=Ex [,2], uiw=Ex [,3], err="y",pt.bg="white",pch=21,
col="black",cex=2.5 ,gap=0, sfrac=0.005, xlim=c(-20,340),xaxp=c(-20,320,12),
xlab="Time (min)", ylim=c(0,200),  yaxp=c(0,200,10), ylab="Arterial
Plasma\nGlucose (µg/mL)", las=1, font.lab=2.2, axes=FALSE, add=TRUE,
cex.lab=1.9)   

##x-axis
axis(1, at=c("-20", "0", "30", "45", "60", "80", "110", "140", "200", "260",
"320"), lwd=2, font=2, pos=0,cex.axis=.9)

# y-axis
axis(2, las=1, at=c(0,40,60,80,100,120, 140), labels=c("0", "100", "120",
"140", "160", "180", "200"), lwd=2, font=2, pos="-20", cex.axis=1.7)

#axis break
axis.break(2, 20, style="slash")


As you can see, my data does not fit my axis anymore.  Any help with this
problem would be fantastic.  Thanks!

beloitstudent
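
One possible direction, a sketch only, assuming plotrix's gap.plot() is
acceptable instead of hand-built axes: gap.plot() shifts the plotted values
and draws the break symbol itself, so the points and the axis labels stay in
agreement (the values below are made up):

library(plotrix)

x <- c(-20, 0, 30, 45, 60, 80, 110, 140, 200, 260, 320)
y <- c(119, 119, 118, 133, 152, 160, 143, 160, 168, 167, 143)

# break the y-axis between 10 and 95 so the 100-200 range fills the panel
gap.plot(x, y, gap = c(10, 95), gap.axis = "y",
         xlab = "Time (min)", ylab = "Arterial Plasma Glucose (µg/mL)")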





-- 
View this message in context: 
http://r.789695.n4.nabble.com/Break-in-the-y-axis-tp2253205p2253205.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] mob (party package) question

2010-06-12 Thread tudor

Achim - Thank you so much for your suggestions. 

Tudor
-- 
View this message in context: 
http://r.789695.n4.nabble.com/mob-party-package-question-tp2252500p2253141.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Rgui crashed on Windows XP Home

2010-06-12 Thread Jinsong Zhao

On 2010-6-12 5:58, Duncan Murdoch wrote:

Jinsong Zhao wrote:

Hi there,

I just installed R 2.11.1 on my PC, which runs a Windows XP Home.

The installation is successful, however, when I double click on the
R icon, I get the following error message:

R for Windows GUI front-end has encountered a problem and needs to
close. We are sorry for the inconvenience.


The error occurs in msvcrt.dll, a Microsoft dll. It happened after a
call from one of the R dlls, setting up the GUI.

I don't really know what to suggest to fix this, other than the usual
things: try running R with the --vanilla command line argument, try
shutting down everything else on your system, etc.

Duncan Murdoch


Thank you very much for your reply.

It seems to have been a problem with the Regional and Language settings in my
XP Home system. The problem is now solved.


Regards,
Jinsong

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Scope and sapply

2010-06-12 Thread Gabor Grothendieck
On Sat, Jun 12, 2010 at 9:22 PM, Worik R  wrote:
> I am puzzled by the scope rules that apply with sapply.
>
> If I want to modify a vector with sapply I tried...
>
> N <- 10
> vec <- vector(mode="numeric", length=N)
> test <- function(i){
>  vec[i] <- i
> }
> sapply(1:N, test)
> vec
>
> but it not work.
>

If we modify a variable inside a function, R makes a local copy of that
variable rather than modifying the original one. Try one of these instead:
the first uses <<- to tell R to search the parent environments for vec, and
the second tells it explicitly which environment to use.

vec <- rep(0, 10)
test2 <- function(i) vec[i] <<- i
junk <- sapply(1:N, test2)
vec

vec <- rep(0, 10)
test3 <- function(i, env) env$vec[i] <- i
junk <- sapply(1:N, test3, env = environment())
vec

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] mob (party package) question

2010-06-12 Thread tudor bodea

Achim:

 

Thanks a lot for your suggestions.

 

Tudor


 
> Date: Sun, 13 Jun 2010 00:26:08 +0200
> From: achim.zeil...@uibk.ac.at
> To: tudor_bo...@hotmail.com
> CC: r-help@r-project.org
> Subject: Re: [R] mob (party package) question
> 
> On Fri, 11 Jun 2010, tudor wrote:
> 
> >
> > Dear useRs:
> >
> > I try to use mob from the party package (thanks Achim and Co.!) to model
> > based recursive partition a data set. The model is a logistic regression
> > specified with model=glinearModel and family=binomial(). Running mob
> > results in a few warnings of the type: In glm.fit ... algorithm did not
> > converge. As I speculate that this may be due to an insufficient number of
> > iterations I am wondering if any of you knows how to pass arguments to
> > glm.fit from within mob (e.g., epsilon and maxit). All my attempts to do it
> > by myself failed. All suggestions are welcome.
> 
> Hmm, good point: currently the "control" argument to glm.fit() cannot be 
> passed through mob() because mob() has an argument of the same name. I'll 
> add this to our list of improvements that need to be done.
> 
> You can try to work around this by writing your own StatModel driver, 
> e.g., glinearModel2. However, before doing that, I would try to see 
> whether this is really the problem. I guess it's more likely that there 
> are other problems with that particular subset, e.g., (quasi-)complete 
> separation or no variation in one of the variables. Simply set "verbose = 
> TRUE" when calling mob() and track which subset causes the error. Then you 
> can recreate that subset by simply calling subset() and checking whether 
> glm() or glm.fit() work appropriately on that sub-sample.
> 
> hth,
> Z
> 
> > My system: Windows XP, R2.10.1.
> >
> > Thank you.
> >
> > Tudor
> > -- 
> > View this message in context: 
> > http://r.789695.n4.nabble.com/mob-party-package-question-tp2252500p2252500.html
> > Sent from the R help mailing list archive at Nabble.com.
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
  

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Scope and sapply

2010-06-12 Thread Sarah Goslee
Hi,

On Sat, Jun 12, 2010 at 9:22 PM, Worik R  wrote:
> I am puzzled by the scope rules that apply with sapply.
>
> If I want to modify a vector with sapply I tried...
>
> N <- 10
> vec <- vector(mode="numeric", length=N)
> test <- function(i){
>  vec[i] <- i
> }
> sapply(1:N, test)
> vec

You didn't assign the results of sapply to anything, and you didn't
assign anything new to vec.

What did you expect to happen?

-- 
Sarah Goslee
http://www.functionaldiversity.org

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Scope and sapply

2010-06-12 Thread Worik R
I am puzzled by the scope rules that apply with sapply.

If I want to modify a vector with sapply I tried...

N <- 10
vec <- vector(mode="numeric", length=N)
test <- function(i){
  vec[i] <- i
}
sapply(1:N, test)
vec

but it does not work.

How can this be done?

Worik

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] help with R

2010-06-12 Thread John Fox
Dear Hannah,

If I understand you correctly, you want a solution when rho is 0.5; if so,

> f <- function(x){
+   pnorm(-x)*pnorm((0.5*dnorm(x)/pnorm(x)-x)/sqrt(1-0.5^2)) - 0.05
+ }

> uniroot(f, c(-3, 3))

$root
[1] 0.8031289

$f.root
[1] -1.565857e-06

$iter
[1] 11

$estim.prec
[1] 6.103516e-05

I hope this helps,
 John


John Fox
Senator William McMaster 
  Professor of Social Statistics
Department of Sociology
McMaster University
Hamilton, Ontario, Canada
web: socserv.mcmaster.ca/jfox


> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
On
> Behalf Of li li
> Sent: June-12-10 8:31 PM
> To: r-help
> Subject: [R] help with R
> 
> Hi all,
>I want to solve the following equation for x with rho <- 0.5
> 
>pnorm(-x)*pnorm((rho*dnorm(x)/pnorm(x)-x)/sqrt(1-rho^2))==0.05
> 
>Is there a function in R to do this?
> 
>   Thank you very much!
>  Hannah
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Sim function

2010-06-12 Thread Noah Silverman
I'm reading Gelman's book "Data Analysis Using Regression and 
Multilevel/Hierarchical Models"


In Chapter 7 (and later), he makes frequent reference to a function named 
"sim".


I can't find the function anywhere, not in my standard R install, or in 
any of the packages.


Does anyone have a reference for this?

Thanks!

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help with R

2010-06-12 Thread li li
Hi all,
   I want to solve the following equation for x with rho <- 0.5

   pnorm(-x)*pnorm((rho*dnorm(x)/pnorm(x)-x)/sqrt(1-rho^2))==0.05

   Is there a function in R to do this?

  Thank you very much!
 Hannah

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fast way to compute largest eigenvector

2010-06-12 Thread Ravi Varadhan
You can use the power method for computing the dominant eigenvector.  A more 
sophisticated approach (for large matrices) is the Lanczos algorithm for 
Hermitian matrices, which is based on the power method.  The `arpack' function 
in the "igraph" package uses the more general Arnoldi iteration, which is the 
generalization of the Lanczos algorithm to non-Hermitian matrices.
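
A minimal sketch of the power method in base R (an illustration only; it
assumes the dominant eigenvalue is unique in absolute value so the iteration
converges):

# power iteration for the dominant eigenvector of a square matrix A
power_method <- function(A, tol = 1e-9, max_iter = 1000) {
  v <- rep(1, ncol(A))
  for (i in seq_len(max_iter)) {
    w <- as.vector(A %*% v)
    w <- w / sqrt(sum(w^2))        # renormalise at every step
    if (max(abs(w - v)) < tol) break
    v <- w
  }
  w
}

A <- crossprod(matrix(rnorm(25), 5))   # symmetric test matrix
# the two columns should agree up to a possible sign flip
cbind(power = power_method(A), eigen = eigen(A)$vectors[, 1])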

Ravi.



Ravi Varadhan, Ph.D.
Assistant Professor,
Division of Geriatric Medicine and Gerontology
School of Medicine
Johns Hopkins University

Ph. (410) 502-2619
email: rvarad...@jhmi.edu


- Original Message -
From: MInh Tang 
Date: Saturday, June 12, 2010 12:37 pm
Subject: [R] Fast way to compute largest eigenvector
To: r-h...@stat.math.ethz.ch


> Hello all,
>  
>  I was wondering if there is a function in R that only computes the 
> eigenvector 
>  corresponding to the largest/smallest eigenvalue of an arbitrary real 
> matrix. 
>  
>  Thanks
>  Minh
>  
>  -- 
>  Living on Earth may be expensive, but it includes an annual free trip
>  around the Sun.
>  
>  __
>  R-help@r-project.org mailing list
>  
>  PLEASE do read the posting guide 
>  and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Can one get a list of recommended packages?

2010-06-12 Thread Dr. David Kirkby

On 06/12/10 05:27 PM, Douglas Bates wrote:

On Sat, Jun 12, 2010 at 8:37 AM, Dr. David Kirkby
  wrote:

R 2.10.1 is used in the Sage maths project. Several recommended packages
(Matrix, class, mgcv, nnet, rpart, spatial, and survival) are failing to
build on Solaris 10 (SPARC).


Have you checked the dependencies for those packages?  Some require GNU make.


We used GNU make.


We would like to be able to get a list of the recommended packages for R
2.10.1, but ideally via a call to R, so it is not necessary to update that
list every time a new version of R is released. We do not want to access the
Internet to get this information.



Is there a way in R to list the recommended packages?


I'm not sure I understand the logic of this.  If you are going to
build R then presumably you have the tar.gz file which contains the
sources for the recommended packages in the subdirectory
src/library/Recommended/. Why not get the list from there?


The reason is that when the version of R gets updated in Sage, someone will 
have to check that list again, and will more than likely fail to do so, with 
the result that tests will fail since packages do not exist, or worse still, 
we will be unaware that they have failed to build properly.


Therefore, being able to get them from a command would be useful, but can 
understand if that is not possible.
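
A minimal sketch of one way to get this from R itself, assuming the
recommended packages were installed into the default library (the expected
list is taken from the tarball listing shown just below):

# recommended packages known to this installation
rec <- rownames(installed.packages(priority = "recommended"))
rec

# flag any recommended packages that failed to install
expected <- c("boot", "class", "cluster", "codetools", "foreign", "KernSmooth",
              "lattice", "MASS", "Matrix", "mgcv", "nlme", "nnet", "rpart",
              "spatial", "survival")
setdiff(expected, rec)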



$ cd ~/src/R-devel/src/library/Recommended/
$ ls *.tgz
boot.tgz codetools.tgz   lattice.tgz  mgcv.tgz  rpart.tgz
class.tgzforeign.tgz MASS.tgz nlme.tgz  spatial.tgz
cluster.tgz  KernSmooth.tgz  Matrix.tgz   nnet.tgz  survival.tgz


OK, thank you for that list.


Better still, is there a way to list the recommended packages which have not
been installed, so getting a list of any failures?


Again, this seems to be a rather convoluted approach.  Why not check
why the packages don't install properly?


R had built, and the failure of the packages to build was not very obvious, 
since it did not cause make to exit with a non-zero exit code. Nobody had 
noticed until very recently that there was a problem.


Therefore I proposed to make a test of the packages that should have been 
installed, and ensure they actually all had.


You need to be aware that R is just one part of Sage. Building the whole of 
Sage takes a long time (>24 hours on some computers), so needless to say, 
people will not view every line of error messages. The fact that 'make' 
succeeded gave us a false sense of security; only later was it realised that 
there were problems when R ran its self-tests.


Dave

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] mob (party package) question

2010-06-12 Thread Achim Zeileis

On Fri, 11 Jun 2010, tudor wrote:



Dear useRs:

I try to use mob from the party package (thanks Achim and Co.!) to model
based recursive partition a data set.  The model is a logistic regression
specified with model=glinearModel and family=binomial().  Running mob
results in a few warnings of the type: In glm.fit ... algorithm did not
converge.  As I speculate that this may be due to an insufficient number of
iterations I am wondering if any of you knows how to pass arguments to
glm.fit from within mob (e.g., epsilon and maxit).  All my attempts to do it
by myself failed.  All suggestions are welcome.


Hmm, good point: currently the "control" argument to glm.fit() cannot be 
passed through mob() because mob() has an argument of the same name. I'll 
add this to our list of improvements that need to be done.


You can try to work around this by writing your own StatModel driver, 
e.g., glinearModel2. However, before doing that, I would try to see 
whether this is really the problem. I guess it's more likely that there 
are other problems with that particular subset, e.g., (quasi-)complete 
separation or no variation in one of the variables. Simply set "verbose = 
TRUE" when calling mob() and track which subset causes the error. Then you 
can recreate that subset by simply calling subset() and checking whether 
glm() or glm.fit() work appropriately on that sub-sample.


hth,
Z


My system: Windows XP, R2.10.1.

Thank you.

Tudor
--
View this message in context: 
http://r.789695.n4.nabble.com/mob-party-package-question-tp2252500p2252500.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Handling character string

2010-06-12 Thread jim holtman
"^[[:space:]]*" indicates that you want to match as many spaces as possible
(zero or more, which is what "*" asks for) at the beginning of the string; the
"^" anchors the match at the beginning.  "[[:space:]]*$" matches the same
thing at the end of the string; the "$" anchors the search at the end of
the string.
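
A small sketch pulling these pieces together with sub() and gsub():

x <- "   temp   "

sub("^[[:space:]]+", "", x)                  # strip leading whitespace only
sub("[[:space:]]+$", "", x)                  # strip trailing whitespace only
gsub("^[[:space:]]+|[[:space:]]+$", "", x)   # strip both ends

# so the two strings from the original question compare equal after trimming:
gsub("^[[:space:]]+|[[:space:]]+$", "", " temp") == "temp"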

On Sat, Jun 12, 2010 at 2:03 PM, Megh Dal  wrote:

>   Thanks Jim for this reply. This is the way what I was looking for.
> However would you please explain me the meaning of ^[[:space:]]*"
> or '[[:space:]]+$'? When should I use "^" or "*" or "+$"?
>
> Thanks for your time.
>
> --- On *Sat, 6/12/10, jim holtman * wrote:
>
>
> From: jim holtman 
>
> Subject: Re: [R] Handling character string
> To: "Megh Dal" 
> Cc: "Erik Iverson" , r-h...@stat.math.ethz.ch
> Date: Saturday, June 12, 2010, 10:18 PM
>
>
> This is probably what you want:
>
> > sub("^[[:space:]]*", "",  "   Now is the time")
> [1] "Now is the time"
> >
>
> You need to anchor it at the beginning with '^'
>
> On Sat, Jun 12, 2010 at 10:29 AM, Megh Dal 
> http://mc/compose?to=megh700...@yahoo.com>>
> wrote:
> > Thanks Erik for you reply. You have pointed correctly I want to remove
> the "space" at the 1st place (if any). In the mean time I have looked into
> the function sub() and there seems to be one example that mimics my problem
> :
> >> str <- '   Now is the time  '> sub('[[:space:]]+$', '', str)[1] "
> Now is the time"
> >
> > However it removes the space if it is at the last position. I have tried
> with different combinations like "sub('[[:space:]]-$', '', str)",
> "sub('$+[[:space:]]+$', '', str)" etc, none is working if space is at the
> 1st position.
> > What would be the correct approach?
> > Thanks,
> > --- On Sat, 6/12/10, Erik Iverson 
> > http://mc/compose?to=er...@ccbr.umn.edu>>
> wrote:
> >
> > From: Erik Iverson 
> > http://mc/compose?to=er...@ccbr.umn.edu>
> >
> > Subject: Re: [R] Handling character string
> > To: "Megh Dal" 
> > http://mc/compose?to=megh700...@yahoo.com>
> >
> > Cc: r-h...@stat.math.ethz.ch
> > Date: Saturday, June 12, 2010, 2:36 AM
> >
> >
> >
> > Megh Dal wrote:
> >> Dear all, Is there any R function to say these 2 character strings
> >> "temp"  and " temp" are actually same? If I type following code R
> >> says there are indeed different :
> >>> "temp"  == " temp"[1] FALSE
> >
> > You don't say how you're defining "same", but it definitely requires more
> explanation, since they are not the same.  Why should those two strings be
> the same in your mind?  Do you want to remove leading white space, all white
> space, just one space, etc?
> >
> > You might find the examples in ?sub useful.
> >
> >
> >
> >
> >
> >
> >[[alternative HTML version deleted]]
> >
> >
> > __
> > R-help@r-project.org  mailing
> list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> >
>
>
>
> --
> Jim Holtman
> Cincinnati, OH
> +1 513 646 9390
>
> What is the problem that you are trying to solve?
>
>
>


-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Configuration of host address for database access

2010-06-12 Thread julia . jacobson
Hello everybody,
Whatever I use as the host address of my PostgreSQL database server, I always 
get a connection if I'm accessing the database from R on the same machine, and 
never if I'm trying to do so from a remote client.
PostgreSQL is running on a Windows XP machine as the RDBMS; JDBC, R, and the 
DBI, RODBC, rJava and RpgSQL packages are installed both locally and remotely. 
Accessing the database from the remote machine with the PostgreSQL command 
line client rather than R works fine.
The following commands are used on the R prompt:
> library(RpgSQL)
> con <- dbConnect(pgSQL(), host="server.domain.org", port=5432, user="me", 
> dbname="my_db", passowrd="secret")
Selecting data is performed as a test for successful connection:
> dbGetQuery(con, "SELECT * FROM my_table")
For me it seems like whatever value "host" has, it is simply ignored. Has 
anyone of you got an idea what might be the reason for that?
Thanks in advance,
Julia


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Clustering algorithms don't find obvious clusters

2010-06-12 Thread Dave Roberts

Henrik,

Given your initial matrix, that should tell you which authors are 
similar/dissimilar to which other authors in terms of which authors they 
cite.  In this case authors 1 and 3 are most similar because they both cite 
authors 2 and 4.  Authors 2 and 3 are most different because each makes 6 
citations but to none of the same authors (sqrt(6^2+5^2+1^2) = 7.87).  Authors 
1 and 2 are next most different because 1 makes only 5 citations and shares 
none with 2 (sqrt(6^2+4^2+1^2) = 7.28), etc.


If you want to know which authors are similar in terms of who has 
cited them, simply transpose the matrix:


daisy(t(M))
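
A small self-contained sketch of the distances being described, using the 4x4
citation matrix from the original post:

library(cluster)

M <- matrix(c(0, 4, 0, 1,
              6, 0, 0, 0,
              0, 1, 0, 5,
              0, 0, 4, 0),
            nrow = 4, byrow = TRUE,
            dimnames = list(c("A", "B", "C", "D"), c("A", "B", "C", "D")))

daisy(M)      # rows: who each author cites (B-C is sqrt(62) = 7.87, A-B is sqrt(53) = 7.28)
daisy(t(M))   # columns: who cites each author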

I'm guessing none of this is actually what you are looking for 
however, and Etienne's graph theoretic approach may be more what you 
have in mind.


Dave

David W. Roberts office 406-994-4548
Department of Ecology email drobe...@montana.edu
Montana State University
Bozeman, MT 59717-3460


Henrik Aldberg wrote:

Dave,

I used daisy with the default settings (daisy(M) where M is the matrix).


Henrik

On 11 June 2010 21:57, Dave Roberts > wrote:


Henrik,

   The clustering algorithms you refer to (and almost all others)
expect the matrix to be symmetric.  They do not seek a
graph-theoretic solution, but rather proximity in geometric or
topological space.

   How did you convert your matrix to a dissimilarity?

Dave Roberts

Henrik Aldberg wrote:

I have a directed graph which is represented as a matrix on the form


0 4 0 1

6 0 0 0

0 1 0 5

0 0 4 0


Each row correspond to an author (A, B, C, D) and the values
says how many
times this author have cited the other authors. Hence the first
row says
that author A have cited author B four times and author D one
time. Thus the
matrix represents two groups of authors: (A,B) and (C,D) who
cites each
other. But there is also a weak link between the groups. In
reality this
matrix is much bigger and very sparce but it still consists of
distinct
groups of authors.


My problem is that when I cluster the matrix using pam, clara or
agnes the
algorithms does not find the obvious clusters. I have tried to
turn it into
a dissimilarity matrix before clustering but that did not help
either.


The layout of the clustering is not that important to me, my primary
interest is the to get the right nodes into the right clusters.



Sincerely


Henrik

   [[alternative HTML version deleted]]

__
R-help@r-project.org  mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide
http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


-




__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] calling a function with new inputs every 1 minute

2010-06-12 Thread KstuS

I have inputs to a function which are changing all the time - I pull these
values from the internet.  I then apply a function to the values.  What I'd
like to do is automate the process so it runs every one minute and adds the
output of the function as a new element of a vector.  Pseudo code:

at Start time:
input1_t0, input2_t0, input3_t0
function(input1_t0, input2_t0, input3_t0)
function_result_0

at Time T1 (say one minute later)
input1_t1, input2_t1, input3_t1
function(input1_t1, input2_t1, input3_t1)
function_result_1
...

end_result <- c(function_result_0, function_result_1, function_result_2,
..., n)

Ideally I'd want to do this every 1 minute for the next 500 minutes.
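
A minimal sketch of one way to do this in base R, assuming the
fetch-and-compute step is wrapped in a function (get_result() below is a
made-up name) and that pausing with Sys.sleep() between pulls is acceptable:

get_result <- function() {
  # placeholder: pull the current inputs from the internet and apply the function
  input1 <- runif(1); input2 <- runif(1); input3 <- runif(1)
  input1 + input2 + input3
}

end_result <- numeric(0)
for (i in seq_len(500)) {
  end_result <- c(end_result, get_result())
  Sys.sleep(60)   # wait one minute before the next pull
}

Scheduling the script externally (cron or the Windows Task Scheduler) is
another route if R should not stay running the whole time.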

   
-- 
View this message in context: 
http://r.789695.n4.nabble.com/calling-a-function-with-new-inputs-every-1-minute-tp2252954p2252954.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Clustering algorithms don't find obvious clusters

2010-06-12 Thread Henrik Aldberg
Dave,

I used daisy with the default settings (daisy(M) where M is the matrix).


Henrik

On 11 June 2010 21:57, Dave Roberts  wrote:

> Henrik,
>
>The clustering algorithms you refer to (and almost all others) expect
> the matrix to be symmetric.  They do not seek a graph-theoretic solution,
> but rather proximity in geometric or topological space.
>
>How did you convert y9oru matrix to a dissimilarity?
>
> Dave Roberts
>
> Henrik Aldberg wrote:
>
>> I have a directed graph which is represented as a matrix on the form
>>
>>
>> 0 4 0 1
>>
>> 6 0 0 0
>>
>> 0 1 0 5
>>
>> 0 0 4 0
>>
>>
>> Each row correspond to an author (A, B, C, D) and the values says how many
>> times this author have cited the other authors. Hence the first row says
>> that author A have cited author B four times and author D one time. Thus
>> the
>> matrix represents two groups of authors: (A,B) and (C,D) who cites each
>> other. But there is also a weak link between the groups. In reality this
>> matrix is much bigger and very sparce but it still consists of distinct
>> groups of authors.
>>
>>
>> My problem is that when I cluster the matrix using pam, clara or agnes the
>> algorithms does not find the obvious clusters. I have tried to turn it
>> into
>> a dissimilarity matrix before clustering but that did not help either.
>>
>>
>> The layout of the clustering is not that important to me, my primary
>> interest is the to get the right nodes into the right clusters.
>>
>>
>>
>> Sincerely
>>
>>
>> Henrik
>>
>>[[alternative HTML version deleted]]
>>
>> __
>> R-help@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-help
>> PLEASE do read the posting guide
>> http://www.R-project.org/posting-guide.html
>> and provide commented, minimal, self-contained, reproducible code.
>>
>
> -
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Handling character string

2010-06-12 Thread Megh Dal
Thanks Jim for this reply. This is what I was looking for. However, would you 
please explain the meaning of "^[[:space:]]*" or '[[:space:]]+$'? 
When should I use "^", "*", or "+$"?
Thanks for your time.

--- On Sat, 6/12/10, jim holtman  wrote:

From: jim holtman 
Subject: Re: [R] Handling character string
To: "Megh Dal" 
Cc: "Erik Iverson" , r-h...@stat.math.ethz.ch
Date: Saturday, June 12, 2010, 10:18 PM

This is probably what you want:

> sub("^[[:space:]]*", "",  "   Now is the time")
[1] "Now is the time"
>

You need to anchor it at the beginning with '^'

On Sat, Jun 12, 2010 at 10:29 AM, Megh Dal  wrote:
> Thanks Erik for you reply. You have pointed correctly I want to remove the 
> "space" at the 1st place (if any). In the mean time I have looked into the 
> function sub() and there seems to be one example that mimics my problem :
>> str <- '   Now is the time      '> sub('[[:space:]]+$', '', str)[1] "   Now 
>> is the time"
>
> However it removes the space if it is at the last position. I have tried with 
> different combinations like "sub('[[:space:]]-$', '', str)", 
> "sub('$+[[:space:]]+$', '', str)" etc, none is working if space is at the 1st 
> position.
> What would be the correct approach?
> Thanks,
> --- On Sat, 6/12/10, Erik Iverson  wrote:
>
> From: Erik Iverson 
> Subject: Re: [R] Handling character string
> To: "Megh Dal" 
> Cc: r-h...@stat.math.ethz.ch
> Date: Saturday, June 12, 2010, 2:36 AM
>
>
>
> Megh Dal wrote:
>> Dear all, Is there any R function to say these 2 character strings
>> "temp"  and " temp" are actually same? If I type following code R
>> says there are indeed different :
>>> "temp"  == " temp"[1] FALSE
>
> You don't say how you're defining "same", but it definitely requires more 
> explanation, since they are not the same.  Why should those two strings be 
> the same in your mind?  Do you want to remove leading white space, all white 
> space, just one space, etc?
>
> You might find the examples in ?sub useful.
>
>
>
>
>
>
>        [[alternative HTML version deleted]]
>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] who know how to program Bartlett lewis model using R

2010-06-12 Thread Huda Ibrahim
I am a graduate student working on rainfall disaggregation using the 
Bartlett-Lewis model, but I have a problem: how can I write a program to 
estimate the model parameters and use it for simulation? Can anyone 
experienced in R help?
 
thanks for all in advance


  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Compiling R with multi-threaded BLAS math libraries - why not actually ?

2010-06-12 Thread Barry Rowlingson
On Sat, Jun 12, 2010 at 3:16 PM, Tal Galili  wrote:
> Hello Douglas,
>
> Thank you for the BLAST!=BLAS correction (I imagine my slip was due to some
> working I have done recently with an RNA analysis software called BLAST).
>
> Also, thank you for the very interesting posting here and in your reply to
> David's post.
>
> My current conclusion from this thread are that:
> 1) This should be interesting ONLY if I will be working on large matrices
> and doing "very specific
> kinds of operations". (I imagine David's examples on his post demonstrate
> those)
> 2) In case I would like to do it, I will need to go follow the actions
> detailed here (thank you for the pointer):
> http://cran.r-project.org/bin/windows/base/rw-FAQ.html#Can-I-use-a-fast-BLAS_003f
> And more or less pray that my computer specification are relevant.
> (Although
> I do wonder how does REvolution distribution succeeds in doing this without
> making the user do any more steps then just installing R)
>

 Another option if you discover that your algorithm benefits from
massive parallelism is computing on the GPU. There are BLAS
implementations, for example:

http://gpgpu.org/index.php?s=BLAS&searchbutton=Search

 but integration with R is another issue (and I have a vague memory it
was covered in threads on R-dev recently, which is probably where this
thread should be anyway).

Barry

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Handling character string

2010-06-12 Thread jim holtman
This is probably what you want:

> sub("^[[:space:]]*", "",  "   Now is the time")
[1] "Now is the time"
>

You need to anchor it at the beginning with '^'

On Sat, Jun 12, 2010 at 10:29 AM, Megh Dal  wrote:
> Thanks Erik for you reply. You have pointed correctly I want to remove the 
> "space" at the 1st place (if any). In the mean time I have looked into the 
> function sub() and there seems to be one example that mimics my problem :
>> str <- '   Now is the time      '> sub('[[:space:]]+$', '', str)[1] "   Now 
>> is the time"
>
> However it removes the space if it is at the last position. I have tried with 
> different combinations like "sub('[[:space:]]-$', '', str)", 
> "sub('$+[[:space:]]+$', '', str)" etc, none is working if space is at the 1st 
> position.
> What would be the correct approach?
> Thanks,
> --- On Sat, 6/12/10, Erik Iverson  wrote:
>
> From: Erik Iverson 
> Subject: Re: [R] Handling character string
> To: "Megh Dal" 
> Cc: r-h...@stat.math.ethz.ch
> Date: Saturday, June 12, 2010, 2:36 AM
>
>
>
> Megh Dal wrote:
>> Dear all, Is there any R function to say these 2 character strings
>> "temp"  and " temp" are actually same? If I type following code R
>> says there are indeed different :
>>> "temp"  == " temp"[1] FALSE
>
> You don't say how you're defining "same", but it definitely requires more 
> explanation, since they are not the same.  Why should those two strings be 
> the same in your mind?  Do you want to remove leading white space, all white 
> space, just one space, etc?
>
> You might find the examples in ?sub useful.
>
>
>
>
>
>
>        [[alternative HTML version deleted]]
>
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>



-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] generating ordered, random decimal fractions

2010-06-12 Thread Peter Ehlers

I must be missing something; can't you just use, e.g., one of:

 sort(round(runif(10), 3))

 sort(sample(1000, 10, TRUE)/1000)

?

  -Peter Ehlers


On 2010-06-12 7:49, kurt_h...@nps.gov wrote:

Greetings
How do I do this in R?  Checking the Cran site produces a bewildering array
of packages that I can't seem to find to load.  Surely the main program has
this function?
Cheers
Kurt

***
Kurt Lewis Helf, Ph.D.
Invertebrate Ecologist
National Park Service
Cumberland Piedmont Network
P.O. Box 8
Mammoth Cave, KY 42259
Ph: 270-758-2163
Lab: 270-758-2151
Fax: 270-758-2609

Science, in constantly seeking real explanations, reveals the true majesty
of our world in all its complexity.
-Richard Dawkins

The scientific tradition is distinguished from the pre-scientific tradition
in having two layers.  Like the latter it passes on its theories but it
also passes on a critical attitude towards them.  The theories are passed
on not as dogmas but rather with the challenge to discuss them and improve
upon them.
-Karl Popper

...consider yourself a guest in the home of other creatures as significant
as yourself.
-Wayside at Wilderness Threshold in McKittrick Canyon, Guadalupe Mountains
National Park, TX

Cumberland Piedmont Network Forest Pest Monitoring Website
http://science.nature.nps.gov/im/units/cupn/monitor/forestpest/forest_pests.cfm


Cumberland Piedmont Network Cave Cricket Monitoring Website:
http://science.nature.nps.gov/im/units/cupn/monitor/cavecrickets/cavecrickets.cfm



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Transforming simulation data which is spread acrossmanyfiles into a barplot

2010-06-12 Thread Ian Bentley
Thanks Gabor - I was able to use that for my purposes.

On 11 June 2010 16:27, Bert Gunter  wrote:

> So two time series? Fair enough. But less is more. Plot them as separates
> series of points connected by lines, different colors for the two different
> series. Or as two trellises plots. You may also wish to overlay a smooth to
> help the reader see the "trend"(e.g via a loess or other nonparametric
> smooth, or perhaps just a fitted line).
>
> The only part of a bar that conveys information is the top. The rest of the
> fill is "chartjunk" (Tufte's term) and distracts.
>
>
> I'll keep this in mind.  I am just using this chart for my own analysis
now, and probably won't include it later.


> Bert Gunter
> Genentech Nonclinical Biostatistics
>
>
>
> -Original Message-
> From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> On
> Behalf Of Ian Bentley
> Sent: Friday, June 11, 2010 12:15 PM
> To: Bert Gunter
> Cc: r-help@r-project.org; Hadley Wickham
> Subject: Re: [R] Transforming simulation data which is spread
> acrossmanyfiles into a barplot
>
> I'm not trying to see the relation between sent and received, but rather to
> show how these grow across the increasing complexity of the 50 data points.
>
> On 11 June 2010 15:02, Bert Gunter  wrote:
>
> > Ouch! Lousy plot. Instead, plot the  50 (mean sent, mean received)pairs
> as
> > a
> > y vs x scatterplot to see the relationship.
> >
> > Bert Gunter
> > Genentech Nonclinical Biostatistics
> >
> >
> >
> > -Original Message-
> > From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org]
> > On
> > Behalf Of Hadley Wickham
> > Sent: Friday, June 11, 2010 11:53 AM
> > To: Ian Bentley
> > Cc: r-help@r-project.org
> > Subject: Re: [R] Transforming simulation data which is spread across
> > manyfiles into a barplot
> >
> > On Fri, Jun 11, 2010 at 1:32 PM, Ian Bentley 
> > wrote:
> > > I'm an R newbie, and I'm just trying to use some of it's graphing
> > > capabilities, but I'm a bit stuck - basically in massaging the already
> > > available data into a format R likes.
> > >
> > > I have a simulation environment which produces logs, which represent a
> > > number of different things.  I then run a python script on this data,
> and
> > > putting it in a nicer format.  Essentially, the python script reduces
> the
> > > number of files by two orders of magnitude.
> > >
> > > What I'm left with, is a number of files, which each have two columns
> of
> > > data in them.
> > > The files look something like this:
> > > --1000.log--
> > > Sent Received
> > > 405.0 3832.0
> > > 176.0 1742.0
> > > 176.0 1766.0
> > > 176.0 1240.0
> > > 356.0 3396.0
> > > ...
> > >
> > > This file - called 1000.log - represents a data point at 1000. What I'd
> > like
> > > to do is to use a loop, to read in 50 or so of these files, and then
> > produce
> > > a stacked barplot.  Ideally, the stacked barplot would have 1 bar per
> > file,
> > > and two stacks per bar.  The first stack would be the mean of the sent,
> > and
> > > the second would be the mean of the received.
> > >
> > > I've used a loop to read files in R before, something like this ---
> > >
> > > for (i in 1:50){
> > >tmpFile <- paste(base, i*100, ".log", sep="")
> > >tmp <- read.table(tmpFile)
> > > }
> > >
> >
> > # Load data
> > library(plyr)
> >
> > paths <- dir(base, pattern = "\\.log", full = TRUE)
> > names(paths) <- basename(paths)
> >
> > df <- ddply(paths, read.table)
> >
> > # Compute averages:
> > avg <- ddply(df, ".id", summarise,
> >  sent = mean(sent),
> >  received = mean(received))
> >
> > You can read more about plyr at http://had.co.nz/plyr.
> >
> > Hadley
> >
> > --
> > Assistant Professor / Dobelman Family Junior Chair
> > Department of Statistics / Rice University
> > http://had.co.nz/
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> > http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
> >
> >
>
>
> --
> Ian Bentley
> M.Sc. Candidate
> Queen's University
> Kingston, Ontario
>
> [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>
>


-- 
Ian Bentley
M.Sc. Candidate
Queen's University
Kingston, Ontario

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Fast way to compute largest eigenvector

2010-06-12 Thread MInh Tang
Hello all,

I was wondering if there is a function in R that only computes the eigenvector 
corresponding to the largest/smallest eigenvalue of an arbitrary real matrix. 

Thanks
Minh
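[In the absence of a reply in this digest, a minimal base-R sketch of one common approach, power iteration, which only needs matrix-vector products. It assumes the dominant eigenvalue is real and well separated (true for the symmetric example below); eigen() is shown only for comparison.]

power_iter <- function(A, tol = 1e-9, maxit = 1000) {
  v <- rnorm(ncol(A)); v <- v / sqrt(sum(v^2))
  for (i in seq_len(maxit)) {
    w <- drop(A %*% v); w <- w / sqrt(sum(w^2))   # multiply and renormalise
    if (sum(abs(w - v)) < tol) break
    v <- w
  }
  w
}
A <- crossprod(matrix(rnorm(25), 5, 5))   # symmetric test matrix
power_iter(A)
eigen(A)$vectors[, 1]                     # same vector, possibly with the sign flipped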

-- 
Living on Earth may be expensive, but it includes an annual free trip
around the Sun.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Can one get a list of recommended packages?

2010-06-12 Thread Dr. David Kirkby

On 06/12/10 03:31 PM, Tal Galili wrote:

Hello David,

I am not sure I understood your question.


Sorry, perhaps I should have rephrased it better.


Are you asking what are the packages that the R release comes with?


Sort of. When R is configured, there is an option

  --with-recommended-packages
  use/install recommended R packages [yes]
which defaults to yes. So I assume that this installs some recommended, but not 
essential packages.


We are building R in Sage with no options, so various non-essential packages are
built because that is the default, though some of them (Matrix being one) are
not building on Solaris.


So when R is tested a failure occurs.

The build of R appears to succeed, but a check shows some problems - see here


http://sage.math.washington.edu/home/mpatel/trac/8306/r-2.10.1.p2.log

What I'd like to find is a list of packages (like Matrix) which would be 
installed with a default installation of R, but are missing from my installation.


We would like something that can quickly check if it is built or not - we don't 
wish to run an extensive time-consuming test suite.
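[A rough sketch, not from the thread, of one way to do such a check from within R. Note that .get_standard_package_names() is an unexported helper in the tools package, so treat this as an assumption that may change across R versions.]

std  <- tools:::.get_standard_package_names()$recommended       # names R treats as "recommended"
inst <- rownames(installed.packages(priority = "recommended"))  # those actually installed
setdiff(std, inst)   # recommended packages that are missing / failed to build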



Or are you asking what recommended packages one should have when installing
R?  (There is a good list to start with
here
)


No,



Also, are you asking how to not need to install new packages when upgrading
R?


No


(For that, you can have a look at a post I wrote on an alternative way for
upgrading R on 
windows,
which might give relevant ideas for your case as well)


Best,
Tal


Thank you.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problems with BRugs

2010-06-12 Thread Uwe Ligges
With BRugsFit() in the current version of BRugs, inits that are not given are 
not generated automatically (this may change with the next version).


Hence you need to do it manually as in:


library("BRugs")
modelCheck("test.bug")
modelData(bugsData(Data))
modelCompile(numChains=1)
modelInits(bugsInits(Inits))
modelGenInits()
modelUpdate(1000)
samplesSet(Parameters)
modelUpdate(1)
p1.sim <- samplesStats("*")


Best,
Uwe Ligges




On 10.06.2010 17:54, R Heberto Ghezzo, Dr wrote:

Hello, I am trying to run some examples from the book of P.Congdon. If I run 
the following script
# Program 7.2 Bayesian Statistical Modelling - Peter Congdon
#
library(R2WinBUGS)
setwd("c:/temp/R")
mo<- function() {
   rho ~ dbeta(1,1)
   th ~ dgamma(0.001,0.001)
   Y[1] ~ dpois(th)
   for (t in 2:14) {Y[t] ~ dpois(mu[t])
   for (k in 1:Y[t-1]+1) {B[k,t] ~ dbern(rho)}
   B.s[t]<- sum(B[1:Y[t-1]+1,t])-B[1,t]
   mu[t]<- B.s[t] +th*(1-rho)}
}
write.model(mo,con="test.bug")
Data<-
list(Y=c(0,1,2,3,1,4,9,18,23,31,20,25,37,45))
Inits<- function() {
   list(rho=0.8,th=5)
}
Parameters<- c("rho","mu")
#
p1.sim<- bugs(model.file="test.bug",
Data,
Inits,
n.chains=1,
Parameters,
n.burnin = 1000,
n.iter = 1,
n.thin=2,
program="WinBUGS",
bugs.directory=Sys.getenv("DirWinBUGS")
   )
#
and this works OK given answers similar to the book
But changing the library to BRugs, write.model to writeModel and the call to

#
p1.sim<- BRugsFit("test.bug",

+Data,
+Inits,
+numChains=1,
+Parameters,
+nBurnin = 1000,
+nIter = 1,
+nThin=2
+   )
I get :

model is syntactically correct
data loaded
model compiled
[1] "C:\\Users\\User\\AppData\\Local\\Temp\\RtmphLlekC/inits1.txt"
Initializing chain 1: initial values loaded but this or another chain contain 
uninitialized variables
model must be initialized before updating
can not calculate deviance for this model
Error in samplesSet(parametersToSave) :
   model must be initialized before monitors used

#

I tried several forms of the Inits and it does not work
Can somebody tell me where is the mistake I am making?
Thanks for any help
Heberto Ghezzo
Montreal
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Can one get a list of recommended packages?

2010-06-12 Thread Douglas Bates
On Sat, Jun 12, 2010 at 8:37 AM, Dr. David Kirkby
 wrote:
> R 2.10.1 is used in the Sage maths project. Several recommended packages
> (Matrix, class, mgcv, nnet, rpart, spatial, and survival) are failing to
> build on Solaris 10 (SPARC).

Have you checked the dependencies for those packages?  Some require GNU make.

> We would like to be able to get a list of the recommended packages for R
> 2.10.1, but ideally via a call to R, so it is not necessary to update that
> list every time a new version of R is released. We do not want to access the
> Internet to get this information.

> Is there a way in R to list the recommended packages?

I'm not sure I understand the logic of this.  If you are going to
build R then presumably you have the tar.gz file which contains the
sources for the recommended packages in the subdirectory
src/library/Recommended/. Why not get the list from there?

$ cd ~/src/R-devel/src/library/Recommended/
$ ls *.tgz
boot.tgz codetools.tgz   lattice.tgz  mgcv.tgz  rpart.tgz
class.tgzforeign.tgz MASS.tgz nlme.tgz  spatial.tgz
cluster.tgz  KernSmooth.tgz  Matrix.tgz   nnet.tgz  survival.tgz

> Better still, is there a way to list the recommended packages which have not
> been installed, so getting a list of any failures?

Again, this seems to be a rather convoluted approach.  Why not check
why the packages don't install properly?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Logic with regexps

2010-06-12 Thread Ted Harding
Thanks, Gabor, for the initiation to perly regexps! I've only
been used to extended ones till now. A pity, perhaps, that
"perl=TRUE" is not an option for the likes of browseEnv(),
help.search(), list.files() and ls() (which take extended regexps),
but one can always assign the output and then grep(...,perl=TRUE)
on that, as you illustrate.

It would seem that these "look-ahead" features could allow quite
complex logical conditions to be built up (though with increasing
unreadability and head-scratching)!

Best wishes,
Ted.

On 12-Jun-10 14:10:13, Gabor Grothendieck wrote:
> On Sat, Jun 12, 2010 at 5:38 AM, Ted Harding
>  wrote:
>> Greetings,
>> The following question has come up in an off-list discussion.
>> Is it possible to construct a regular expression 'rex' out of
>> two given regular expressions 'rex1' and 'rex2', such that a
>> character string X matches 'rex' if and only if X matches 'rex1'
>> AND X does not match 'rex2'?
>>
>> The desired end result can be achieved by logically combining
>> the results of a grep using 'rex1' with the results of a grep
>> on 'rex2', illustrated by the following example:
>>
>> ## Given character vector X (below), and two regular expressions
>> ## rex1="abc", rex2="ijk", to return the elements of X which match
>> ## rex1 AND do not match rex2:
>> X <- c(
>>   "abcdefg",   # Yes
>>   "abchijk",   # No
>>   "mnopqrs",   # No
>>   "ijkpqrs",   # No
>>   "abcpqrs" )  # Yes
>> rex1 <- "abc"
>> rex2 <- "ijk"
>> ix1<- grep(rex1,X)
>> ix2<- grep(rex2,X)
>> X[ix1[!(ix1 %in% ix2)]]
>> ## [1] "abcdefg" "abcpqrs"
>>
>> Question: is there a way to construct 'rex' from 'rex1' and 'rex2'
>> such that
>>
>>   X[grep(rex,X)]
>>
>> would given the same result?
> 
> Try this:
> 
>rex <- "^(?!(.*ijk)).*abc"
>grep(rex, X, perl = TRUE)
> 
> Also note that X[grep(rex, X, perl = TRUE)] can be written:
> 
>grep(rex, X, perl = TRUE, value = TRUE)
> 
> See ?regex for more info.  Further regular expression links can be
> found in the External Links box on the gsubfn home page at
> http://gsubfn.googlecode.com
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


E-Mail: (Ted Harding) 
Fax-to-email: +44 (0)870 094 0861
Date: 12-Jun-10   Time: 16:46:56
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Handling character string

2010-06-12 Thread Megh Dal
Thanks Erik for your reply. You have pointed out correctly that I want to remove the 
"space" at the 1st place (if any). In the meantime I have looked into the 
function sub() and there seems to be one example that mimics my problem :
> str <- '   Now is the time      '
> sub('[[:space:]]+$', '', str)
[1] "   Now is the time"

However it removes the space if it is at the last position. I have tried with 
different combinations like "sub('[[:space:]]-$', '', str)", 
"sub('$+[[:space:]]+$', '', str)" etc, none is working if space is at the 1st 
position.
What would be the correct approach?
Thanks,
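[A minimal sketch, not from the thread: anchoring the pattern at the start with '^' strips leading whitespace, and combining both anchors trims both ends.]

str <- "   temp"
sub("^[[:space:]]+", "", str)                     # "   temp" -> "temp"
gsub("^[[:space:]]+|[[:space:]]+$", "", str)      # trim both ends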
--- On Sat, 6/12/10, Erik Iverson  wrote:

From: Erik Iverson 
Subject: Re: [R] Handling character string
To: "Megh Dal" 
Cc: r-h...@stat.math.ethz.ch
Date: Saturday, June 12, 2010, 2:36 AM



Megh Dal wrote:
> Dear all, Is there any R function to say these 2 character strings
> "temp"  and " temp" are actually same? If I type following code R
> says there are indeed different :
>> "temp"  == " temp"[1] FALSE

You don't say how you're defining "same", but it definitely requires more 
explanation, since they are not the same.  Why should those two strings be the 
same in your mind?  Do you want to remove leading white space, all white space, 
just one space, etc?

You might find the examples in ?sub useful.





  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Displaying "homogeneous groups" in aov post-hoc results ?

2010-06-12 Thread Tal Galili
Thank you very much Hadley, exactly what I was looking for.

Tal

Contact
Details:---
Contact me: tal.gal...@gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--




On Sat, Jun 12, 2010 at 5:47 PM, Hadley Wickham  wrote:

> multcompView

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] nonparametric density and probability methods

2010-06-12 Thread Steve Friedman
Hello,

I tried to post this earlier, but it seems that it did not appear on the
list. If you've rec'd 2 m

I'm trying to calculate non-parametric probabilities using the np package
and having some difficulties.
OS is Windows, R version 2.11.1

Here is what I've done so far.

library(np)

veg <- data.frame(factor(Physiogomy), meanAnnualDepthAve, TP)

attach(veg)   # for clarification: dim(veg) returns 1292 3

fy.x <- npcdens(veg$factor.Physiogomy ~ veg$meanAnnualDepthAve, nmulti=1)
#  this works, but I haven't found any information explaining what the
nmulti=1 term is doing?  Does this set the number of levels in the factor?
My data actually has 8 types, can I develop this to treat each one in a
single function ?

veg.eval <- data.frame(Physiogomy = factor("Marl"), meanAnnualDepthAve =
seq(min(meanAnnualDepthAve), max(meanAnnualDepthAve)))
#  This also works, however where do the 4755 records originate?

str(veg.eval)
' data.frame':   4755 obs of 2 variables
 $  Physiogomy: factor w / 1 level  "Marl" :   1   1
1  1  
 $ meanAnnualDepthAve : num   -592, -591 - 590  - 578

because the data frame veg only contains 1292 records there is a mismatch
with the 4755 records.  Why are so many records produced in the veg.eval
statement and how can I constrain it to be consistent with the dimensions of
veg?
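[A guess at the 4755, not from the thread: seq() with no 'by' argument steps by 1, so seq(min(...), max(...)) returns one value per unit of depth across the whole range of meanAnnualDepthAve (apparently about 4755 units here), regardless of nrow(veg). A minimal sketch that instead forces the evaluation grid to have exactly nrow(veg) points:]

depth.grid <- seq(min(meanAnnualDepthAve), max(meanAnnualDepthAve),
                  length.out = nrow(veg))          # 1292 evaluation points
veg.eval <- data.frame(Physiogomy = factor("Marl"), meanAnnualDepthAve = depth.grid)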

plot(x, y, type = "l", lty="2", col='red' , xlab = "Mean Annual Depth",
ylab="Estimated Prob of Marl")
 lines(veg.eval$meanAnnualDepthAve, predict(fy.x, newdata=veg.eval), col='blue')

I'm following an example I found Here:
http://en.wikipedia.org/wiki/Density_estimation

Your help is greatly appreciated.

Thanks
Steve

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Displaying "homogeneous groups" in aov post-hoc results ?

2010-06-12 Thread Hadley Wickham
Try multcompView

Hadley
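[A sketch, not from the thread, of what that looks like, using the Tukey-adjusted p-values from the output quoted below; multcompLetters() in the multcompView package converts a named vector of p-values into letter codes for the homogeneous groups.]

library(multcompView)
pvals <- c("B-A" = 0.4165355, "C-A" = 0.2415169, "C-B" = 0.0323006)
multcompLetters(pvals)   # levels sharing a letter are not significantly different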

On Sat, Jun 12, 2010 at 8:42 AM, Tal Galili  wrote:
> Hello dear R-help mailing list,
>
> A friend of mine teaches a regression and experimental design course and
> asked me the following question.
>
> She is trying to find a way to display the "homogeneous groups" (after
> performing tukey test on an aov object).
>
> here's an example for what she means by "homogeneous groups":
> She did one way anova and got these results for tukey test:
>> TukeyHSD(hci_anova)
>  Tukey multiple comparisons of means
>    95% family-wise confidence level
> Fit: aov(formula = time ~ interface, data = hci)
> $interface
>         diff          lwr         upr       p adj
> B-A  -23.75   -73.732836    26.23284   0.4165355
> C-A    31.25   -18.732836    81.23284   0.2415169
> C-B    55.00     5.017164   104.98284   0.0323006
>
> now, she says, since she can see that the only significant difference is
> between C and B treatments, then B and A are on the same group (no
> significant difference),  C and A are on the same group, but B anc C are not
> in the same group. so we should have two groups:
> A and B
> A and C
>
> Apparently SPSS output gives the homogeneous subsets.
>
>
> Do you know of a way in R to do that?
> Also, since I am unfamiliar with this presentation, do you believe it is
> useful/has value?
>
>
> Thanks,
> Tal
>
>
>
> Contact
> Details:---
> Contact me: tal.gal...@gmail.com |  972-52-7275845
> Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
> www.r-statistics.com (English)
> --
>
>        [[alternative HTML version deleted]]
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>



-- 
Assistant Professor / Dobelman Family Junior Chair
Department of Statistics / Rice University
http://had.co.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Can one get a list of recommended packages?

2010-06-12 Thread Tal Galili
Hello David,

I am not sure I understood your question.

Are you asking what are the packages that the R release comes with?
Or are you asking what recommended packages one should have when installing
R?  (There is a good list to start with
here
)

Also, are you asking how to not need to install new packages when upgrading
R?
(For that, you can have a look at a post I wrote on an alternative way for
upgrading R on 
windows,
which might give relevant ideas for your case as well)


Best,
Tal


Contact
Details:---
Contact me: tal.gal...@gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--




On Sat, Jun 12, 2010 at 4:37 PM, Dr. David Kirkby
wrote:

> R 2.10.1 is used in the Sage maths project. Several recommended packages
> (Matrix, class, mgcv, nnet, rpart, spatial, and survival) are failing to
> build on Solaris 10 (SPARC).
>
> We would like to be able to get a list of the recommended packages for R
> 2.10.1, but ideally via a call to R, so it is not necessary to update that
> list every time a new version of R is released. We do not want to access the
> Internet to get this information.
>
> Is there a way in R to list the recommended packages?
>
> Better still, is there a way to list the recommended packages which have
> not been installed, so getting a list of any failures?
>
> Dave
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Compiling R with multi-threaded BLAS math libraries - why not actually ?

2010-06-12 Thread Tal Galili
Hello Douglas,

Thank you for the BLAST!=BLAS correction (I imagine my slip was due to some
work I have done recently with RNA analysis software called BLAST).

Also, thank you for the very interesting posting here and in your reply to
David's post.

My current conclusion from this thread are that:
1) This should be interesting ONLY if I will be working on large matrices
and doing "very specific
kinds of operations". (I imagine David's examples on his post demonstrate
those)
2) In case I would like to do it, I will need to go follow the actions
detailed here (thank you for the pointer):
http://cran.r-project.org/bin/windows/base/rw-FAQ.html#Can-I-use-a-fast-BLAS_003f
And more or less pray that my computer specification are relevant.
(Although
I do wonder how the REvolution distribution succeeds in doing this without
making the user do any more steps than just installing R)


Thanks everyone for the replies so far.


With much respect,
Tal









Contact
Details:---
Contact me: tal.gal...@gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--




On Sat, Jun 12, 2010 at 4:39 PM, Douglas Bates  wrote:

> On Sat, Jun 12, 2010 at 6:18 AM, Tal Galili  wrote:
> > Hello Gabor, Matt, Dirk.
> >
> > Thank you all for clarifying the situation.
> >
> > So if I understand correctly then:
> > 1) Changing the BLAST would require specific BLAST per computer
> > configuration (OS/chipset).
>
> It's BLAS (Basic Linear Algebra Subroutines) not BLAST.  Normally I
> wouldn't be picky like this but if you plan to use a search engine you
> won't find anything helpful under BLAST.
>
> > 2) The advantage would be available only when doing  _lots_ of linear
> > algebra
>
> You need to be working with large matrices and doing very specific
> kinds of operations before the time savings of multiple threads
> overcomes the communications overhead.  In fact, sometimes the
> accelerated BLAS can slow down numerical linear algebra calculations,
> such as sparse matrix operations.
>
> > So I am left wondering for each item:
> > 1) How do you find a "better" (e.g: more suited) BLAST for your system?
> (I
> > am sure there are tutorials for that, but if someone here has
> > a recommendation on one - it would be nice)
>
> As Dirk has pointed out, it is a simple process.
>
> Step 1: Install Ubuntu or some other Debian-based Linux system.
> Step 2: type
> sudo apt-get install r-base-core libatlas3gf-base
>
> > 2) In what situations do we use __lots" of linear algebra?  For example,
> I
> > have cases where I performed many linear regressions on a problem, would
> > that be a case the BLAST engine be effecting?
>
> Re-read David's posting.  The lm and glm functions do not benefit
> substantially from accelerated BLAS because the underlying
> computational methods only use level-1 BLAS. (David said they don't
> use BLAS but that is not quite correct.  I posted a follow-up comment
> describing why lm and glm don't benefit from accelerated BLAS.)
>
> > I am trying to understand if REvolution emphasis on this is a
> > marketing gimmick, or are they insisting on something that some R users
> > might wish to take into account.  In which case I would, naturally (for
> many
> > reasons), prefer to be able to tweak the native R system instead of
> needing
> > to work with REvolution distribution.
>
> As those who, in Duncan Murdoch's phrase, found the situation
> sufficiently extreme to cause them to read the documentation, would
> know, descriptions of using accelerated BLAS with R have been in the R
> administration manual for years.  Admittedly it is not a
> straightforward process but that is because, like so many other
> things, it needs to be handled differently on each operating system.
> In fact it is even worse because the procedure can be specific to the
> operating system and the processor architecture and, sometimes, even
> the task.  Again, re-read David's posting where he says that you
> probably don't want to combine multiple MKL threads with explicit
> parallel programming in R using doSMP.
>
> David's posting (appropriately) shows very specific examples that
> benefit greatly from accelerated BLAS.   Notice that these examples
> incorporate very large matrices.  The first two examples involve
> forming chol(crossprod(A)) where A is 1 by 5000.  If you have very
> specific structure in A this calculation might be meaningful.  In
> general, it is meaningless because crossprod(A) is almost certainly
> singular.  (I am vague on the details but perhaps someone who is
> familiar with the distribution of singular values of matrices can
> explain the theoretical results.  There is a whole field of statistics
> research dealing with sparsity in the estimation of covariance
> matrices that attacks exactly this "large n, large p" rank deficiency
> problem.)

Re: [R] Logic with regexps

2010-06-12 Thread Gabor Grothendieck
On Sat, Jun 12, 2010 at 5:38 AM, Ted Harding
 wrote:
> Greetings,
> The following question has come up in an off-list discussion.
> Is it possible to construct a regular expression 'rex' out of
> two given regular expressions 'rex1' and 'rex2', such that a
> character string X matches 'rex' if and only if X matches 'rex1'
> AND X does not match 'rex2'?
>
> The desired end result can be achieved by logically combining
> the results of a grep using 'rex1' with the results of a grep
> on 'rex2', illustrated by the following example:
>
> ## Given character vector X (below), and two regular expressions
> ## rex1="abc", rex2="ijk", to return the elements of X which match
> ## rex1 AND do not match rex2:
> X <- c(
>  "abcdefg",       # Yes
>  "abchijk",       # No
>  "mnopqrs",       # No
>  "ijkpqrs",       # No
>  "abcpqrs" )      # Yes
> rex1 <- "abc"
> rex2 <- "ijk"
> ix1<- grep(rex1,X)
> ix2<- grep(rex2,X)
> X[ix1[!(ix1 %in% ix2)]]
> ## [1] "abcdefg" "abcpqrs"
>
> Question: is there a way to construct 'rex' from 'rex1' and 'rex2'
> such that
>
>  X[grep(rex,X)]
>
> would given the same result?

Try this:

   rex <- "^(?!(.*ijk)).*abc"
   grep(rex, X, perl = TRUE)

Also note that X[grep(rex, X, perl = TRUE)] can be written:

   grep(rex, X, perl = TRUE, value = TRUE)

See ?regex for more info.  Further regular expression links can be
found in the External Links box on the gsubfn home page at
http://gsubfn.googlecode.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] meta analysis with repeated measure-designs?

2010-06-12 Thread Viechtbauer Wolfgang (STAT)
Dear Gerrit,

the most appropriate approach for data of this type would be a proper 
multivariate meta-analytic model (along the lines of Kalaian & Raudenbush, 
1996). Since you do not know the correlations of the reaction time measurements 
across conditions for the within-subject designs, a simple solution is to 
"guestimate" those correlations and then conduct sensitivity analyses to make 
sure your conclusions do not depend on those guestimates.
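[For illustration, a minimal sketch, not from the thread, of that sensitivity idea: a within-subject standardized mean difference recomputed over a grid of assumed correlations. The means and SDs below are made-up numbers.]

smd_paired <- function(m1, m2, sd1, sd2, r) {
  sd_diff <- sqrt(sd1^2 + sd2^2 - 2 * r * sd1 * sd2)   # SD of the paired differences
  (m1 - m2) / sd_diff
}
sapply(c(0.3, 0.5, 0.7), function(r) smd_paired(650, 600, 120, 110, r))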

Best,

--
Wolfgang Viechtbauer                        http://www.wvbauer.com/
Department of Methodology and StatisticsTel: +31 (0)43 388-2277
School for Public Health and Primary Care   Office Location:
Maastricht University, P.O. Box 616 Room B2.01 (second floor)
6200 MD Maastricht, The Netherlands Debyeplein 1 (Randwyck)


Original Message
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On
Behalf Of Gerrit Hirschfeld Sent: Saturday, June 12, 2010 12:45
To: r-help@r-project.org
Subject: [R] meta analysis with repeated measure-designs?

> Dear all,
>
> I am trying to run a meta analysis of psycholinguistic reaction-time
> experiments with the meta package. The problem is that most of the
> studies have within-subject designs and use repeated measures ANOVAs to
> analyze their data. So at present it seems that there are three
> non-optimal ways to run the analysis.
>
> 1. Using metacont() to estimate effect sizes and standard errors. But as
> the different scores are dependent this would result in biased estimators
> (Dunlap, 1996). Suppose I had the correlations of the measures (which I
> do not), would there be an option to use them in metacont() ?
>
> 2. Use metagen() with an effect size that is based on the reported F for
> the contrasts but has other disadvantages (Bakeman, 2005). The problem I
> am having with this is that I could not find a formula to compute the
> standard error of partial eta squared. Any Ideas?
>
> 3. Use metagen() with r computed from p-values (Rosenthal, 1994) as
> effect size with the problem that sample-size affects p as much as effect
> size.
>
> Is there a fourth way, or data showing that correlations can be neglected
> as long as they are assumed to be similar in the studies?
> Any ideas are much appreciated.
>
> best regards
> Gerrit
>
> __
> Gerrit Hirschfeld, Dipl.-Psych.
>
> Psychologisches Institut II
> Westfälische Wilhelms-Universität
> Fliednerstr. 21
> 48149 Münster
> Germany
>
> psycholinguistics.uni-muenster.de
> GerritHirschfeld.de
> Fon.: +49 (0) 251 83-31378
> Fon.: +49 (0) 234 7960728
> Fax.: +49 (0) 251 83-34104
>
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] generating ordered, random decimal fractions

2010-06-12 Thread Kurt_Helf
Greetings
How do I do this in R?  Checking the CRAN site produces a bewildering array
of packages that I can't seem to find to load.  Surely the main program has
this function?
Cheers
Kurt
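[A minimal base-R sketch, not from the thread, needing no extra package: runif() draws uniform decimal fractions on (0, 1) and sort() puts them in order.]

set.seed(1)        # only for reproducibility
sort(runif(10))    # 10 ordered random decimal fractions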

***
Kurt Lewis Helf, Ph.D.
Invertebrate Ecologist
National Park Service
Cumberland Piedmont Network
P.O. Box 8
Mammoth Cave, KY 42259
Ph: 270-758-2163
Lab: 270-758-2151
Fax: 270-758-2609

Science, in constantly seeking real explanations, reveals the true majesty
of our world in all its complexity.
-Richard Dawkins

The scientific tradition is distinguished from the pre-scientific tradition
in having two layers.  Like the latter it passes on its theories but it
also passes on a critical attitude towards them.  The theories are passed
on not as dogmas but rather with the challenge to discuss them and improve
upon them.
-Karl Popper

...consider yourself a guest in the home of other creatures as significant
as yourself.
-Wayside at Wilderness Threshold in McKittrick Canyon, Guadalupe Mountains
National Park, TX

Cumberland Piedmont Network Forest Pest Monitoring Website
http://science.nature.nps.gov/im/units/cupn/monitor/forestpest/forest_pests.cfm


Cumberland Piedmont Network Cave Cricket Monitoring Website:
http://science.nature.nps.gov/im/units/cupn/monitor/cavecrickets/cavecrickets.cfm

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Displaying "homogeneous groups" in aov post-hoc results ?

2010-06-12 Thread Tal Galili
Hello dear R-help mailing list,

A friend of mine teaches a regression and experimental design course and
asked me the following question.

She is trying to find a way to display the "homogeneous groups" (after
performing tukey test on an aov object).

here's an example for what she means by "homogeneous groups":
She did one way anova and got these results for tukey test:
> TukeyHSD(hci_anova)
  Tukey multiple comparisons of means
95% family-wise confidence level
Fit: aov(formula = time ~ interface, data = hci)
$interface
 diff  lwr upr   p adj
B-A  -23.75   -73.73283626.23284   0.4165355
C-A31.25   -18.73283681.23284   0.2415169
C-B55.00 5.017164   104.98284   0.0323006

now, she says, since she can see that the only significant difference is
between C and B treatments, then B and A are on the same group (no
significant difference),  C and A are on the same group, but B anc C are not
in the same group. so we should have two groups:
A and B
A and C

Apparently SPSS output gives the homogeneous subsets.


Do you know of a way in R to do that?
Also, since I am unfamiliar with this presentation, do you believe it is
useful/has value?


Thanks,
Tal



Contact
Details:---
Contact me: tal.gal...@gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Compiling R with multi-threaded BLAS math libraries - why not actually ?

2010-06-12 Thread Douglas Bates
On Sat, Jun 12, 2010 at 6:18 AM, Tal Galili  wrote:
> Hello Gabor, Matt, Dirk.
>
> Thank you all for clarifying the situation.
>
> So if I understand correctly then:
> 1) Changing the BLAST would require specific BLAST per computer
> configuration (OS/chipset).

It's BLAS (Basic Linear Algebra Subroutines) not BLAST.  Normally I
wouldn't be picky like this but if you plan to use a search engine you
won't find anything helpful under BLAST.

> 2) The advantage would be available only when doing  _lots_ of linear
> algebra

You need to be working with large matrices and doing very specific
kinds of operations before the time savings of multiple threads
overcomes the communications overhead.  In fact, sometimes the
accelerated BLAS can slow down numerical linear algebra calculations,
such as sparse matrix operations.

> So I am left wondering for each item:
> 1) How do you find a "better" (e.g: more suited) BLAST for your system? (I
> am sure there are tutorials for that, but if someone here has
> a recommendation on one - it would be nice)

As Dirk has pointed out, it is a simple process.

Step 1: Install Ubuntu or some other Debian-based Linux system.
Step 2: type
sudo apt-get install r-base-core libatlas3gf-base

> 2) In what situations do we use __lots" of linear algebra?  For example, I
> have cases where I performed many linear regressions on a problem, would
> that be a case the BLAST engine be effecting?

Re-read David's posting.  The lm and glm functions do not benefit
substantially from accelerated BLAS because the underlying
computational methods only use level-1 BLAS. (David said they don't
use BLAS but that is not quite correct.  I posted a follow-up comment
describing why lm and glm don't benefit from accelerated BLAS.)

> I am trying to understand if REvolution emphasis on this is a
> marketing gimmick, or are they insisting on something that some R users
> might wish to take into account.  In which case I would, naturally (for many
> reasons), prefer to be able to tweak the native R system instead of needing
> to work with REvolution distribution.

As those who, in Duncan Murdoch's phrase, found the situation
sufficiently extreme to cause them to read the documentation, would
know, descriptions of using accelerated BLAS with R have been in the R
administration manual for years.  Admittedly it is not a
straightforward process but that is because, like so many other
things, it needs to be handled differently on each operating system.
In fact it is even worse because the procedure can be specific to the
operating system and the processor architecture and, sometimes, even
the task.  Again, re-read David's posting where he says that you
probably don't want to combine multiple MKL threads with explicit
parallel programming in R using doSMP.

David's posting (appropriately) shows very specific examples that
benefit greatly from accelerated BLAS.   Notice that these examples
incorporate very large matrices.  The first two examples involve
forming chol(crossprod(A)) where A is 1 by 5000.  If you have very
specific structure in A this calculation might be meaningful.  In
general, it is meaningless because crossprod(A) is almost certainly
singular.  (I am vague on the details but perhaps someone who is
familiar with the distribution of singular values of matrices can
explain the theoretical results.  There is a whole field of statistics
research dealing with sparsity in the estimation of covariance
matrices that attacks exactly this "large n, large p" rank deficiency
problem.)
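[By way of illustration, a small sketch, not from the posting, of the kind of dense operation being discussed; with an accelerated BLAS the timing below drops markedly, while the reference BLAS runs it single-threaded. Sizes are kept modest so it runs quickly.]

n <- 2000; p <- 500
A <- matrix(rnorm(n * p), n, p)
system.time(ch <- chol(crossprod(A)))   # dominated by level-3 BLAS calls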

> Lastly, following on Matt suggestion, if any has a tutorial on the subject,
> I'd be more then glad to publish it on r-statistics/r-bloggers.
>
> Thanks again to everyone for the detailed replies.
>
> Best,
> Tal
>
>
>
>
> Contact
> Details:---
> Contact me: tal.gal...@gmail.com |  972-52-7275845
> Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
> www.r-statistics.com (English)
> --
>
>
>
>
> On Sat, Jun 12, 2010 at 6:01 AM, Matt Shotwell  wrote:
>
>> In the case of REvolution R, David mentioned using the Intel MKL,
>> proprietary library which may not be distributed in the way R is
>> distributed. Maybe REvolution has a license to redistribute the library.
>> For the others, I suspect Gabor has the right idea, that the R-core team
>> would rather not keep architecture dependent code in the sources,
>> although there is a very small amount already (`grep -R __asm__`).
>>
>> However, I know using Linux (Debian in particular) it is fairly
>> straightforward to build R with `enhanced' BLAS libraries. The R
>> Administration and Installation manual has a pretty good section on
>> linking with enhanced BLAS and LAPACK libs, including the Intel MKL, if
>> you are willing cough up $399, or swear not to use the library
>> commercially or academically.

[R] Can one get a list of recommended packages?

2010-06-12 Thread Dr. David Kirkby
R 2.10.1 is used in the Sage maths project. Several recommended packages 
(Matrix, class, mgcv, nnet, rpart, spatial, and survival) are failing to build 
on Solaris 10 (SPARC).


We would like to be able to get a list of the recommended packages for R 2.10.1, 
but ideally via a call to R, so it is not necessary to update that list every 
time a new version of R is released. We do not want to access the Internet to 
get this information.


Is there a way in R to list the recommended packages?

Better still, is there a way to list the recommended packages which have not 
been installed, so getting a list of any failures?


Dave

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extended Kalman filter for survival data

2010-06-12 Thread Christophe Dutang
Thanks Christos.

BayesX was strongly recommended also by Fahrmeir and other people of  
that field. So it is clearly where I need to look at!

Christophe

iPhone.fan

Le 12 juin 2010 à 04:46, Christos Argyropoulos   
a écrit :

>
> If you mean this paper by Fahrmeir: 
> http://biomet.oxfordjournals.org/cgi/content/abstract/81/2/317 
>  I would recommend  BayesX: http://www.stat.uni-muenchen.de/~bayesx/.
> BayesX interfaces with R and estimates discrete (and continuous)  
> time survival data with penalized regression methods.
> If you are looking for a bona fide Bayesian survival analysis method  
> and do not wish to spend a lot of time coming up and debugging your  
> MCMC implementations in WinBUGS/JAGS/OpenBUGS this would be the way  
> to go.
> If you are strictly after frequentist analyses then you can still  
> run them with BayesX (look at the REML chapter in the manual).
>
>
> Christos Argyropoulos
>
>
> > Date: Mon, 3 May 2010 23:18:28 +0200
> > From: duta...@gmail.com
> > To: r-help@r-project.org
> > Subject: [R] extended Kalman filter for survival data
> >
> > Dear all,
> >
> > I'm looking for an implementation of the generalized extended  
> Kalman filter
> > for survival data, presented in this article Fahrmeir (1994) -  
> 'dynamic
> > modelling for discrete time survival data'. The same author also  
> publish a
> > Bayesian version of the algorithm 'dynamic discrete-time duration  
> models'.
> >
> > The maintainer of the Survival task view advises me to take a look  
> at
> > http://cran.r-project.org/web/packages/sspir/index.html
> > Unfortunately, the pkg implements "only" dynamic GLM.
> >
> > That's why I'm asking on this list, if someone knows a package for  
> this
> > implementation?
> >
> > Thanks in advance
> >
> > Christophe
> >
> >
> >
> > PS: the pseudo vignette of the sspir pkg can be found here
> > http://www.jstatsoft.org/v16/i01/paper .
> >
> > --
> > Christophe DUTANG
> > Ph. D. student at ISFA
> >
> > [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>
> Hotmail: Trusted email with powerful SPAM protection. Sign up now.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Logic with regexps

2010-06-12 Thread Ted Harding
Thanks, Brian. I had indeed overlooked grepl() (too busy delving
into the syntax summary)! That is certainly a useful shortcut of
the construction I had used.

Your "Not in general" implies that using grep() twice (in this example;
more times in more complex combinations) is inevitable -- which of
course was part of the point of the query!

A very helpful reply. Thanks!
Ted.

On 12-Jun-10 12:46:57, Prof Brian Ripley wrote:
> I think you have missed grepl(), e.g.
> 
> X[grepl(rex1, X) & !grepl(rex2, X)]
> 
> grepl is a fairly recent addition (2.9.0) that is used extensively in 
> R's own text-processing operations (e.g. help files, utilities such as 
> 'R CMD check').
> 
> On Sat, 12 Jun 2010, ted.hard...@manchester.ac.uk wrote:
> 
>> Greetings,
>> The following question has come up in an off-list discussion.
>> Is it possible to construct a regular expression 'rex' out of
>> two given regular expressions 'rex1' and 'rex2', such that a
>> character string X matches 'rex' if and only if X matches 'rex1'
>> AND X does not match 'rex2'?
> 
> Not in general.
> 
>> The desired end result can be achieved by logically combining
>> the results of a grep using 'rex1' with the results of a grep
>> on 'rex2', illustrated by the following example:
>>
>> ## Given character vector X (below), and two regular expressions
>> ## rex1="abc", rex2="ijk", to return the elements of X which match
>> ## rex1 AND do not match rex2:
>> X <- c(
>>  "abcdefg",   # Yes
>>  "abchijk",   # No
>>  "mnopqrs",   # No
>>  "ijkpqrs",   # No
>>  "abcpqrs" )  # Yes
>> rex1 <- "abc"
>> rex2 <- "ijk"
>> ix1<- grep(rex1,X)
>> ix2<- grep(rex2,X)
>> X[ix1[!(ix1 %in% ix2)]]
>> ## [1] "abcdefg" "abcpqrs"
>>
>> Question: is there a way to construct 'rex' from 'rex1' and 'rex2'
>> such that
>>
>>  X[grep(rex,X)]
>>
>> would given the same result?
>>
>> I've not managed to find anything helpful in desciptions of
>> regular expression syntax, though one feels it should be possible
>> if this is capable of supporting a logically complete language!
>>
>> With thanks,
>> Ted.
> 
> -- 
> Brian D. Ripley,  rip...@stats.ox.ac.uk
> Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
> University of Oxford, Tel:  +44 1865 272861 (self)
> 1 South Parks Road, +44 1865 272866 (PA)
> Oxford OX1 3TG, UK                Fax:  +44 1865 272595
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.


E-Mail: (Ted Harding) 
Fax-to-email: +44 (0)870 094 0861
Date: 12-Jun-10   Time: 14:14:05
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Logic with regexps

2010-06-12 Thread Prof Brian Ripley

I think you have missed grepl(), e.g.

X[grepl(rex1, X) & !grepl(rex2, X)]

grepl is a fairly recent addition (2.9.0) that is used extensively in 
R's own text-processing operations (e.g. help files, utilities such as 
'R CMD check').


On Sat, 12 Jun 2010, ted.hard...@manchester.ac.uk wrote:


Greetings,
The following question has come up in an off-list discussion.
Is it possible to construct a regular expression 'rex' out of
two given regular expressions 'rex1' and 'rex2', such that a
character string X matches 'rex' if and only if X matches 'rex1'
AND X does not match 'rex2'?


Not in general.


The desired end result can be achieved by logically combining
the results of a grep using 'rex1' with the results of a grep
on 'rex2', illustrated by the following example:

## Given character vector X (below), and two regular expressions
## rex1="abc", rex2="ijk", to return the elements of X which match
## rex1 AND do not match rex2:
X <- c(
 "abcdefg",   # Yes
 "abchijk",   # No
 "mnopqrs",   # No
 "ijkpqrs",   # No
 "abcpqrs" )  # Yes
rex1 <- "abc"
rex2 <- "ijk"
ix1<- grep(rex1,X)
ix2<- grep(rex2,X)
X[ix1[!(ix1 %in% ix2)]]
## [1] "abcdefg" "abcpqrs"

Question: is there a way to construct 'rex' from 'rex1' and 'rex2'
such that

 X[grep(rex,X)]

would given the same result?

I've not managed to find anything helpful in desciptions of
regular expression syntax, though one feels it should be possible
if this is capable of supporting a logically complete language!

With thanks,
Ted.


--
Brian D. Ripley,  rip...@stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel:  +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] help with npcdens function package np

2010-06-12 Thread Steve Friedman
Hello,

I'm trying to calculate non-parametric probabilities using the np package
and having some difficulties.
OS is Windows, R version 2.11.1

Here is what I've done so far.

library(np)

veg <- data.frame(factor(Physiogomy), meanAnnualDepthAve, TP)

attach(veg)   # for clarification: dim(veg) returns 1292 3

fy.x <- npcdens(veg$factor.Physiogomy ~ veg$meanAnnualDepthAve, nmulti=1)
#  this works, but I haven't found any information explaining what the
nmulti=1 term is doing?  Does this set the number of levels in the factor?
My data actually has 8 types, can I develop this to treat each one in a
single function ?

veg.eval <- data.frame(Physiogomy = factor("Marl"), meanAnnualDepthAve =
seq(min(meanAnnualDepthAve), max(meanAnnualDepthAve)))
#  This also works, however where do the 4755 records originate?

str(veg.eval)
' data.frame':   4755 obs of 2 variables
 $  Physiogomy: factor w / 1 level  "Marl" :   1   1
1  1  
 $ meanAnnualDepthAve : num   -592, -591 - 590  - 578

because the data frame veg only contains 1292 records there is a mismatch
with the 4755 records.  Why are so many records produced in the veg.eval
statement and how can I constrain it to be consistent with the dimensions of
veg?

plot(x, y, type = "l", lty="2", col='red' , xlab = "Mean Annual Depth",
ylab="Estimated Prob of Marl")
 lines(veg.eval$meanAnnualDepthAve, predict(fy.x, newdata=veg.eval),
col='blue')

I'm following an example I found Here:
http://en.wikipedia.org/wiki/Density_estimation

Your help is greatly appreciated.

Thanks
Steve

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] points marking

2010-06-12 Thread khush ........
Hi,

Well, thanks for letting me know that pch is of no use with segments, Petr. I
am using lend as it suits me better, as Gregory suggested, but I am not
getting it quite right.  I think I will try to fix it with some other method as
well, as I have to deal more with the symbols in this case. But I want to know
one thing from you guys: is the way I am using the code good enough to
start, as I am not much familiar with this stuff, or is it a dirty way to handle
such a task? Please let me know.

Thanks gregory and petr.

Thank you
Jeet
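[For reference, a minimal sketch, not from the thread, of the vectorised segments()/lend idea Greg describes, using a few coordinates from the script quoted below; it assumes the barplot from that script has already been drawn.]

x0g <- c(164, 45, 160, 277); x1g <- c(192, 138, 255, 378)
yg  <- c(7.8, 15.8, 15.8, 15.8)
segments(x0g, yg, x1g, yg, col = "green", lwd = 20, lend = "butt")    # square-ended boxes
x0b <- c(399, 448, 486); x1b <- c(432, 475, 515)
yb  <- rep(15.8, 3)
segments(x0b, yb, x1b, yb, col = "blue", lwd = 20, lend = "round")    # rounded ends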



On Fri, Jun 11, 2010 at 9:07 PM, Greg Snow  wrote:

>  Those graphs look like chromosome maps, if so, you may want to look into
> the bioconductor project, they may have some prewritten functions to do
> this.  If not, the lend argument (see ?par) may be something to look at.  If
> you really want points and segments you will need to plot the points with
> the points function and the segments separately.  Segments can take vectors,
> so you don’t need to separate things into multiple calls.
>
>
>
> --
>
> Gregory (Greg) L. Snow Ph.D.
>
> Statistical Data Center
>
> Intermountain Healthcare
>
> greg.s...@imail.org
>
> 801.408.8111
>
>
>
> *From:* khush  [mailto:bioinfo.kh...@gmail.com]
> *Sent:* Friday, June 11, 2010 12:00 AM
> *To:* Greg Snow
> *Cc:* r-help@r-project.org
> *Subject:* Re: [R] points marking
>
>
>
> Dear Gregory ,
>
> Thnaks for your reply and help. I am explaining you my problems again,
> below  is my script for the same .
>
> Dom <-c (195,568,559)
>
> fkbp <- barplot (Dom, col="black", xlab="", border = NA, space = 7,
> xlim=c(0,650), ylim =c(0, 87), las = 2, horiz = TRUE)
>
> axis (1, at = seq(0,600,10), las =2)
>
> 1. ==Segments 1=
>
> segments(164,7.8,192,7.8, col = "green", pch=23, cex="9", lty="solid",
> lwd=20)
> segments(45,15.8,138,15.8, col = "green", pch=23, cex="9", lty="solid",
> lwd=20)
> segments(160,15.8,255,15.8, col = "green", pch=23, cex="9", lty="solid",
> lwd=20)
> segments(277,15.8,378,15.8, col = "green", pch=23, cex="9", lty="solid",
> lwd=20)
> segments(51,23.8,145,23.8, col = "green", pch=23, cex="9", lty="solid",
> lwd=20)
> segments(167,23.8,262,23.8, col = "green", pch=23, cex="9", lty="solid",
> lwd=20)
> segments(284,23.8,381,23.8, col = "green", pch=23, cex="9", lty="solid",
> lwd=20)
>
> 2. ==Segments 2 ==
> segments(399,15.8,432,15.8, col = "blue", pch=21, cex="9", lty="solid",
> lwd=20)
> segments(448,15.8,475,15.8, col = "blue", pch=21, cex="9", lty="solid",
> lwd=20)
> segments(486,15.8,515,15.8, col = "blue", pch=21, cex="9", lty="solid",
> lwd=20)
> segments(401,23.8,434,23.8, col = "blue", pch=21, cex="9", lty="solid",
> lwd=20)
> segments(450,23.8,475,23.8, col = "blue", pch=21, cex="9", lty="solid",
> lwd=20)
> segments(486,23.8,517,23.8, col = "blue", pch=21, cex="9", lty="solid",
> lwd=20)
>
> I solved one part of my query i.e to mark points from one positions to
> other is ok and I found that its working fine but I have another issue now,
> as I am using using two segments data 1 and 2 , although I want to draw
> different shapes for segmants 2 as I am giving pch=21, but I it seems to
> give a solid line for both. I want to draw different shapes for every chunk
> of segments i.e is the whole point.
>
> I want to make script which can generate such figures, below is link to one
> of the tool.
> http://www.expasy.ch/tools/mydomains/
>
> Thank you
>
> Jeet
>
>  On Thu, Jun 10, 2010 at 11:10 PM, Greg Snow  wrote:
>
> Your question is not really clear, do either of these examples do what you
> want?
>
>  with(anscombe, plot(x1, y2, ylim=range(y2,y3)) )
>  with(anscombe, points(x1, y3, col='blue', pch=2) )
>  with(anscombe, segments(x1, y2, x1, y3, col=ifelse( y2>y3, 'green','red')
> ) )
>
>
>  with(anscombe, plot(x1, y2, ylim=range(y2,y3), type='n') )
>  with(anscombe[order(anscombe$x1),], polygon( c( x1,rev(x1) ), c(y2,
> rev(y3)), col='grey' ) )
>
>
>
> --
> Gregory (Greg) L. Snow Ph.D.
> Statistical Data Center
> Intermountain Healthcare
> greg.s...@imail.org
> 801.408.8111
>
>
>
> > -Original Message-
> > From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-
> > project.org] On Behalf Of khush 
> > Sent: Thursday, June 10, 2010 7:48 AM
> > To: r-help@r-project.org
> > Subject: [R] points marking
> >
> > Hi,
> >
> > How to  mark points on x axis of a graph keeping x axis as constant and
> > changing y from y1 to y2 respectively. I want to highlight the area
> > from y1
> > to y2.
> >
> > Any suggestions
> >
> > Thank you
> > Jeet
> >
>
> >   [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide http://www.R-project.org/posting-
> > guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>
>
>

[[alternative HTML version deleted]]

Re: [R] Compiling R with multi-threaded BLAS math libraries - why not actually ?

2010-06-12 Thread Tal Galili
Hello Gabor, Matt, Dirk.

Thank you all for clarifying the situation.

So if I understand correctly then:
1) Changing the BLAST would require specific BLAST per computer
configuration (OS/chipset).
2) The advantage would be available only when doing  _lots_ of linear
algebra


So I am left wondering for each item:
1) How do you find a "better" (e.g: more suited) BLAST for your system? (I
am sure there are tutorials for that, but if someone here has
a recommendation on one - it would be nice)
2) In what situations do we use _lots_ of linear algebra?  For example, I
have cases where I performed many linear regressions on a problem, would
that be a case the BLAST engine be effecting?
I am trying to understand if REvolution emphasis on this is a
marketing gimmick, or are they insisting on something that some R users
might wish to take into account.  In which case I would, naturally (for many
reasons), prefer to be able to tweak the native R system instead of needing
to work with REvolution distribution.

Lastly, following on Matt's suggestion, if anyone has a tutorial on the subject,
I'd be more than glad to publish it on r-statistics/r-bloggers.

Thanks again to everyone for the detailed replies.

Best,
Tal




Contact
Details:---
Contact me: tal.gal...@gmail.com |  972-52-7275845
Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
www.r-statistics.com (English)
--




On Sat, Jun 12, 2010 at 6:01 AM, Matt Shotwell  wrote:

> In the case of REvolution R, David mentioned using the Intel MKL,
> proprietary library which may not be distributed in the way R is
> distributed. Maybe REvolution has a license to redistribute the library.
> For the others, I suspect Gabor has the right idea, that the R-core team
> would rather not keep architecture dependent code in the sources,
> although there is a very small amount already (`grep -R __asm__`).
>
> However, I know using Linux (Debian in particular) it is fairly
> straightforward to build R with `enhanced' BLAS libraries. The R
> Administration and Installation manual has a pretty good section on
> linking with enhanced BLAS and LAPACK libs, including the Intel MKL, if
> you are willing cough up $399, or swear not to use the library
> commercially or academically.
>
> Maybe a short tutorial using free software, such as ATLAS would be
> suitable content for an r-bloggers post :) ?
>
> Matt Shotwell
> Graduate Student
> Div. Biostatistics and Epidemiology
> Medical University of South Carolina
>
> On Fri, 2010-06-11 at 19:21 -0400, Tal Galili wrote:
> > Hello all,
> > I came across<
> http://www.r-bloggers.com/performance-benefits-of-linking-r-to-multithreaded-math-libraries/
> >
> > David
> > Smith's new post
> > Performance benefits of linking R to multithreaded math
> > libraries<
> http://blog.revolutionanalytics.com/2010/06/performance-benefits-of-multithreaded-r.html
> >
> > Which explains how (and why) REvolution distribution of R uses
> > different BLAS math libraries for R, so to
> > allow multi-threaded mathematical computation.
> > What the post doesn't explain is why it is that native R distribution
> > doesn't use the multi-threaded version of the libraries.  Is it because
> > R-devel team didn't get to it yet or is it for some technical reason.
> > Could someone please help to explain the situation?
> >
> > Thanks in advance,
> > Tal
> >
> > p.s: I wasn't sure if to send the question here or to R-devel, I decided
> to
> > send it here.  If I am in the wrong - please let me know.
> >
> >
> >
> > Contact
> > Details:---
> > Contact me: tal.gal...@gmail.com |  972-52-7275845
> > Read me: www.talgalili.com (Hebrew) | www.biostatistics.co.il (Hebrew) |
> > www.r-statistics.com (English)
> >
> --
> >
> >   [[alternative HTML version deleted]]
> >
> > __
> > R-help@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-help
> > PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> > and provide commented, minimal, self-contained, reproducible code.
>
>


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] meta analysis with repeated measure-designs?

2010-06-12 Thread Gerrit Hirschfeld
Dear all,

I am trying to run a meta-analysis of psycholinguistic reaction-time
experiments with the meta package. The problem is that most of the studies have
a within-subject design and use repeated-measures ANOVAs to analyze their
data. So at present it seems that there are three non-optimal ways to run the
analysis.

1. Use metacont() to estimate effect sizes and standard errors. But as the
different scores are dependent, this would result in biased estimators (Dunlap,
1996). Suppose I had the correlations between the measures (which I do not): would
there be an option to use them in metacont()?

2. Use metagen() with an effect size that is based on the reported F for the
contrasts, but which has other disadvantages (Bakeman, 2005). The problem I am having
with this is that I could not find a formula to compute the standard error of
partial eta squared. Any ideas?

3. Use metagen() with r computed from p-values (Rosenthal, 1994) as the effect size,
with the problem that sample size affects p as much as effect size does.

Is there a fourth way, or data showing that the correlations can be neglected as
long as they are assumed to be similar across the studies?
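
For concreteness, a minimal sketch of what option 2 might look like with
metagen(), assuming each study reports a 1-df within-subject F and its sample
size; the dz metric, the approximate variance formula, and the numbers are all
illustrative assumptions rather than a recommendation:

## Hedged sketch of option 2: effect sizes from reported 1-df F values,
## pooled with metagen(). The dz metric and the approximate variance
## 1/n + dz^2/(2n) are assumptions to be checked; the data are made up.
library(meta)
dat <- data.frame(
  study = c("Study 1", "Study 2", "Study 3"),
  Fval  = c(12.4, 5.6, 9.1),   # hypothetical reported F for the contrast
  n     = c(24, 18, 30)        # hypothetical numbers of participants
)
dat$dz    <- sqrt(dat$Fval / dat$n)                    # dz = t / sqrt(n), with t = sqrt(F)
dat$se.dz <- sqrt(1 / dat$n + dat$dz^2 / (2 * dat$n))  # approximate standard error
m <- metagen(TE = dz, seTE = se.dz, studlab = study, data = dat, sm = "SMD")
summary(m)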
Any ideas are much appreciated.

best regards
Gerrit

__
Gerrit Hirschfeld, Dipl.-Psych.

Psychologisches Institut II
Westfälische Wilhelms-Universität
Fliednerstr. 21
48149 Münster
Germany

psycholinguistics.uni-muenster.de
GerritHirschfeld.de
Fon.: +49 (0) 251 83-31378
Fon.: +49 (0) 234 7960728
Fax.: +49 (0) 251 83-34104

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Logic with regexps

2010-06-12 Thread Ted Harding
Greetings,
The following question has come up in an off-list discussion.
Is it possible to construct a regular expression 'rex' out of
two given regular expressions 'rex1' and 'rex2', such that a
character string X matches 'rex' if and only if X matches 'rex1'
AND X does not match 'rex2'?

The desired end result can be achieved by logically combining
the results of a grep using 'rex1' with the results of a grep
on 'rex2', illustrated by the following example:

## Given a character vector X (below), and two regular expressions
## rex1="abc", rex2="ijk", to return the elements of X which match
## rex1 AND do not match rex2:
X <- c(
  "abcdefg",   # Yes
  "abchijk",   # No
  "mnopqrs",   # No
  "ijkpqrs",   # No
  "abcpqrs" )  # Yes
rex1 <- "abc"
rex2 <- "ijk"
ix1<- grep(rex1,X)
ix2<- grep(rex2,X)
X[ix1[!(ix1 %in% ix2)]]
## [1] "abcdefg" "abcpqrs"

Question: is there a way to construct 'rex' from 'rex1' and 'rex2'
such that

  X[grep(rex,X)]

would give the same result?

I've not managed to find anything helpful in descriptions of
regular expression syntax, though one feels it should be possible
if the syntax is capable of supporting a logically complete language!
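
For reference, one construction that appears to do the job when
Perl-compatible regular expressions are acceptable is a negative lookahead;
this is a sketch under the assumption that rex1 and rex2 are plain
sub-patterns containing no anchors:

## Combine rex1 and rex2 into a single pattern via a PCRE negative
## lookahead: match rex1 only if rex2 occurs nowhere in the string.
## Assumes perl = TRUE is acceptable and that rex1/rex2 have no anchors.
rex <- sprintf("^(?!.*%s).*%s", rex2, rex1)
X[grep(rex, X, perl = TRUE)]
## [1] "abcdefg" "abcpqrs"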

With thanks,
Ted.


E-Mail: (Ted Harding) 
Fax-to-email: +44 (0)870 094 0861
Date: 12-Jun-10   Time: 10:38:45
-- XFMail --

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] mob (party package) question

2010-06-12 Thread tudor

Dear useRs:

I am trying to use mob() from the party package (thanks Achim and co.!) to fit a
model-based recursive partition to a data set.  The model is a logistic regression
specified with model=glinearModel and family=binomial().  Running mob
results in a few warnings of the type "In glm.fit ... algorithm did not
converge".  As I speculate that this may be due to an insufficient number of
iterations, I am wondering if any of you knows how to pass arguments to
glm.fit from within mob (e.g., epsilon and maxit).  All my attempts to do it
myself have failed.  All suggestions are welcome.
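
For context, a minimal reproducible sketch of the kind of fit described above,
adapted from the usual party example and assuming the mlbench package is
available for the data; whether extra glm.fit arguments (e.g. something like
control = glm.control(maxit = ...)) can be forwarded through mob's "..." is
exactly the open question, so the sketch shows only the basic call:

## Minimal logistic-regression mob() fit (assumes the mlbench package for
## the example data). How to forward glm.control() settings such as
## epsilon and maxit down to glm.fit is the unresolved question above.
library(party)
data("PimaIndiansDiabetes", package = "mlbench")
fm <- mob(diabetes ~ glucose | pregnant + mass + age,
          data = PimaIndiansDiabetes,
          model = glinearModel, family = binomial(),
          control = mob_control(minsplit = 40))
fm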

My system: Windows XP, R2.10.1.  

Thank you.

Tudor
-- 
View this message in context: 
http://r.789695.n4.nabble.com/mob-party-package-question-tp2252500p2252500.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] extended Kalman filter for survival data

2010-06-12 Thread Christos Argyropoulos

If you mean this paper by Fahrmeir:
http://biomet.oxfordjournals.org/cgi/content/abstract/81/2/317
then I would recommend BayesX: http://www.stat.uni-muenchen.de/~bayesx/.
BayesX interfaces with R and fits discrete- (and continuous-) time survival
models with penalized regression methods.
If you are looking for a bona fide Bayesian survival analysis method and do not
wish to spend a lot of time coming up with and debugging your own MCMC implementations
in WinBUGS/JAGS/OpenBUGS, this would be the way to go.
If you are strictly after frequentist analyses, you can still run them with
BayesX (look at the REML chapter in the manual).


Christos Argyropoulos 


> Date: Mon, 3 May 2010 23:18:28 +0200
> From: duta...@gmail.com
> To: r-help@r-project.org
> Subject: [R] extended Kalman filter for survival data
> 
> Dear all,
> 
> I'm looking for an implementation of the generalized extended Kalman filter
> for survival data, presented in this article Fahrmeir (1994) - 'dynamic
> modelling for discrete time survival data'. The same author also published a
> Bayesian version of the algorithm 'dynamic discrete-time duration models'.
> 
> The maintainer of the Survival task view advises me to take a look at
> http://cran.r-project.org/web/packages/sspir/index.html
> Unfortunately, the pkg implements "only" dynamic GLM.
> 
> That's why I'm asking on this list, if someone knows a package for this
> implementation?
> 
> Thanks in advance
> 
> Christophe
> 
> 
> 
> PS: the pseudo vignette of the sspir pkg can be found here
> http://www.jstatsoft.org/v16/i01/paper .
> 
> -- 
> Christophe DUTANG
> Ph. D. student at ISFA
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
  
_
Hotmail: Trusted email with powerful SPAM protection.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] sleep timer resolution on OSX

2010-06-12 Thread ivo welch
I am doing some timing experiments.  To test looping performance, I
used the Sys.sleep function.  I noticed something in the docs that is
just a little misleading:

" The resolution of the time
 interval is system-dependent, but will normally be down to 0.02
 secs or better. (On modern Unix-alikes it will be better than
 1ms.)"

on OSX, which is (almost) a modern Unix-alike, and a very common platform,

> system.time( for (i in 1:100) Sys.sleep(0.001) )
   user  system elapsed
  0.005   0.004   1.020
> system.time( for (i in 1:100) Sys.sleep(0.01) )
   user  system elapsed
  0.005   0.004   1.019

So the resolution seems to be about 0.01 seconds.  Under Linux, similar
timing experiments show that the resolution is under 0.0001 seconds.
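
A slightly more direct way to gauge the granularity (hedged, since the numbers
will vary by machine and OS version) is to sweep a few requested sleep times
and compare them with the measured per-call averages:

## Estimate the effective Sys.sleep() granularity on this platform by
## comparing requested sleep times with measured per-call averages.
requested <- c(0.0001, 0.001, 0.005, 0.01, 0.05)
measured  <- sapply(requested, function(s) {
  system.time(for (i in 1:20) Sys.sleep(s))[["elapsed"]] / 20
})
round(cbind(requested, measured), 4)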

Just wanted to put this into the R archives for Google searches.  (If
I could make changes to the docs, I would note it there instead.)

hope this helps someone else...

iaw


Ivo Welch (ivo.we...@brown.edu, ivo.we...@gmail.com)

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.