[R] xyplot() groups scope issue

2007-08-30 Thread Mike Lawrence
Hi all,

I'm tinkering with a wrapper for xyplot that will help me plot a bunch of
plots in a data analysis I'm doing, and I ran into an odd error that I'm
guessing is a scope issue. Here's a very simple version of the code:


###
#load lattice
library(lattice)

#create the wrapper function
do.xyplot = function( plot.formula, data.frame, plot.groups){

	print(plot.groups) #show me what the wrapper function thinks 'plot.groups' is

	xyplot(
		plot.formula
		,data = data.frame
		,groups = eval(plot.groups)
	)
}

#create some data
mydata = as.data.frame(cbind( x = c(1:4), y = rep(1:2), z = rep(1:2, each = 2) ))

#try to plot the data (fails)
do.xyplot(
	plot.formula = formula(x~y)
	,data.frame = mydata
	,plot.groups = expression(z)
)


#after the error message, try again, this time declaring plot.groups as a
#global variable (succeeds)
plot.groups = expression(z)
do.xyplot(
	plot.formula = formula(x~y)
	,data.frame = mydata
	,plot.groups = expression(z)
)

#

I'm fine with declaring plot.groups before each call to do.xyplot,  
but I'm curious if there's a simpler solution.
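
For reference, one possible workaround (a sketch, untested) is to resolve the
grouping expression within the data frame inside the wrapper and hand xyplot
the resulting vector via do.call, so that lattice's non-standard evaluation of
'groups' never has to find 'plot.groups':

do.xyplot = function( plot.formula, data.frame, plot.groups){
	g = eval(plot.groups[[1]], data.frame) #e.g. resolve 'z' within the data
	do.call(xyplot, list(plot.formula, data = data.frame, groups = g))
}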

Mike

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] logistic regression

2007-07-26 Thread Mike Lawrence
Maybe try making sure the data is numeric:

fac.to.num=function(x) as.numeric(as.character(x))
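
For example, if pcdata$lymdis is the factor that triggers the error (just a
guess based on the error message below), something like this may help:

pcdata$lymdis = fac.to.num(pcdata$lymdis)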


On 26-Jul-07, at 9:34 AM, Sullivan, Mary M wrote:

 Greetings,


 I am working on a logistic regression model in R and I am struggling with
 the code, as it is a relatively new program for me.  In searching Google for
 'logistic regression diagnostics' I came across Elizabeth Brown's Lecture 14
 from her Winter 2004 Biostatistics 515 course
 (http://courses.washington.edu/b515/l14.pdf).  I found most of the code to
 be very helpful, but I am struggling with the lines that calculate the
 observed and expected values in the 10 groups created by the cut function.
 I get error messages in trying to create the E and O matrices:  R won't
 accept the assignment with fitc==j and it won't calculate the sum.



 I am wondering whether someone might be able to offer me some  
 assistance...my search of the archives was not fruitful.



 Here is the code that I adapted from the lecture notes:



 fit <- fitted(glm.lyme)

 fitc <- cut(fit, br = c(0, quantile(fit, p = seq(.1, .9, .1)), 1))

 t <- table(fitc)

 fitc <- cut(fit, br = c(0, quantile(fit, p = seq(.1, .9, .1)), 1), labels = F)

 t <- table(fitc)



 #Calculate observed and expected values in ea group

 E <- matrix(0, nrow=10, ncol=2)
 O <- matrix(0, nrow=10, ncol=2)

 for (j in 1:10) {
   E[j, 2] = sum(fit[fitc==j])
   E[j, 1] = sum((1-fit)[fitc==j])
   O[j, 2] = sum(pcdata$lymdis[fitc==j])
   O[j, 1] = sum((1-pcdata$lymdis)[fitc==j])
 }



 Here is the error message:
 Error in Summary.factor(..., na.rm = na.rm) : sum not meaningful for factors





 I understand what it means; I just can't figure out how to get  
 around it or how to get the output printed in table form.  Thank  
 you in advance for any assistance.



 Mary Sullivan

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] using R for a reaction-time experiment

2007-07-23 Thread Mike Lawrence
Psych grad student here, R user for 3 years. Using R for experiments is
likely not advisable, as it has no fine control of display timing (e.g.
synching stimuli to the screen refresh). On the recommendation of colleagues
I'm learning the Python language, which has a module called 'pygame' that is
fantastic for psych experiments.

Mike


On 22-Jul-07, at 1:38 PM, Seth Roberts wrote:


 I want to use R to run a reaction-time experiment: Something  
 appears on the
 screen, I respond by typing something (one keystroke), the system  
 measures
 the speed of my response. R would be great for this if only I  
 didn't have to
 hit Enter to enter that keystroke. I am doing such experiments now  
 but they
 require two actions per trial: hit keystroke, hit Enter.

 Is there some way that R can be made to respond to a single  
 keystroke (other
 than Enter)?
 -- 
 View this message in context: http://www.nabble.com/using-R-for-a- 
 reaction-time-experiment-tf4125643.html#a11732474
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] (R) Using arguments for the empirical cumulative distribution function

2007-07-20 Thread Mike Lawrence
?plot.ecdf()

On 20-Jul-07, at 6:57 AM, squall44 wrote:


 Is there no one who can help me? :,(
 --  
 View this message in context: http://www.nabble.com/%28R%29-Using- 
 arguments-for-the-empirical-cumulative-distribution-function- 
 tf4111355.html#a11705448
 Sent from the R help mailing list archive at Nabble.com.

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] linear interpolation of multiple random time series

2007-07-19 Thread Mike Lawrence
Hi all,

Looking for tips on how I might more optimally solve this. I have  
time series data (samples from a force sensor) that are not  
guaranteed to be sampled at the same time values across trials. ex.

trial  time  x
1   1   1
1   5   4
1   7   9
1   12  20
2   1   0
2   3   5
2   9   10
2   13  14
2   19  22
2   24  32

Within each trial I'd like to use linear interpolation between each  
successive time sample to fill in intermediary timepoints and x- 
values, ex.

trial  time  x
1   1   1
1   2   1.75
1   3   2.5
1   4   3.25
1   5   4
1   6   6.5
1   7   9
1   8   11.2
1   9   13.4
1   10  15.6
1   11  17.8
1   12  20
2   1   0
2   2   2.5
2   3   5
2   4   5.83
2   5   6.67
2   6   7.5
2   7   8.33
2   8   9.17
2   9   10
2   10  11
2   11  12
2   12  13
2   13  14
2   14  15.3
2   15  16.7
2   16  18
2   17  19.3
2   18  20.7
2   19  22
2   20  24
2   21  26
2   22  28
2   23  30
2   24  32


The solution I've coded (below) involves going through the original  
data frame line by line and is thus very slow (indeed, I had to  
resort to writing to file as with a large data set I started running  
into memory issues if I tried to create the new data frame in  
memory). Any suggestions on a faster way to achieve what I'm trying  
to do?

#assumes the first data frame above is stored as 'a'
arows = (length(a$x)-1)
write('', 'temp.txt')
for(i in 1:arows){
	if(a$time[i+1] > a$time[i]){ #still within the same trial
		write.table(a[i,], 'temp.txt', row.names = F, col.names = F, append = T)
		x1 = a$time[i]
		x2 = a$time[i+1]
		dx = x2-x1
		if(dx != 1){ #fill in the intermediary time points
			y1 = a$x[i]
			y2 = a$x[i+1]
			dy = y2-y1
			slope = dy/dx
			int = -slope*x1+y1
			temp = a[i,]
			for(j in (x1+1):(x2-1)){
				temp$time = j
				temp$x = slope*j+int
				write.table(temp, 'temp.txt', row.names = F, col.names = F, append = T)
			}
		}
	}else{ #last row of this trial; just write it
		write.table(a[i,], 'temp.txt', row.names = F, col.names = F, append = T)
	}
}
#write the very last row of the data frame
i = i+1
write.table(a[i,], 'temp.txt', row.names = F, col.names = F, append = T)

b = read.table('temp.txt', skip=1)
names(b) = names(a)
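
A faster, vectorized alternative (a sketch, not part of my code above) would
be to let approx() do the linear interpolation within each trial at integer
time points, avoiding the explicit row-by-row loop:

b = do.call(rbind, lapply(split(a, a$trial), function(d){
	out = approx(d$time, d$x, xout = seq(min(d$time), max(d$time)))
	data.frame(trial = d$trial[1], time = out$x, x = out$y)
}))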

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] linear interpolation of multiple random time series

2007-07-19 Thread Mike Lawrence
Jim Holtman's solution works great, and I will try the zoo method just
for fun as well.

Thanks to all of you :)

Mike

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How to read many files at one time?

2007-07-14 Thread Mike Lawrence

On 14-Jul-07, at 6:16 PM, Zhang Jian wrote:

 I want to load many files in the R. The names of the files are  
 Sim1.txt, 
 Sim2.txt, Sim3.txt, Sim4.txt, Sim5.txt and so on.
 Can I read them at one time? What should I do? I can give the same  
 names in
 R.
 Thanks.

 For example:
 tst=paste("Sim",1:20,".txt",sep="") # the file names
 tst
  [1] "Sim1.txt"  "Sim2.txt"  "Sim3.txt"  "Sim4.txt"  "Sim5.txt"  "Sim6.txt"
  [7] "Sim7.txt"  "Sim8.txt"  "Sim9.txt"  "Sim10.txt" "Sim11.txt" "Sim12.txt"
 [13] "Sim13.txt" "Sim14.txt" "Sim15.txt" "Sim16.txt" "Sim17.txt" "Sim18.txt"
 [19] "Sim19.txt" "Sim20.txt"

 data.name=paste("Sim",1:20,sep="") # the file names in R
 data.name
  [1] "Sim1"  "Sim2"  "Sim3"  "Sim4"  "Sim5"  "Sim6"  "Sim7"  "Sim8"  "Sim9"
 [10] "Sim10" "Sim11" "Sim12" "Sim13" "Sim14" "Sim15" "Sim16" "Sim17" "Sim18"
 [19] "Sim19" "Sim20"

you could read each data file as a named element into a single list
data=list()
for(i in 1:length(tst)){
	data[[i]]=read.table(tst[i])
	names(data)[i]=data.name[i]
}

if you want to refer to Sim1, type
data$Sim1
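
a more compact equivalent (sketch) using lapply:
data=lapply(tst, read.table)
names(data)=data.name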

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Power calculation with measurement error

2007-06-26 Thread Mike Lawrence
Hi all,

Hopefully this will be quick: I'm looking for pointers to packages/functions
that would allow me to calculate the power of a t.test when the DV has
measurement error. That is, I understand that, ceteris paribus, experiments
using measures with more error (lower reliability) will have lower power.

Mike

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] connecting to running process possible?

2007-06-26 Thread Mike Lawrence
If you get the running process to write its results to file, then you  
could run a loop in R asking it to repeatedly read the file and plot  
the contents. It may work best if you plot to a file like a pdf.
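
A minimal sketch of that idea (the file name, column names, and 5-second
polling interval are just placeholders):

for(i in 1:100){ #poll 100 times
	d = read.csv('results.csv')
	pdf('live_plot.pdf')
	plot(d$x, d$y, type='l')
	dev.off()
	Sys.sleep(5)
}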

On 22-Jun-07, at 1:06 PM, Charles Cosse wrote:

 Hello,

 i'm trying to find a more modern system to reproduce the  
 functionality that
 was available through the Histoscope program (from Fermilab).   
 Namely, the
 capability of connecting to a running process and having plots  
 update in
 realtime in response to new data.  Is this possible with R?  Thank  
 you,

 Charles Cosse

   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] surprising difference in log()

2007-06-26 Thread Mike Lawrence
According to the description of floor(), the latter result is the  
correct one:

'floor takes a single numeric argument x and returns a numeric vector  
containing the largest integers *not greater than* the corresponding  
elements of x.' (emphasis added)

floor(3) == 2
 True


On 26-Jun-07, at 11:09 AM, Fausto Galli wrote:


 Hello everybody

 My collegue and I noticed a strange behaviour of R on different
 platforms. It's a simple computation, but results are rather  
 different.

 On Windows XP:

 floor(log(8,2))
 [1] 3

 which is what one should expect.
 Here's instead the result with Mac OS X (same version, 2.5.0
 (2007-04-23))

 floor(log(8,2))
 [1] 2

 Is it a bug in R or in the operating system?
 Anyway, it's quite a surprising one.





 _
 Fausto Galli
 Institute of Finance
 University of Lugano
 Via G. Buffi 13
 CH-6904 Lugano, Switzerland.
 +41 (0)58 666 4497
 http://www.people.lu.unisi.ch/gallif

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Power calculation with measurement error

2007-06-26 Thread Mike Lawrence
Thanks Greg, I've actually been programming precisely what you  
suggest since sending the request this morning (though your email was  
indeed helpful; I've never seen 'replicate()' and will see if it's  
faster than a loop).

However, I was hoping that an analytic solution was extant and  
implemented somewhere.


On 26-Jun-07, at 12:18 PM, Greg Snow wrote:

 I don't know of a current package that does this (others may), but if
 you know what you expect your data to look like you can simulate it  
 and
 calculate power that way.

 Basically, write a function that will simulate data with the level of
 measurement error that you expect in the real data (or have the amount
 of measurement error passed in as a parameter so you can examine the
 effect of different values).  Then have the function compute the t
 test
 (or other test that you plan to do) and return the p-value from the
 test.

 Then you can simulate the process with a command like:

 out1 <- replicate( 1000, myfunction(n=25, err=.1, diff=.5) )

 And compute the power with:

 mean( out1 < 0.05 ) # or whatever alpha value you want.

 Hope this helps,

 -- 
 Gregory (Greg) L. Snow Ph.D.
 Statistical Data Center
 Intermountain Healthcare
 [EMAIL PROTECTED]
 (801) 408-8111



 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Mike Lawrence
 Sent: Tuesday, June 26, 2007 5:13 AM
 To: Rhelp
 Subject: [R] Power calculation with measurement error

 Hi all,

 Hopefully this will be quick, I'm looking for pointers to
 packages/ functions that would allow me to calculate the
 power of a t.test when the DV has measurement error. That is,
 I understand that, ceteris paribus, experiments using measure
 with more error (lower
 reliability) will have lower power.

 Mike

 --
 Mike Lawrence
 Graduate Student, Department of Psychology, Dalhousie University

 Website: http://memetic.ca

 Public calendar: http://icalx.com/public/informavore/Public

 The road to wisdom? Well, it's plain and simple to express:
 Err and err and err again, but less and less and less.
  - Piet Hein

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] surprising difference in log()

2007-06-26 Thread Mike Lawrence
Sorry, my mistake. I should really leave looking at the help list  
until AFTER my morning coffee :P

On 26-Jun-07, at 11:31 AM, Duncan Murdoch wrote:

 On 6/26/2007 10:20 AM, Mike Lawrence wrote:
 According to the description of floor(), the latter result is the
 correct one:

 'floor takes a single numeric argument x and returns a numeric vector
 containing the largest integers *not greater than* the corresponding
 elements of x.' (emphasis added)

 floor(3) == 2
 True

 3 is not greater than 3, but it is greater than 2, so the result you
 quote above is wrong.  You should see

 floor(3)
   [1] 3

 floor(3) == 2
 [1] FALSE

 Do you really see the result you posted?

 Duncan Murdoch

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Power calculation with measurement error

2007-06-26 Thread Mike Lawrence
On 26-Jun-07, at 8:12 AM, Mike Lawrence wrote:
 Hi all,
 Hopefully this will be quick, I'm looking for pointers to packages/
 functions that would allow me to calculate the power of a t.test when
 the DV has measurement error. That is, I understand that, ceteris
 paribus, experiments using measure with more error (lower
 reliability) will have lower power.

I came across a reference (http://memetic.ca/reliability.pdf) that  
provides a formula for calculating the noncentrality parameter for  
tests using imperfect measures (see Eq. 4), as well as a table of  
some resulting power estimates. However, while I have created a (very  
slow) monte carlo function that so far as I can tell matches their  
results, when I attempt to implement their analytic solution it's way  
off. Can anyone see what I'm doing incorrectly?

n=100
r=.5 #reliability
e=.5 #effect size
delta=(sqrt(r*n)/2)*e
power.t.test(n,delta,sig.level=.05,alternative='one.sided')

     Two-sample t test power calculation

              n = 100
          delta = 1.767767
             sd = 1
      sig.level = 0.05
          power = 1
    alternative = one.sided

NOTE: n is number in *each* group


Meanwhile, their tables and my monte carlo method say that the power  
in that circumstance should be .7

Help?

Mike

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Power calculation with measurement error

2007-06-26 Thread Mike Lawrence

On 26-Jun-07, at 2:36 PM, Mike Lawrence wrote:
 On 26-Jun-07, at 8:12 AM, Mike Lawrence wrote:
 Hi all,
 Hopefully this will be quick, I'm looking for pointers to packages/
 functions that would allow me to calculate the power of a t.test when
 the DV has measurement error. That is, I understand that, ceteris
 paribus, experiments using measure with more error (lower
 reliability) will have lower power.

 I came across a reference (http://memetic.ca/reliability.pdf) that
 provides a formula for calculating the noncentrality parameter for
 tests using imperfect measures (see Eq. 4), as well as a table of
 some resulting power estimates. However, while I have created a (very
 slow) monte carlo function that so far as I can tell matches their
 results, when I attempt to implement their analytic solution it's way
 off. Can anyone see what I'm doing incorrectly?

 n=100
 r=.5 #reliability
 e=.5 #effect size
 delta=(sqrt(r*n)/2)*e
 power.t.test(n,delta,sig.level=.05,alternative='one.sided')

   Two-sample t test power calculation

n = 100
delta = 1.767767
   sd = 1
sig.level = 0.05
power = 1
  alternative = one.sided

 NOTE: n is number in *each* group


 Meanwhile, their tables and my monte carlo method say that the power
 in that circumstance should be .7


Found it; I was using power.t.test without being thorough in reading  
its details. Sorry for the spam, and for anyone that's interested,  
here's the final analytic solution:

#get power for a t.test, incorporating measurement error.
#n = total number of participants across your 2 groups
#r = estimated reliability of the measure used
#e = measured effect size
get.power=function(n,r,e,tails=2,alpha=.05){
	d=(sqrt(r*n)/2)*e
	a=1-ifelse(tails==2,alpha/2,alpha)
	p=1-pt(qt(a,n-2),n-2,d)
	return(p)
}
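
#example call (hypothetical values): 100 participants per group (n=200 total),
#reliability .5, effect size .5, one-sided test
get.power(n=200, r=.5, e=.5, tails=1)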

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] how to create cumulative histogram from two independent variables?

2007-06-21 Thread Mike Lawrence

?plot.ecdf

plot.ecdf(rnorm(100),rexp(100))

On 20-Jun-07, at 5:02 PM, Jose Borreguero wrote:

 Hi all,
 I am extremely newbie to R. Can anybody jump-start me with any  
 clues as to
 how do I get a cumulative histogram from two independent variables,
 cumhist(X,Y) ?
 -jose

   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://memetic.ca

Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Fwd: How to set degrees of freedom in cor.test?

2007-06-16 Thread Mike Lawrence

You could calculate the confidence interval of the correlation at  
your desired df: http://davidmlane.com/hyperstat/B8544.html

The code below takes as arguments the observed correlation, N, and alpha,
calculates the confidence interval, and returns TRUE if that interval
excludes 0 (i.e. if the correlation is significant at that alpha).

cor.test2=function(r,n,a=.05){
	phi=function(x){
		log((1+x)/(1-x))/2
	}
	inv.phi=function(x){
		(exp(2*x)-1)/(exp(2*x)+1)
	}

	r.prime=phi(r)
	err=qnorm(1-(a/2))/sqrt(n-3)
	lims=c(inv.phi(r.prime-err),inv.phi(r.prime+err))
	sig=ifelse(xor(all(0<lims),all(0>lims)),T,F)
	return(sig)
}
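
For example (hypothetical values), to test an observed r of .45 as though it
came from a sample of 20:

cor.test2(r=.45, n=20)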

 On 14-Jun-07, at 5:40 AM, Florence Dufour wrote:


 Hello,

 I want to compute a correlation test but I do not want to use the
 degrees of freedom that are calculated by default but I want to set a
 particular number of degrees of freedom.
 I looked in the manual and at various other functions, but I did not find
 how to do it.

 Thanks in advance for your answers

 Yours




 Florence Dufour
 PhD Student
 AZTI Tecnalia - Spain

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

 --
 Mike Lawrence
 Graduate Student, Department of Psychology, Dalhousie University

 Website: http://myweb.dal.ca/mc973993
 Public calendar: http://icalx.com/public/informavore/Public

 The road to wisdom? Well, it's plain and simple to express:
 Err and err and err again, but less and less and less.
   - Piet Hein


--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] question about formula for lm

2007-06-16 Thread Mike Lawrence
Yet another solution:

with(X,lm(get(Ytext)~Xvar))

On 14-Jun-07, at 5:18 PM, Greg Snow wrote:


 Try:

 lm( formula( paste( Ytext, '~ Xvar' ) ), data=X)

 --  
 Gregory (Greg) L. Snow Ph.D.
 Statistical Data Center
 Intermountain Healthcare
 [EMAIL PROTECTED]
 (801) 408-8111



 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Pedro Mardones
 Sent: Thursday, June 14, 2007 1:14 PM
 To: R-help@stat.math.ethz.ch
 Subject: [R] question about formula for lm

 Dear all;

 Is there any way to make this to work?:

 .x <- rnorm(50,10,3)
 .y <- .x+rnorm(50,0,1)

 X <- data.frame(.x,.y)
 colnames(X) <- c("Xvar","Yvar")

 Ytext <- "Yvar"

 lm(Ytext~Xvar,data=X) # doesn't run

 lm(Yvar~Xvar,data=X) # does run

 The main idea is to use Ytext as input in a function, so you
 just type Yvar and the model should fit
 Also, I need to avoid the expression X$Yvar~X$Xvar

 Thanks for any idea

 PM

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] simultaneous computing

2007-06-11 Thread Mike Lawrence
There's the Rmpi package; a tutorial is available here:
http://ace.acadiau.ca/math/ACMMaC/Rmpi/index.html


On 11-Jun-07, at 9:11 AM, Markus Schmidberger wrote:

 Hello,

 which possibilities are available in R for simultaneous or parallel
 computing?
 I only could find biopara
 (http://cran.r-project.org/src/contrib/Descriptions/biopara.html)

 Are there other possibilities?
 Are there special groups working on simultaneous computing with R?

 Thanks
 Markus

 --  
 Dipl.-Tech. Math. Markus Schmidberger

 Ludwig-Maximilians-Universität München
 IBE - Institut für medizinische Informationsverarbeitung,
 Biometrie und Epidemiologie

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Aggregate to find majority level of a factor

2007-05-31 Thread Mike Lawrence
This should do the trick. Also labels ties with NA.

a=as.data.frame(cbind(c(1,1,1,2,2,2,3,3,3,4,4),
	c('big','big','small','big','small','small','small','small','small','big','small')))
a$V2=factor(a$V2)

maj=function(x){
	y=table(x)
	z=which.max(y)
	if(sum(y==max(y))==1){
		return(names(y)[z])
	}else{
		return(NA)
	}
}

aggregate(a$V2,list(a$V1),maj)


On 31-May-07, at 4:25 PM, Thompson, Jonathan wrote:

 I want to use the aggregate function to summarize data by a factor (my
 field plots), but I want the summary to be the majority level of  
 another
 factor.


 For example, given the dataframe:

 Plot1 big
 Plot1 big
 Plot1 small
 Plot2 big
 Plot2 small
 Plot2 small
 Plot3 small
 Plot3 small
 Plot3 small


 My desired result would be:
 Plot1 big
 Plot2 small
 Plot3 small


 I can't seem to find a scalar function that will give me the majority
 level.

 Thanks in advance,

 Jonathan Thompson

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] test to compare significant correlation increase

2007-05-30 Thread Mike Lawrence
Just a guess (please correct me if I'm way off on this), but maybe you
could look at the difference in r between 1 and 2 and see if the
confidence interval (http://davidmlane.com/hyperstat/B8544.html) for
this value, given your sample size, includes 0.

On 30-May-07, at 11:24 AM, David Riano wrote:

 Hi!
 I am calculating correlation between two variables:
 1. X versus Y
 2. X versus Y(with a 3 steps lag)

 I would like to test if the correlation
 increase/decrease from 1 to 2 is significant or not.

 Is there any function in R to do this? any hints?

 Thanks for help :)

 David Riaño
 Center for Spatial Technologies and Remote Sensing (CSTARS)
 University of California
 250-N, The Barn
 One Shields Avenue
 Davis, CA 95616-8527
 USA
 1-(517) 629-5499
 http://www.cstars.ucdavis.edu/~driano/index.html
 http://www.cstars.ucdavis.edu

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] opinions please: text editors and reporting/Sweave?

2007-05-30 Thread Mike Lawrence
One minor warning regarding LaTeX: I have encountered journals in the  
psychological field (specifically, journals of the Psychonomic
Society) that refuse to accept articles prepared in LaTeX, even if
they are submitted as PDF. I'm a big LaTeX fan myself, so I really  
can't comprehend this.

On 30-May-07, at 6:07 PM, Greg Snow wrote:

 Tim,

 First, I personnally am a big fan of LaTeX, Emacs, and ESS and I think
 that in the long run you would benefit from learning all of them
 (probably start with Emacs, then ESS, then LaTeX once you already  
 have a
 knowledge of Emacs and how it can help).

 Since you asked about the simplest way to go, you may want to look at
 the odfWeave package.  This gives you the power of Sweave, but without
 having to learn LaTeX.  It works with documents in openoffice (a free
 office suite similar to and mostly compatible with microsoft office
 (word)).  Using this you can create your template using openoffice
 writer (or MS word, then convert using openoffice), run it through
 R/odfWeave, and have the result as another openoffice document that  
 can
 then be converted to MS word or pdf.

 Personally, if I am doing something for myself, or in which the output
 format does not matter, then I use Sweave with LaTeX (using Emacs and
 ESS).  But, often my results need to be sent to a client that will cut
 and past my results into an MS word document or power point
 presentation.  Then I find it easier to use openoffice and odfWeave  
 and
 have the end result be an MS word document that can be e-mailed to the
 client for them to mangle in ways they feel the need to.

 (there is also an HTML driver for Sweave in the R2HTML package
 where
 you can have the template file resemble html and the final output  
 is in
 html)

 Hope this helps,

 -- 
 Gregory (Greg) L. Snow Ph.D.
 Statistical Data Center
 Intermountain Healthcare
 [EMAIL PROTECTED]
 (801) 408-8111



 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Tim Howard
 Sent: Wednesday, May 30, 2007 2:43 PM
 To: r-help@stat.math.ethz.ch
 Subject: [R] opinions please: text editors and reporting/Sweave?

 dear all -

 I currently use Tinn-R as my text editor to work with code
 that I submit to R, with some output dumped to text files,
 some images dumped to pdf. (system: Windows 2K and XP, R
 2.4.1 and R 2.5). We are using R for overnight runs to create
 large output data files for GIS, but then I need simple
 output reports for analysis results for each separate data
 set. Thus, I create many reports of the same style, but just
 based on different input data.

 I am recognizing that I need a better reporting system, so
 that I can create clean reports for each separate R run. This
 obviously means using Sweave and some implementation of
 LaTex, both of which are new to me. I've installed MikTex and
 successfully completed a demo or two for creating pdfs from raw  
 LaTeX.

 It appears that if I want to ease my entry into the world of
 LaTeX, I might need to switch editors to something like Emacs
 (I read somewhere that Emacs helps with the TeX markup?).
 After quite a while wallowing at the Emacs site, I am finding
 that ESS is well integrated with R and might be the way to
 go. Aaaagh... I'm in way over my head!

 My questions:

 What, in your opinion, is the simplest way to integrate text
 and graphics reports into a single report such as a pdf file.

 If the answer to this is LaTeX and Sweave, is it difficult to
 use a text editor such as Tinn-R or would you strongly
 recommend I leave behind Tinn and move over to an editor that
 has more LaTeX help?

 In reading over Friedrich Leisch's Sweave User Manual (v
 1.6.0) I am beginning to think I can do everything I need
 with my simple editor. Before spending many hours going down
 that path, I thought it prudent to ask the R community.

 It is likely I am misunderstanding some of the process here
 and any clarifications are welcome.

 Thank you in advance for any thoughts.
 Tim Howard

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch

Re: [R] polygon error?

2007-05-26 Thread Mike Lawrence
polygon expects to be given the vertices, so if you want the area  
under the curve you'll want to include vertices at density=0

z <- pretty(c(-3,3), 100)
ht <- dnorm(z)
plot(z, ht, type="l")

zc <- 1.645
ht <- ht[z>zc]
z <- z[z>zc]

ht <- c(0,ht,0)
z <- c(z[1],z,z[length(z)])

polygon(z,ht,col='red')


On 26-May-07, at 7:34 AM, LL wrote:

 Hi.. I'm not sure why polygon returns an area above the standard
 normal curve.

 z <- pretty(c(-3,3), 100)
 ht <- dnorm(z)
 data <- data.frame(z=z, ht=ht)
 zc <- 1.645
 plot(data, type="l")
 lines(data)
 t <- subset(data, z>zc)
 polygon(t, col="red")

 Thanks,
 Lance


   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Calculation of ratio distribution properties

2007-05-25 Thread Mike Lawrence

MathCad code failed to attach last time. Here it is.


On 25-May-07, at 2:24 PM, Mike Lawrence wrote:


I came across this reference:
http://www.informaworld.com/smpp/content?content=10.1080/03610920600683689


The authors sent me code (attached with permission) in MathCad to  
perform the calculations in which I'm interested. However, I do not  
have MathCad nor experience with its syntax, so I thought I'd send  
the code to the list to see if anyone with more experience with R  
and MathCad would be interested in making this code into a function  
or package of some sort


Mike

Begin forwarded message:


From: Noyan Turkkan [EMAIL PROTECTED]
Date: May 25, 2007 11:09:02 AM ADT
To: Mike Lawrence [EMAIL PROTECTED]
Subject: Re: R code for 'Density of the Ratio of Two Normal Random Variables'?


Hi Mike
I do not know if anyone coded my approach in R. However, if you have
access to MathCad, I am including a MathCad file (also in PDF) which
computes the density of the ratio of 2 dependent normal variables, its
mean & variance. If you do not have access to MathCad, you will see that
all the computations can be easily programmed in R, as I replaced the
Hypergeometric function by the Erf function. I am not very familiar with
R but the erf function may be programmed as:
erf <- function(x) 2*pnorm(x*sqrt(2)) - 1. Good luck.


Noyan Turkkan, ing.
Professeur titulaire  directeur / Professor  Head
Dépt. de génie civil / Civil Eng. Dept.
Faculté d'ingénierie / Faculty of Engineering
Université de Moncton
Moncton, N.B., Canada, E1A 3E9



 Mike Lawrence [EMAIL PROTECTED] 05/25/07 9:20 am 
Hi Dr. Turkkan,

I am working on a problem that necessitates the estimation of the
mean and variance of the ratio of two dependent normal random
variables, and in my search for methods to achieve such estimation I
came across your paper 'Density of the Ratio of Two Normal Random
Variables' (2006). I'm not a statistics or math expert by any means,
but I am quite familiar with the R programming language; do you
happen to know whether anyone has coded your approach for R yet?

Cheers,

Mike

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

Ratio2depNV.pdf


--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting- 
guide.html

and provide commented, minimal, self-contained, reproducible code.


--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein


__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Calculation of ratio distribution properties

2007-05-25 Thread Mike Lawrence
According to the paper I cited, there is controversy over the  
sufficiency of Hinkley's solution, hence their proposed more complete  
solution.

On 25-May-07, at 2:45 PM, Lucke, Joseph F wrote:

 The exact ratio is given in

 On the Ratio of Two Correlated Normal Random Variables, D. V.  
 Hinkley, Biometrika, Vol. 56, No. 3. (Dec., 1969), pp. 635-639.


--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Calculation of ratio distribution properties

2007-05-24 Thread Mike Lawrence
Hi all,

Looking to calculate the expected mean and variance of a ratio  
distribution where the source distributions are gaussian with known  
parameters and sample values are correlated. I see (from wikipedia:  
http://en.wikipedia.org/wiki/Ratio_distribution#Gaussian_ratio_distribution)
that this calculation
is quite involved, so I'm hoping that someone has already coded a  
function to achieve this.

Thanks,

Mike

--
Mike Lawrence
Graduate Student, Department of Psychology, Dalhousie University

Website: http://myweb.dal.ca/mc973993
Public calendar: http://icalx.com/public/informavore/Public

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Catenating factors.

2007-05-23 Thread Mike Lawrence
It may be poor form, but whenever I deal with factors in this way I  
find as.character and as.numeric useful

ex.

#if your factor levels are letters
z=factor(unique(c(as.character(x),as.character(y))))

#if your factor levels are numbers
z=factor(unique(c(as.numeric(as.character(x)),as.numeric(as.character(y)))))
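
If you also want to keep every element (a true catenation whose levels are
the union of the original levels) rather than just the unique values,
dropping unique() should work, e.g.:

z=factor(c(as.character(x),as.character(y)))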


On 22-May-07, at 11:56 PM, Marc Schwartz wrote:

 On Wed, 2007-05-23 at 13:38 +1200, Rolf Turner wrote:

 I was recently asked by one of new colleagues how to combine, or  
 catentate,
 or concatentate two factors.  If x and y are factors, doing c(x,y)  
 appears
 to coerce x and y to numeric mode before catenating them.  So what  
 does one
 do if one wants the result to be a factor whose levels are the  
 union of the
 levels of x and y?  I vaguely recall seeing this discussed in r- 
 help, but I
 can't find anything on it there, nor in the FAQ.

 Am I missing something obvious?

 Rolf,

 I think that this thread from last November on R-devel may be helpful:

 http://tolstoy.newcastle.edu.au/R/e2/devel/06/11/1130.html

 HTH,

 Marc Schwartz

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Dalhousie University Department of Psychology
http://myweb.dal.ca/mc973993

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Parallel processes

2007-05-22 Thread Mike Lawrence
You can find a decent tutorial on using RMPI here:
http://ace.acadiau.ca/math/ACMMaC/Rmpi/index.html

On 22-May-07, at 3:46 PM, Kuhn, Max wrote:

 Erin,

 There is a snow package (note the case) and also a few others.

 Rlsf, is specific to grids/clusters that use the LSF queue system.

 More generally, the nws package is more sophisticated and should  
 work on
 most systems. Also, there is Rmpi (which I haven't used).

 Max


 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of hodgess
 Sent: Tuesday, May 22, 2007 1:01 PM
 To: [EMAIL PROTECTED]
 Subject: [R] Parallel processes


 Dear R People:

  I was wondering if there were any packages for parallel  
 programming in
 R.

  According to the R-help, at one time there was a package called SNOW.
 It doesn't seem
  to exist anymore.

  Any help would be much appreciated!

  Sincerely,
  Erin Hodgess
  mailto: [EMAIL PROTECTED]


   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

 --
 LEGAL NOTICE\ Unless expressly stated otherwise, this messag... 
 {{dropped}}

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Dalhousie University Department of Psychology
http://myweb.dal.ca/mc973993

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sequentially process a list

2007-05-21 Thread Mike Lawrence
Try nested ifelse() statements to label the group

ex.
time$group=ifelse(time$time<5,NA,
	ifelse(time$time<9,1,
		ifelse(time$time<13,NA,
			ifelse(time$time<17,2,NA)
		)
	)
)

Then use aggregate to find the max value.

ex.
time.max=aggregate(time$value,list(group=time$group),max)
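
A more general sketch (not tied to the particular time cutoffs above) uses
rle() to label each run of values at or above the threshold:

above = time$value >= 17
r = rle(above)
time$group = rep(ifelse(r$values, cumsum(r$values), NA), r$lengths)
aggregate(time$value, list(group=time$group), max)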


On 21-May-07, at 6:39 AM, Robert wrote:

 Hi dear R users,

 I'm a R beginner and I have a basic question about sequential  
 treatments of lists.

 I have a time based (i.e. events are consecutive) list of values of  
 a biological property.

 Like :

 time  value
 1     5
 2     10
 3     7
 4     10
 5     19
 6     21
 7     20
 8     18
 9     10
 10    7
 11    8
 12    12
 13    17
 14    19
 15    24
 16    18
 17    15
 18    10
 19    9
 [...]


 And I have to define a threshold and to attach each event to his  
 group, i.e. values upper the threshold.

 Like, for a threshold value of 17

 time  value  group
 1     5      NA
 2     10     NA
 3     7      NA
 4     10     NA
 5     19     1
 6     21     1
 7     20     1
 8     18     1
 9     10     NA
 10    7      NA
 11    8      NA
 12    12     NA
 13    17     2
 14    19     2
 15    24     2
 16    18     2
 17    15     NA
 18    10     NA
 19    9      NA
 [...]


 The only solution that I have found is to do a sequential read and write:
 for(i in 1:length(my_events_list))
 {
 [...]
 }

 But it is very slow. Do you have other ideas?

 And afterwards I need to extract the maximum value for each group,
 like:
 group  max_value
 1      21
 2      24
 [...]

 and, for each event which is part of a group, to know whether it is in an
 ascending phase or not.


 Yes, lot of questions !! Sorry, but I think that the solution may  
 be unique.

 In advance, thank you a lot

 regards

 JS








 __ 
 ___

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Dalhousie University Department of Psychology
http://myweb.dal.ca/mc973993

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] sequentially process a list

2007-05-21 Thread Mike Lawrence
Sorry, obviously misread your help request. Jim's solution is what  
you're looking for.

On 21-May-07, at 12:56 PM, Mike Lawrence wrote:

 Try nested ifelse() statements to label the group

 ex.
 time$group=ifelse(time$time<5,NA,
   ifelse(time$time<9,1,
   ifelse(time$time<13,NA,
   ifelse(time$time<17,2,NA)
   )
   )
 )

 Then use aggregate to find the max value.

 ex.
 time.max=aggregate(time$value,list(group=time$group),max)


 On 21-May-07, at 6:39 AM, Robert wrote:

 Hi dear R users,

 I'm a R beginner and I have a basic question about sequential
 treatments of lists.

 I have a time based (i.e. events are consecutive) list of values of
 a biological property.

 Like :

 time  value
 1     5
 2     10
 3     7
 4     10
 5     19
 6     21
 7     20
 8     18
 9     10
 10    7
 11    8
 12    12
 13    17
 14    19
 15    24
 16    18
 17    15
 18    10
 19    9
 [...]


 And I have to define a threshold and to attach each event to his
 group, i.e. values upper the threshold.

 Like, for a threshold value of 17

 time  value  group
 1     5      NA
 2     10     NA
 3     7      NA
 4     10     NA
 5     19     1
 6     21     1
 7     20     1
 8     18     1
 9     10     NA
 10    7      NA
 11    8      NA
 12    12     NA
 13    17     2
 14    19     2
 15    24     2
 16    18     2
 17    15     NA
 18    10     NA
 19    9      NA
 [...]


 The only solution that I have found is to do a sequentially read
 and write :
 for(i in 1:length(my_events_list))
 {
 [...]
 }

 But I very slow. Do you have another ideas ?

 And after I need to extract maximum values for each group
 Like :
 group  max_value
 1      21
 2      24
 [...]

 and for each event which is part of a group to know if is't a
 ascending phase or no.


 Yes, lot of questions !! Sorry, but I think that the solution may
 be unique.

 In advance, thank you a lot

 regards

 JS








 _ 
 _
 ___

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

 --
 Mike Lawrence
 Graduate Student, Dalhousie University Department of Psychology
 http://myweb.dal.ca/mc973993

 The road to wisdom? Well, it's plain and simple to express:
 Err and err and err again, but less and less and less.
 - Piet Hein

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting- 
 guide.html
 and provide commented, minimal, self-contained, reproducible code.

--
Mike Lawrence
Graduate Student, Dalhousie University Department of Psychology
http://myweb.dal.ca/mc973993

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How does glm(family='binomial') deal with perfect sucess?

2007-03-20 Thread Mike Lawrence
Hi all,

Trying to understand the logistic regression performed by glm (i.e. when 
family='binomial'), and I'm curious to know how it treats perfect 
success. That is, lets say I have the following summary data

x=c(1,2,3,4,5,6)
y=c(0,.04,.26,.76,.94,1)
w=c(100,100,100,100,100,100)

where y is the probability of success at each value of x,
calculated across w observations. When I use glm

my.glm.obj=glm(y~x,family='binomial',weights=w)

the regression comes out fine, but if I try what I understand to be the 
equivalent lm procedure (i.e. fitting a straight line to the logit 
transformed y values):

my.lm.obj=lm(qlogis(y)~x,weights=w)

I get an error because, of course, logit(1) = log(1/0) = log(Inf) = Inf
(similarly, logit(0) = log(0/1) = log(0) = -Inf).

I'd be very interested to see how glm deals with these extremes.

Cheers,

Mike


-- 
Mike Lawrence
http://artsweb.uwaterloo.ca/~m4lawren

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How to calculate area between ECDF and CDF?

2006-12-04 Thread Mike Lawrence
Hi all,

I'm working with data to which I'm fitting three-parameter weibull 
distributions (shape, scale & shift). The data are of low sample sizes 
(between 10 and 80 observations), so I'm reluctant to check my fits 
using chi-square (also, I'd like to avoid bin choice issues). I'd use 
the Kolmogorov-Smirnov test, but of course this is invalid when the 
distribution parameters are estimated from the data.

So I'm tinkering with an alternative method (excuse my naiveté if this 
is a bad idea, I'm a relative statistical novice) that calculates the 
area of the difference between the ECDF of the data and the CDF of the 
estimated function (somewhat like KS, which looks at the greatest 
distance between these). My thought is to compare this observed area to 
a distribution of simulated areas derived by monte carlo simulation 
(draw N random samples from the estimated function, calculating area, 
and repeat 1e5 times). If the observed area is greater than say 95% of 
the simulated areas, then I'd reject the fit.

My problem is that I can't figure out how to efficiently calculate the 
area between the ECDF and CDF functions. I can of course calculate the 
integral of each easily, and if one were consistently larger than the 
other simple subtraction of the integrals would yield the area between. 
However, when the functions cross, as frequently occurs, the solution 
seems much more complex. Any suggestions? Since as noted above I'll be 
doing the area calculation 1e5 times or so per test, a computationally 
frugal solution would be much appreciated!

Here's some code that I've been toying with:

#set up some true parameters
shape=2
scale=.5
shift=.3
n=10

#generate some observed data
obs=rweibull(10,shape,scale)+shift

#lets say that the following are the estimated parameters from whatever 
estimation process I'm using
est.shape=1.9
est.scale=.6
est.shift=.35

#Calculate area between ECDF and CDF of the function defined by the
#estimated parameters
# ???
#The following would work if the ECDF were consistently higher or lower
#than the CDF

#Get the CDF area between 0 and some large number (here, 10 is pretty
#large); integrate() returns a list, so keep just the $value element
cdf.area=integrate(pweibull,0,10,shape=est.shape,scale=est.scale)$value

#Get the ECDF area.
#first get rid of the shift in obs
obs=obs-est.shift
#calculate area by multiplying cumulative proportions by distance
#between knots, then summing
#add knot at 10 to match cdf
k=c(knots(ecdf(obs)),10)
ecdf.area=vector("numeric",n)
for(i in 1:n){
ecdf.area[i]=(k[i+1]-k[i])*(sum(obs<=k[i])/n)
}
ecdf.area=sum(ecdf.area)

#again, subtraction of the areas works if the ecdf is consistently lower
#than the cdf
diff=cdf.area-ecdf.area
#or consistently higher than the cdf
diff=ecdf.area-cdf.area
#but how to calculate when the functions cross?
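
#One possible exact answer, a sketch not from the original post: because the
#CDF is monotone, it can cross each constant ECDF step at most once, so each
#inter-knot interval can be split at qweibull(height, ...) and the absolute
#pieces summed. Assumes all shifted observations fall inside (0, upper).
area.ecdf.cdf <- function(obs, shape, scale, shift, upper = 10){
obs <- sort(obs - shift)                #work on the shifted scale
n <- length(obs)
k <- c(0, obs, upper)                   #interval endpoints
h <- c(0, seq_len(n)/n)                 #ECDF height on each interval
cdf.int <- function(a, b) integrate(pweibull, a, b, shape = shape, scale = scale)$value
total <- 0
for(i in seq_along(h)){
a <- k[i]; b <- k[i+1]; c0 <- h[i]
x.cross <- qweibull(c0, shape, scale)   #where the CDF equals the ECDF height
if(x.cross <= a || x.cross >= b){
#no crossing inside (a,b), so the signed areas cannot cancel
total <- total + abs(cdf.int(a, b) - c0*(b - a))
}else{
total <- total + abs(cdf.int(a, x.cross) - c0*(x.cross - a)) +
abs(cdf.int(x.cross, b) - c0*(b - x.cross))
}
}
total
}
#e.g. area.ecdf.cdf(obs, est.shape, est.scale, est.shift), with obs on the
#original (unshifted) scale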


Cheers,

Mike

-- 
Mike Lawrence
http://artsweb.uwaterloo.ca/~m4lawren

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] nnet() fit criteria

2006-12-03 Thread Mike Lawrence
Hi all,

I'm using nnet() for non-linear regression as in Ch8.10 of MASS. I 
understand that nnet() by default optimizes least squares. I'm looking 
to have it instead optimize such that the mean error is zero (so that it 
is unbiased). Any suggestions on how this might be achieved?
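
As far as I know, nnet() does not let you swap in an arbitrary loss for regression, so one workaround (a sketch; the formula and the data-frame names training.data and new.data are placeholders) is to keep the least-squares fit and then shift its predictions so the mean training error is exactly zero:

library(nnet)
#usual least-squares fit
fit <- nnet(y ~ x, data = training.data, size = 5, linout = TRUE, trace = FALSE)
#constant shift that makes the mean training error zero
bias.correction <- mean(training.data$y - predict(fit, training.data))
unbiased.pred <- predict(fit, new.data) + bias.correction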

Cheers,

Mike

-- 
Mike Lawrence
http://artsweb.uwaterloo.ca/~m4lawren

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Translating R code + library into Fortran?

2006-09-11 Thread Mike Lawrence
,plot=F)
if(length(temp$counts[temp$counts>0])>10){
ok=T
}
}
#train nn.scale with error checking
ok=F
while(ok==F){

nn2.scale=nnet(exp.scale~min+q1+med+mean+q3+max+samples,data=training.data,size=8,linout=T,rang=1e-08,maxit=500,trace=F)
cor.scale=predict(nn2.scale,training.data[,c(1:7)],type="raw")
temp=hist(cor.scale,plot=F)
if(length(temp$counts[temp$counts>0])>10){
ok=T
}
}
#train nn.shape with error checking
ok=F
while(ok==F){

nn2.shape=nnet(exp.shape~min+q1+med+mean+q3+max+samples,data=training.data,size=8,linout=T,rang=1e-08,maxit=500,trace=F)
cor.shape=predict(nn2.shape,training.data[,c(1:7)],type="raw")
temp=hist(cor.shape,plot=F)
if(length(temp$counts[temp$counts>0])>10){
ok=T
}
}
#estimation iteration 2
cor.shift=predict(nn2.shift,obs.stats,type="raw")
cor.scale=predict(nn2.scale,obs.stats,type="raw")
cor.shape=predict(nn2.shape,obs.stats,type="raw")
#record error
ind.shift.err[,i]=cor.shift-stadler$exp.shift
ind.scale.err[,i]=cor.scale-stadler$exp.scale
ind.shape.err[,i]=cor.shape-stadler$exp.shape
group.shift.err[i]=mean(cor.shift)-mean(stadler$exp.shift)
group.scale.err[i]=mean(cor.scale)-mean(stadler$exp.scale)
group.shape.err[i]=mean(cor.shape)-mean(stadler$exp.shape)
}

results=as.data.frame(rbind(cbind(sd(c(ind.shift.err[,1:162])),sd(c(ind.scale.err[,1:162])),sd(c(ind.shape.err[,1:162]))),cbind(sd(group.shift.err[1:162]),sd(group.scale.err[1:162]),sd(group.shape.err[1:162]))))
results

-- 
Mike Lawrence
http://arts.uwaterloo.ca/~m4lawren

The road to wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] error bars in lattice xyplot *with groups*

2006-06-05 Thread Mike Lawrence
Hi all,

I'm trying to plot error bars in a lattice plot generated with xyplot. Deepayan
Sarkar has provided a very useful solution for simple circumstances
(https://stat.ethz.ch/pipermail/r-help/2005-October/081571.html), yet I am
having trouble getting it to work when the groups setting is enabled in
xyplot (i.e. multiple lines). To illustrate this, consider the singer data
generated by the above linked solution previously submitted:

#
library(lattice)
singer.split <-
with(singer,
 split(height, voice.part))

singer.ucl <-
sapply(singer.split,
   function(x) {
   st <- boxplot.stats(x)
   c(st$stats[3], st$conf)
   })

singer.ucl <- as.data.frame(t(singer.ucl))
names(singer.ucl) <- c("median", "lower", "upper")
singer.ucl$voice.part <-
factor(rownames(singer.ucl),
   levels = rownames(singer.ucl))

#now let's split up the voice.part factor into two factors,
singer.ucl$voice=factor(rep(c(1,2),4))
singer.ucl$range=factor(rep(c("Bass","Tenor","Alto","Soprano"),each=2))

#here's Deepayan's previous solution, slightly modified to depict
#  the dependent variable (median) and the error bars on the y-axis
#  and the independent variable (voice.part) on the x-axis
prepanel.ci <- function(x, y, ly, uy, subscripts, ...)
{
x <- as.numeric(x)
ly <- as.numeric(ly[subscripts])
uy <- as.numeric(uy[subscripts])
list(ylim = range(y, uy, ly, finite = TRUE))
}
panel.ci <- function(x, y, ly, uy, subscripts, pch = 16, ...)
{
x <- as.numeric(x)
y <- as.numeric(y)
ly <- as.numeric(ly[subscripts])
uy <- as.numeric(uy[subscripts])
panel.arrows(x, ly, x, uy, col = "black",
 length = 0.25, unit = "native",
 angle = 90, code = 3)
panel.xyplot(x, y, pch = pch, ...)
}


#this graph works
xyplot(median ~ voice.part,
data=singer.ucl,
ly = singer.ucl$lower,
uy = singer.ucl$upper,
prepanel = prepanel.ci,
panel = panel.ci,
type="b"
)

#this one does not (it will plot, but will not seperate the groups)
xyplot(median ~ voice,
groups=range,
data=singer.ucl,
ly = singer.ucl$lower,
uy = singer.ucl$upper,
prepanel = prepanel.ci,
panel = panel.ci,
type="b"
)



Any suggestions?
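
One possible fix, sketched from my understanding of how panel.superpose hands each group subset (with its own subscripts and colours) to a panel.groups function, so it may need tweaking for the lattice version in use: let panel.superpose do the group splitting and draw the arrows inside panel.groups, reusing prepanel.ci from above.

panel.ci.groups <- function(x, y, ly, uy, subscripts, groups, ...)
{
    panel.superpose(x, y, subscripts = subscripts, groups = groups,
        panel.groups = function(x, y, subscripts, ly, uy, col.symbol, ...) {
            #subscripts here index only the rows of the current group
            panel.arrows(x, ly[subscripts], x, uy[subscripts],
                         col = col.symbol, length = 0.25, unit = "native",
                         angle = 90, code = 3)
            panel.xyplot(x, y, col.symbol = col.symbol, ...)
        },
        ly = ly, uy = uy, ...)
}

xyplot(median ~ voice,
    groups = range,
    data = singer.ucl,
    ly = singer.ucl$lower,
    uy = singer.ucl$upper,
    prepanel = prepanel.ci,
    panel = panel.ci.groups,
    type = "b"
)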

-- 

Mike Lawrence

[EMAIL PROTECTED]

The road to Wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] Save output

2005-10-03 Thread Mike Lawrence
I think this is what you're looking for:

file_name_root = "myfile"

for(i in 1:10){

result <- of_some_function  #placeholder for whatever is calculated on this loop

out_file <- paste(file_name_root, i, ".txt", sep = "")

write.table(result, file = out_file)

}

the above code produces 10 files (myfile1.txt to myfile10.txt), each
containing whatever was calculated on that loop. If you have loops 
within loops
and you want to write during the innermost loop, remember to take this into
account like so:

file_name_root = "myfile"

for (i in 1:10){

for(q in 1:10){

result <- of_some_function  #placeholder for whatever is calculated on this loop

out_file <- paste(file_name_root, i, q, ".txt", sep = "")

write.table(result, file = out_file)

}

}

That way, when i iterates and when q resets to 1, you don't write 
over your
previous output.
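
If zero-padded .Rdata files are wanted, as in the original question (file01.Rdata, file02.Rdata, ...), a small variation (just a sketch) is to build the names with sprintf() and write them with save():

for(i in 1:10){
result <- of_some_function  #placeholder for whatever is calculated on this loop
save(result, file = sprintf("file%02d.Rdata", i))  #%02d zero-pads to two digits
}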


Quoting Frank Schmid [EMAIL PROTECTED]:

 Dear R-Mastermind


 Within a while or a for-loop, is there a way that I can save to disk the
 results of the previous calculations at the end of each loop with filenames
 called file01.Rdata, file02.Rdata etc?

 So far, I have tried to write the outcome of each loop in a 3 dimensional
 array and saved it just once after the loop. Or is there another way so that
 I can keep the resulting matrix of every step in the loop?

 Thank you very much for your help


 Frank Schmid

   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html




-- 

Mike Lawrence, BA(Hons)
Research Assistant to Dr. Gail Eskes
Dalhousie University  QEII Health Sciences Centre (Psychiatry)

[EMAIL PROTECTED]

The road to Wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] What is Mandel's (Fitting) Test?

2005-10-02 Thread Mike Lawrence
Hello everyone,

A little background first:
I have collected psychophysical data from 12 participants, and each
participant's data is represented as a scatter plot (Perceived roughness versus
Physical roughness). I would like to know whether, on average, this data is
best fit by a linear function or by a quadratic function. (we have a priori
reasons to expect a quadratic)

Some of my colleagues have suggested the following methods of testing this:
1. For each participant, calculate the r-square values for linear and quadratic
fits, z-transform the resulting values. Collect these z-transformed scores and
then perform a dependent t-test across participants. If significant, then a
quadratic fits better.
2. For each participant, calculate the amount of variance left over from the
linear fit that is accounted for by the quadratic fit. Perform a one-sample
t-test to see if this population of scores differs from zero
3. Same as #2, but z-transform before performing the t-test.

However, I'm sure that these tests fail to take into account the fact that a
quadratic function will generally have an advantage over a linear function
simply by dint of having more terms to play with. So I've been looking for a
test that takes this advantage into account and I came across something called
the Mandel Test. It is available in the quantchem package, but the manual
contains a very meagre description of its details (assumptions, etc.).
Furthermore, besides biology/chemistry papers that reference it in passing,
I've been able to find only one reference online that addresses its use
(http://www.econ.kuleuven.be/public/ndbae06/PDF-FILES/vanloco.doc), but even
then it lacks specificity.

So the question is, what is the Mandel Test? What are the assumptions and
limitations of the test? Does it sound appropriate for my purposes (if not, how
about the other tests suggested above)? How does it differ from the Lack-of-Fit
test?
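
One note that may help (my own understanding, so treat it as a sketch rather than a definitive answer): Mandel's fitting test amounts to a nested-model F test of whether the quadratic term is warranted, which already charges the quadratic fit for its extra parameter. In R that comparison can be run directly, e.g. for one participant (the data-frame and column names below are placeholders):

fit.lin  <- lm(perceived ~ physical, data = one.subject)
fit.quad <- lm(perceived ~ physical + I(physical^2), data = one.subject)
anova(fit.lin, fit.quad)  #F test for the extra (quadratic) term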

Any help would be greatly appreciated.

Cheers,

Mike

-- 

Mike Lawrence, BA(Hons)
Research Assistant to Dr. Gail Eskes
Dalhousie University  QEII Health Sciences Centre (Psychiatry)

[EMAIL PROTECTED]

The road to Wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] xyplot auto.key issue

2005-07-31 Thread Mike Lawrence
Hi Deepayan,

Thanks for the reply, but when I enter the type = "b" line into the auto.key
command (see below) I get the following message:

Error in valid.pch(x$pch) : zero-length 'pch'

Any suggestions?

xyplot(
#basic settings
bias ~ sample_size | measure,
data = bias,
groups = exp_tau,
type = "b",
pch = c(1,2,3,4,5),
xlab = "Sample Size",
ylab = "Bias (ms)",
#make strips transparent
strip = function(bg, ...) strip.default(bg = 'transparent', ...),
# tweak scales
scales=list(
x=list(
at = c(20, 40, 60),
tck = c(1,0),
alternating = F
),
y=list(
at = c(-50, -25, 0, 25, 50),
tck = c(1,0),
alternating = F
)
),
# tell key to match symbols to those used in the plot
par.settings = list(
superpose.symbol = list(
cex = .8,
pch = c(1,2,3,4,5)
)
),
# key settings
auto.key = list (
type = "b",
lines = T,
border = T,
cex.title = 1.2,
title = "Expected Tau",
text = c("30 ms", "80 ms", "130 ms", "180 ms", "230 ms"),
space = "right"
)
)


Quoting Deepayan Sarkar [EMAIL PROTECTED]:

 On 7/30/05, Mike Lawrence [EMAIL PROTECTED] wrote:
 Quick correction:

 The lines lines = T,  type = b in the par.settings section should
 not
 be there. They are remnants of my previous (failed) attempts at solving the
 problem. Below is the correct code:

 xyplot(
  #basic settings
  bias ~ sample_size | measure,
  data = bias,
  groups = exp_tau,
  type = b,
  pch = c(1,2,3,4,5),
  #make strips transparent
  strip = function(bg, ...) strip.default(bg = 'transparent', ...),
  # tweak scales
  scales=list(
  x=list(
  at = c(20, 40, 60),
  tck = c(1,0),
  alternating = F
  ),
  y=list(
  at = c(-50, -25, 0, 25, 50),
  tck = c(1,0),
  alternating = F
  )
  ),
  # tell key to match symbols to those used in the plot
  par.settings = list(
  superpose.symbol = list(
  cex = .8,
  pch = c(1,2,3,4,5)
  )
  ),
  # key settings
  auto.key = list (
  lines = T,
  size = 7,

 You seem to be missing a 'type="b"' somewhere here. The type="b"
 argument to xyplot is actually handled by the panel function. The key
 has type="l" by default (see under 'key' in ?xyplot) and has to be
 changed explicitly.

  border = T,
  cex.title = 1.2,
  title = Expected Tau,
  text = c(30 ms, 80 ms, 130 ms, 180 ms, 230 ms),
  space = right,
  )
 )



 Quoting Mike Lawrence [EMAIL PROTECTED]:

  Hi all,
 
  I'm having a problem with the auto.key function in xyplot. I hate to
  bother the
  list like this and I'm positive I must be missing something very simple,
 yet
  I've spent the last day searching for a solution to no avail.
 
  Essentially, I want a key that contains entries in which the plot points
 are
  superimposed on a line of the same color as the points, like this:
 o--o--o
  Now, given the presence of the default divide command, I assume this is
  simple; indeed, I get the impression that this representation is
  supposed to be
  produced automatically. Yet I can't seem to get it to work!
 
  Now, I've incorporated various other tweaks to my xyplot function, so I'm
  wondering if these tweaks are somehow hindering my efforts. The function
 is
  pasted below; I am making a 3x3 plot, each panel contains 5 lines and it
 is
  these lines that I want represented in the key. See the comments for
  descriptions of the modifications.
 
  Any help would be greatly appreciated.
 
  Cheers,
 
  Mike
 
 
  xyplot(
 #basic settings
 bias ~ sample_size | measure,
 data = bias,
 groups = exp_tau,
 type = b,
 pch = c(1,2,3,4,5),
 #make strips transparent
 strip = function(bg, ...) strip.default(bg = 'transparent', ...),
 # tweak scales
 scales=list(
 x=list(
 at = c(20, 40, 60),
 tck = c(1,0),
 alternating = F
 ),
 y=list(
 at = c(-50, -25, 0, 25, 50),
 tck = c(1,0),
 alternating = F
 )
 ),
 # tell key to match symbols to those used in the plot
 par.settings = list(
 superpose.symbol = list(
 cex = .8,
 pch = c

Re: [R] xyplot auto.key issue

2005-07-31 Thread Mike Lawrence
Hi again,

Deepayan, I tried adding a pch = c(1,2,3,4,5) line in the auto key but the
zero-length 'pch' error still occurs.

Sundar, I tried your code using key instead of auto.key, and after tweaking it
to fit my design (i.e. from "x~y, groups = a" to "x~y | a, groups = b") it
works perfectly! Thanks!

Here's the code that finally worked (that is, the modifications to Sundar's
code, which might be clearer for others to follow than my own):

library(lattice)
set.seed(1)

z <- expand.grid(x=1:10, b = LETTERS[1:5], a = LETTERS[10:18])
z$y <- rnorm(nrow(z))

trellis.par.set(theme = col.whitebg())
par.line <- trellis.par.get("superpose.line")
par.symb <- trellis.par.get("superpose.symbol")
n <- seq(nlevels(z$b))

my.key <- list(
space = "right",
border = TRUE,
cex.title = 1.2,
title = "My Key",
size = 7,
lines = list(
pch = par.symb$pch[n],
lty = par.line$lty[n],
col = par.line$col[n],
type = "b"
),
text = list(
levels(z$b)
)
)

xyplot(
y ~ x | a,
data = z,
groups = b,
pch = par.symb$pch[n],
type = "b",
key = my.key
)










Quoting Sundar Dorai-Raj [EMAIL PROTECTED]:

 Hi, Mike,

 Mike Lawrence wrote:
 Hi Deepayan,

 Thanks for the reply, but when I enter the  type = b  code into 
 the auto.key
 (see below) command I get the following message:

 Error in valid.pch(x$pch) : zero-length 'pch'

 Any suggestions?


 Why not just ignore auto.key and use key? Personally, I use auto.key 
 only when I want the defaults. If I want something more customized, 
 then I use key. As in,

 library(lattice)
 set.seed(1)
 z <- expand.grid(x=1:10, g = LETTERS[1:5])
 z$y <- rnorm(nrow(z))
 trellis.par.set(theme = col.whitebg())
 par.line <- trellis.par.get("superpose.line")
 par.symb <- trellis.par.get("superpose.symbol")
 n <- seq(nlevels(z$g))
 my.key <- list(space = "right",
    border = TRUE,
    cex.title = 1.2,
    title = "My Key",
    size = 7,
    lines = list(pch = par.symb$pch[n],
       lty = par.line$lty[n],
       col = par.line$col[n],
       type = "b"),
    text = list(levels(z$g)))
 xyplot(y ~ x, z, groups = g,
    pch = par.symb$pch[n], type = "b",
    key = my.key)

 xyplot(
  #basic settings
  bias ~ sample_size | measure,
  data = bias,
  groups = exp_tau,
  type = b,
  pch = c(1,2,3,4,5),
  xlab = Sample Size,
  ylab = Bias (ms),
  #make strips transparent
  strip = function(bg, ...) strip.default(bg = 'transparent', ...),
  # tweak scales
  scales=list(
  x=list(
  at = c(20, 40, 60),
  tck = c(1,0),
  alternating = F
  ),
  y=list(
  at = c(-50, -25, 0, 25, 50),
  tck = c(1,0),
  alternating = F
  )
  ),
  # tell key to match symbols to those used in the plot
  par.settings = list(
  superpose.symbol = list(
  cex = .8,
  pch = c(1,2,3,4,5)
  )
  ),
  # key settings
  auto.key = list (
  type = b,
  lines = T,
  border = T,
  cex.title = 1.2,
  title = Expected Tau,
  text = c(30 ms, 80 ms, 130 ms, 180 ms, 230 ms),
  space = right
  )
 )


 Quoting Deepayan Sarkar [EMAIL PROTECTED]:


 On 7/30/05, Mike Lawrence [EMAIL PROTECTED] wrote:

 Quick correction:

 The lines lines = T,  type = b in the par.settings section should
 not
 be there. They are remnants of my previous (failed) attempts at 
 solving the
 problem. Below is the correct code:

 xyplot(
#basic settings
bias ~ sample_size | measure,
data = bias,
groups = exp_tau,
type = b,
pch = c(1,2,3,4,5),
#make strips transparent
strip = function(bg, ...) strip.default(bg = 'transparent', ...),
# tweak scales
scales=list(
x=list(
at = c(20, 40, 60),
tck = c(1,0),
alternating = F
),
y=list(
at = c(-50, -25, 0, 25, 50),
tck = c(1,0),
alternating = F
)
),
# tell key to match symbols to those used in the plot
par.settings = list(
superpose.symbol = list(
cex = .8,
pch = c(1,2,3,4,5)
)
),
# key settings
auto.key = list (
lines = T,
size = 7,

 You seem to be missing a 'type=b' somewhere here. The type=b
 argument to xyplot is actually handled by the panel function. The key
 has type=l by default (see under 'key' in ?xyplot) and has to be
 changed explicitly.


border = T

Re: [R] How do you increase memory?

2005-07-31 Thread Mike Lawrence
memory.limit(size = x)

where x is the desired memory limit in MB.
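
For the specific request of roughly a 30% increase, a small sketch (Windows only, where memory.limit() with no arguments reports the current limit in MB):

memory.limit(size = memory.limit() * 1.3)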



Quoting Briggs, Meredith M [EMAIL PROTECTED]:

 Hello


 Function memory.size() =435109888. How do I increase it by, say 30%?

 Thanks
 Meredith

   [[alternative HTML version deleted]]

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html




-- 

Mike Lawrence, BA(Hons)
Research Assistant to Dr. Gail Eskes
Dalhousie University  QEII Health Sciences Centre (Psychiatry)

[EMAIL PROTECTED]

The road to Wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


[R] xyplot auto.key issue

2005-07-30 Thread Mike Lawrence
Hi all,

I'm having a problem with the auto.key function in xyplot. I hate to bother the
list like this and I'm positive I must be missing something very simple, yet
I've spent the last day searching for a solution to no avail.

Essentially, I want a key that contains entries in which the plot points are
superimposed on a line of the same color as the points, like this: o--o--o
Now, given the presence of the default divide command, I assume this is
simple; indeed, I get the impression that this representation is supposed to be
produced automatically. Yet I can't seem to get it to work!

Now, I've incorporated various other tweaks to my xyplot function, so I'm
wondering if these tweaks are somehow hindering my efforts. The function is
pasted below; I am making a 3x3 plot, each panel contains 5 lines and it is
these lines that I want represented in the key. See the comments for
descriptions of the modifications.

Any help would be greatly appreciated.

Cheers,

Mike


xyplot(
#basic settings
bias ~ sample_size | measure,
data = bias,
groups = exp_tau,
type = "b",
pch = c(1,2,3,4,5),
#make strips transparent
strip = function(bg, ...) strip.default(bg = 'transparent', ...),
# tweak scales
scales=list(
x=list(
at = c(20, 40, 60),
tck = c(1,0),
alternating = F
),
y=list(
at = c(-50, -25, 0, 25, 50),
tck = c(1,0),
alternating = F
)
),
# tell key to match symbols to those used in the plot
par.settings = list(
superpose.symbol = list(
cex = .8,
pch = c(1,2,3,4,5)
),
lines = T,
type = "b"
),
# key settings
auto.key = list (
lines = T,
size = 7,
border = T,
cex.title = 1.2,
title = "Expected Tau",
text = c("30 ms", "80 ms", "130 ms", "180 ms", "230 ms"),
space = "right"
)
)



-- 

Mike Lawrence, BA(Hons)
Research Assistant to Dr. Gail Eskes
Dalhousie University  QEII Health Sciences Centre (Psychiatry)

[EMAIL PROTECTED]

The road to Wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html


Re: [R] xyplot auto.key issue

2005-07-30 Thread Mike Lawrence
Quick correction:

The lines 'lines = T' and 'type = "b"' in the par.settings section should not
be there. They are remnants of my previous (failed) attempts at solving the
problem. Below is the correct code:

xyplot(
#basic settings
bias ~ sample_size | measure,
data = bias,
groups = exp_tau,
type = "b",
pch = c(1,2,3,4,5),
#make strips transparent
strip = function(bg, ...) strip.default(bg = 'transparent', ...),
# tweak scales
scales=list(
x=list(
at = c(20, 40, 60),
tck = c(1,0),
alternating = F
),
y=list(
at = c(-50, -25, 0, 25, 50),
tck = c(1,0),
alternating = F
)
),
# tell key to match symbols to those used in the plot
par.settings = list(
superpose.symbol = list(
cex = .8,
pch = c(1,2,3,4,5)
)
),
# key settings
auto.key = list (
lines = T,
size = 7,
border = T,
cex.title = 1.2,
title = "Expected Tau",
text = c("30 ms", "80 ms", "130 ms", "180 ms", "230 ms"),
space = "right"
)
)



Quoting Mike Lawrence [EMAIL PROTECTED]:

 Hi all,

 I'm having a problem with the auto.key function in xyplot. I hate to 
 bother the
 list like this and I'm positive I must be missing something very simple, yet
 I've spent the last day searching for a solution to no avail.

 Essentially, I want a key that contains entries in which the plot points are
 superimposed on a line of the same color as the points, like this: o--o--o
 Now, given the presence of the default divide command, I assume this is
 simple; indeed, I get the impression that this representation is 
 supposed to be
 produced automatically. Yet I can't seem to get it to work!

 Now, I've incorporated various other tweaks to my xyplot function, so I'm
 wondering if these tweaks are somehow hindering my efforts. The function is
 pasted below; I am making a 3x3 plot, each panel contains 5 lines and it is
 these lines that I want represented in the key. See the comments for
 descriptions of the modifications.

 Any help would be greatly appreciated.

 Cheers,

 Mike


 xyplot(
   #basic settings
   bias ~ sample_size | measure,
   data = bias,
   groups = exp_tau,
   type = b,
   pch = c(1,2,3,4,5),
   #make strips transparent
   strip = function(bg, ...) strip.default(bg = 'transparent', ...),
   # tweak scales
   scales=list(
   x=list(
   at = c(20, 40, 60),
   tck = c(1,0),
   alternating = F
   ),
   y=list(
   at = c(-50, -25, 0, 25, 50),
   tck = c(1,0),
   alternating = F
   )
   ),
   # tell key to match symbols to those used in the plot
   par.settings = list(
   superpose.symbol = list(
   cex = .8,
   pch = c(1,2,3,4,5)
   ),
   lines = T,
   type = b
   ),
   # key settings
   auto.key = list (
   lines = T,
   size = 7,
   border = T,
   cex.title = 1.2,
   title = Expected Tau,
   text = c(30 ms, 80 ms, 130 ms, 180 ms, 230 ms),
   space = right,
   )
 )



 --

 Mike Lawrence, BA(Hons)
 Research Assistant to Dr. Gail Eskes
 Dalhousie University  QEII Health Sciences Centre (Psychiatry)

 [EMAIL PROTECTED]

 The road to Wisdom? Well, it's plain and simple to express:
 Err and err and err again, but less and less and less.
 - Piet Hein

 __
 R-help@stat.math.ethz.ch mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html




-- 

Mike Lawrence, BA(Hons)
Research Assistant to Dr. Gail Eskes
Dalhousie University  QEII Health Sciences Centre (Psychiatry)

[EMAIL PROTECTED]

The road to Wisdom? Well, it's plain and simple to express:
Err and err and err again, but less and less and less.
- Piet Hein

__
R-help@stat.math.ethz.ch mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide! http://www.R-project.org/posting-guide.html