[R] Generating random integers

2009-04-12 Thread skayis selcuk

   Dear R users,

   I need to generate random integer(s) in a range (say, between 1 and
   100) in R.

   Any help is deeply appreciated.

   Kind Regards

   Seyit Ali
   
   Dr. Seyit Ali KAYIS
   Selcuk University, Faculty of Agriculture
   Kampus/Konya, Turkey
   s_a_ka...@yahoo.com, s_a_ka...@hotmail.com
   Tel: +90 332 223 2830 Mobile: +90 535 587 1139
   Greetings from Konya, Turkey
   http://www.ziraat.selcuk.edu.tr/skayis/
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Generating random integers

2009-04-12 Thread Daniel Nordlund
 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of skayis selcuk
 Sent: Saturday, April 11, 2009 11:24 PM
 To: r-help@r-project.org
 Subject: [R] Generating random integers
 
 
Dear R users,
 
I need to generate random integer(s) in a range (say, between 1 and
100) in R.
 
Any help is deeply appreciated.
 
Kind Regards
 
Seyit Ali


Look at

?sample
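
For example (a minimal sketch, drawing ten integers between 1 and 100
with replacement):

sample(1:100, 10, replace = TRUE)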

Hope this is helpful,

Dan

Daniel Nordlund
Bothell, WA USA

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] How may I add is.outer method

2009-04-12 Thread Grześ

Hello 
I have a problem like this:

 Glass$RI[is.outlier(Glass$RI)]
Error: could not find function is.outlier

Which command do I need, or which package should I load, to make is.outlier available in my session?


-- 
View this message in context: 
http://www.nabble.com/How-may-I-add-%22is.outer%22-method-tp23006216p23006216.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Labeling points on plot on relative warp scores?

2009-04-12 Thread Jim Lemon

stephalope wrote:

Hi there,

I am plotting relative warp scores (equivalent to pca scores) and I want to
label (color code and shape) the points by group. I can't figure out how to
do this beyond simple plotting. 


plot(RW1, RW2);

Do I need to make vectors of each group and then plot them separately onto
the same plot? How do I go about this?
  

Hi stephalope,
I've done this with boxed.labels in the plotrix package for factor 
analyses or PCAs when there aren't many variables. I use strong 
background colors for the box and white text for the variable names. You 
may have to shift some labels apart if they overlap, but it gives an 
easy to understand illustration of simple factor structures or PCAs.
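
For the colour/shape coding by group that the question asks about, a
minimal base-graphics sketch (group, RW1 and RW2 are placeholders for
the real data) might be:

group <- factor(rep(c("A", "B", "C"), each = 10))   # placeholder grouping
RW1 <- rnorm(30); RW2 <- rnorm(30)                  # placeholder scores
plot(RW1, RW2, col = as.integer(group), pch = as.integer(group))
legend("topright", legend = levels(group),
       col = seq_along(levels(group)), pch = seq_along(levels(group)))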


Jim

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Fedora 10 KDE plasma font rendering issue

2009-04-12 Thread Paul Bivand
I checked with my KDE 4.2 setup (Mandriva 2009 system, kde.org binaries)
and saw no font rendering issues when running demo(graphics).

It is quite possible that pango was not installed with Fedora (it is
unnecessary for KDE 4.2 systems). Installing the relevant rpm may
fix things.

The packaging of the Fedora R 2.8.1 rpm may need to take this into
account; if you were compiling from source, you would have discovered
this at the configure stage.

Paul Bivand


2009/4/1 Martyn Plummer plum...@iarc.fr:
 On Tue, 2009-03-31 at 18:36 -0700, dfermin wrote:
 Nope. I checked this. Both those fonts are installed.

 Well it is some kind of font rendering problem. The default device is
 the Cairo X11 device, which uses the Pango layout engine for font
 rendering.

 If you set the environment variable FC_DEBUG to 1 before launching your
 R session, you will get some debugging information. It is very verbose,
 but we only need to see this bit:

 First font Pattern has 15 elts (size 15)
        family: Nimbus Sans L(w)
        style: Regular(w)
        slant: 0(i)(w)
        weight: 80(i)(w)
        width: 100(i)(w)
        foundry: urw(w)
        file: /usr/share/fonts/default/Type1/n019003l.pfb(w)
        index: 0(i)(w)
        outline: FcTrue(w)
        scalable: FcTrue(w)




 Martyn Plummer-2 wrote:
 
  Quoting dfermin dfer...@umich.edu:
 
  Hello.
 
  I've got a new workstation running Fedora 10 linux and I use the KDE 4.2
  desktop which uses some kind of new desktop environment called 'plasma'.
 
  If I start up R and generate a plot (for example: hist(rnorm(1,
  mean=0,
  sd=1), breaks=100) ). The plot appears but all text (the x/y axes, title,
  etc..) is replaced by a square box. No font is rendered at all.
 
  Has anyone else got this problem? If so do you have a work around or a
  solution?
 
  I'm using R version 2.8.1 installed from the Fedora 10 repositories if
  that
  helps.
 
  Thanks in advance.
 
  It sounds like you are missing some fonts. Check that the urw-fonts and
  liberation-fonts RPMs are installed.


 ---
 This message and its attachments are strictly confidenti...{{dropped:8}}

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Generating random integers

2009-04-12 Thread jim holtman
floor(runif(1000, 1,101))

On Sun, Apr 12, 2009 at 2:24 AM, skayis selcuk ska...@selcuk.edu.tr wrote:

   Dear R users,

   I need to generate random integer(s) in a range (say that beetween 1 to
   100) in R.

   Any help is deeply appreciated.

   Kind Regards

   Seyit Ali
   
   Dr. Seyit Ali KAYIS
   Selcuk University, Faculty of Agriculture
   Kampus/Konya, Turkey
   s_a_ka...@yahoo.com, s_a_ka...@hotmail.com
   Tel: +90 332 223 2830 Mobile: +90 535 587 1139
   Greetings from Konya, Turkey
   http://www.ziraat.selcuk.edu.tr/skayis/

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.





-- 
Jim Holtman
Cincinnati, OH
+1 513 646 9390

What is the problem that you are trying to solve?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] How may I add is.outer method

2009-04-12 Thread andrew
I am not quite sure how you define your outlier, but the definition
that I am familiar with is that an outlier is greater than 1.5*Inter
quartile range above the 3rd quartile or below the 1st quartile (this
is the default for the whiskers in boxplot).  This can be easily found
by

  x[(x < quantile(x)[2] - 1.5*IQR(x)) | (x > quantile(x)[4] + 1.5*IQR(x))]

or if you prefer

  is.outlier <- function(x) {
    (x < quantile(x)[2] - 1.5*IQR(x)) | (x > quantile(x)[4] + 1.5*IQR(x))
  }
  x[is.outlier(x)]
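
An illustrative use of is.outlier (the added values 8 and -7 are only
there to make the flagging visible):

  set.seed(42)
  x <- c(rnorm(50), 8, -7)   # two obvious outliers appended
  x[is.outlier(x)]           # should return 8 and -7 (plus any other extreme draws)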

HTH.

Andrew.


On Apr 12, 8:51 am, Grześ gregori...@gmail.com wrote:
 Hello
 I have a problem like this:

  Glass$RI[is.outlier(Glass$RI)]

 Error: could not find function is.outlier

 Which command do I need, or which package should I load, to make is.outlier available in my session?

 --
 View this message in 
 context:http://www.nabble.com/How-may-I-add-%22is.outer%22-method-tp23006216p...
 Sent from the R help mailing list archive at Nabble.com.

 __
 r-h...@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help with postscript (huge file size)

2009-04-12 Thread Talita Perciano
Sorry about some mistakes in the code. The correct one is:

 library(rimage)
 image
size:  458 x 372
type:  rgb
 laplacian_result <- normalize(laplacian(image))
 postscript("laplacian_result.eps")
 plot.imagematrix(laplacian_result)
 dev.off()

Talita

2009/4/11 Talita Perciano talitaperci...@gmail.com

 Ok... I'm using the rimage package to manipulate an image. So, the image I
 have in R is of the type imagematrix, which is a matrix with the pixel
 values of the R, G anf B bands. What I'm doing is applying some operation
 (like laplacian filter for example) and plotting the result as an image:

  library(rimage)
  image
 size:  458 x 372
 type:  rgb
  laplacian_result <- normalize(laplacian(image))
  postscript("laplacian_result")
  plot.imagematrix(laplacian)
  dev.off()


 Talita

 2009/4/11 Ben Bolker bol...@ufl.edu


  Do you mean you're importing jpegs or other bitmaps into
 R and writing them out (possibly with annotation etc.) as
 PostScript?
  Can you give a small example of some sort?  It would
 help for giving advice.




 Talita Perciano wrote:
  Thank you for the answer. Just to clear things out, I'm generating plots
 of rgb images.
 
  Talita
 
  2009/4/11 Ben Bolker bol...@ufl.edu
 
 
  Talita Perciano wrote:
  Dear users,
 
  I'm generating some images in R to put into a document that I'm
 producing
  using Latex. This document in Latex is following a predefined model,
 which
  does not accept compilation with pdflatex, so I have to compile with
 latex
   -> dvi -> pdf. Because of that, I have to generate the images in R with
  postscript (I want a vector format to keep the quality). The problem is
  that
   the files of the images are very large (>10MB) and I have many images to
 put
  into the pdf document.
  I want to know if there is a way to reduce the size of those images
  generated by R using postscript.
 
  Thank you in advance,
 
  Talita
 
 
 
   Not in any extremely easy way.  The fundamental problem is
  that if you have a whole lot of points in your graph, it's hard
  to make them take less file space even if they're overplotted
  (and hence not visible in the actual image).
   This has been discussed in various forms on the R list in the past,
  but I can't locate those posts easily.  It's a little hard without
 knowing
  what kind of plot you're generating, but I'm assuming that you have
  many, many points or lines in the graphic (or a very high-resolution
  image plot), and that the details don't all show up in the figure
 anyway.
  A few general strategies:
 
   * thin the points down to a random subset
   * use a 2D density plot or hexagonal binning
   * create a  bitmap (PNG) plot, then use image
  manipulation tools (ImageMagick etc.) to convert that back to
  a PostScript file
   * there was some discussion earlier about whether one
  could embed a bitmap of just the internals of the plot, leaving
  the axes, labels etc. in vector format, but I don't think that
  came to anything
 
   good luck
Ben Bolker
 
  --
  View this message in context:
 http://www.nabble.com/Help-with-postscript-%28huge-file-size%29-tp23003428p23004309.html
  Sent from the R help mailing list archive at Nabble.com.
 
  __
  R-help@r-project.org mailing list
  https://stat.ethz.ch/mailman/listinfo/r-help
  PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
  and provide commented, minimal, self-contained, reproducible code.
 
 
 
  --
  Talita Perciano
  Instituto de Matemática e Estatísitca
  Universidade de São Paulo - USP
  PhD Student in Computer Science
  São Paulo, SP, Brazil
  Tel: +55 11 8826 7092
 
  Success is not final, failure is not fatal: it is the courage to
 continue that counts.
  (Winston Churchill)
 


 --
 Ben Bolker
 Associate professor, Biology Dep't, Univ. of Florida
 bol...@ufl.edu / www.zoology.ufl.edu/bolker
 GPG key: www.zoology.ufl.edu/bolker/benbolker-publickey.asc




 --
 Talita Perciano
 Instituto de Matemática e Estatísitca
 Universidade de São Paulo - USP
 PhD Student in Computer Science
 São Paulo, SP, Brazil
 Tel: +55 11 8826 7092

 Success is not final, failure is not fatal: it is the courage to continue
 that counts.
 (Winston Churchill)




-- 
Talita Perciano
Instituto de Matemática e Estatísitca
Universidade de São Paulo - USP
PhD Student in Computer Science
São Paulo, SP, Brazil
Tel: +55 11 8826 7092

Success is not final, failure is not fatal: it is the courage to continue
that counts.
(Winston Churchill)

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Help with postscript (huge file size)

2009-04-12 Thread hadley wickham
 I'm generating some images in R to put into a document that I'm producing
 using Latex. This document in Latex is following a predefined model, which
 does not accept compilation with pdflatex, so I have to compile with latex
 -> dvi -> pdf. Because of that, I have to generate the images in R with
 postscript (I want a vector format to keep the quality). The problem is that
 the files of the images are very large (>10MB) and I have many images to put
 into the pdf document.
 I want to know if there is a way to reduce the size of those images
 generated by R using postscript.

Just use a high-resolution png or tiff.  At 300 dpi you won't be able
to tell the difference when it's printed.
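
A minimal sketch of that suggestion (the file name, dimensions and the
plotted data are made up for illustration):

png("figure.png", width = 6, height = 6, units = "in", res = 300)  # 300 dpi bitmap
plot(rnorm(1e4), rnorm(1e4), pch = ".")                            # stand-in for the real image plot
dev.off()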

Hadley

-- 
http://had.co.nz/

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] lmer overdispersion

2009-04-12 Thread Jonathan Williams
I got a similar problem when I used family=quasibinomial with my data. But, the 
problem disappeared when I used family=binomial. I assumed that Douglas Bates 
et al. had amended the lmer program to detect over-dispersion, so that it is no 
longer necessary to specify its possible presence with family=quasi... But, I 
may be wrong. If you get more information about this from the great man, then 
would you please let me know?

Thanks,

Jonathan Williams

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Bug in col2rgb?

2009-04-12 Thread Duncan Murdoch

On 31/03/2009 12:53 PM, Duncan Murdoch wrote:

On 3/31/2009 12:29 PM, hadley wickham wrote:

col2rgb("#00000079", TRUE)

  [,1]
red  0
green0
blue 0
alpha  121

col2rgb("#00000080", TRUE)

  [,1]
red255
green  255
blue   255
alpha0

col2rgb("#00000081", TRUE)

  [,1]
red  0
green0
blue 0
alpha  129


Any ideas?


The "#00000080" string converts into the hex integer 0x80000000, which, by 
an unfortunate coincidence, is the NA_integer_ value.  Since NA_integer_ 
becomes the background colour, you get white instead of black with 
alpha=0x80.


This can probably be fixed (if you have a string, there's a different 
way to know you have an NA).  Want to work out a patch?  The file to 
look at is src/main/colors.c.


I've added a fix for this to R-devel.  After 2.9.0 is released, I'll 
move it into R-patched.


Duncan Murdoch

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Running random forest using different training and testing schemes

2009-04-12 Thread Chrysanthi A.
Hi,

I would like to run the random forest classification algorithm and check the
accuracy of the prediction according to different training and testing
schemes. For example, extracting 70% of the samples for training and the
rest for testing, or using a 10-fold cross-validation scheme.
How can I do that? Is there a function?

Thanks a lot,

Chrysanthi.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Re : Running random forest using different training and testing schemes

2009-04-12 Thread Pierre Moffard
Hi Chysanthi,

check out the randomForest package, with the function randomForest. It has a CV 
option. Sorry for not providing you with a lengthier response at the moment but 
I'm rather busy on a project. Let me know if you need more help.

Also, to split your data into two parts- the training and the test set you can 
do (n the number of data points):
n <- length(data[,1])
indices <- sample(rep(c(TRUE,FALSE), each=n/2), round(n/2), replace=TRUE)
training_indices <- (1:n)[indices]
test_indices <- (1:n)[!indices]

Then, data[training_indices,] is the training set and data[test_indices,] is the test set.
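
For the 70/30 split mentioned in the question, a minimal sketch along
these lines could be (here data and a factor response result are
placeholders, and the randomForest package is assumed to be installed):

library(randomForest)
set.seed(123)
n     <- nrow(data)
train <- sample(n, round(0.7 * n))                    # 70% of the rows for training
fit   <- randomForest(result ~ ., data = data[train, ])
pred  <- predict(fit, newdata = data[-train, ])       # predict the held-out 30%
mean(pred == data$result[-train])                     # test-set accuracy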

Best,
Pierre



De : Chrysanthi A. chrys...@gmail.com
À : r-help@r-project.org
Envoyé le : Dimanche, 12 Avril 2009, 17h26mn 59s
Objet : [R] Running random forest using different training and testing schemes

Hi,

I would like to run random Forest classification algorithm and check the
accuracy of the prediction according to different training and testing
schemes. For example, extracting 70% of the samples for training and the
rest for testing, or using 10-fold cross validation scheme.
How can I do that? Is there a function?

Thanks a lot,

Chrysanthi.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Re : Running random forest using different training and testing schemes

2009-04-12 Thread Pierre Moffard


you need to include in your code something like:

tree <- rpart(result ~ ., data, control = rpart.control(xval = 10))

this xval=10 is 10-fold CV.

Best,
Pierre



De : Chrysanthi A. chrys...@gmail.com
À : r-help@r-project.org
Envoyé le : Dimanche, 12 Avril 2009, 17h26mn 59s
Objet : [R] Running random forest using different training and testing schemes

Hi,

I would like to run random Forest classification algorithm and check the
accuracy of the prediction according to different training and testing
schemes. For example, extracting 70% of the samples for training and the
rest for testing, or using 10-fold cross validation scheme.
How can I do that? Is there a function?

Thanks a lot,

Chrysanthi.

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



  
[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Generating random integers

2009-04-12 Thread Paul Smith
On Sun, Apr 12, 2009 at 1:21 PM, jim holtman jholt...@gmail.com wrote:
 floor(runif(1000, 1,101))

   I need to generate random integer(s) in a range (say that beetween 1 to
   100) in R.

Another way:

sample(1:100,1000,replace=T)

Paul

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Convert string to time

2009-04-12 Thread Peter Kraglund Jacobsen
One variable contains values such as 1.30 (one hour and thirty minutes)
and 1.2 (which is supposed to be 1.20, i.e. one hour and twenty
minutes). I would like to convert these to a minutes variable, so that
1.2 becomes 80 minutes. How?

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] goodness of fit between two samples of size N (discrete variable)

2009-04-12 Thread jose romero

Hello list:

I generate by simulation (using different procedures) two sample vectors of 
size N, each corresponding to a discrete variable, and I want to test whether 
these samples can be considered as having the same probability distribution 
(which is unknown).  What is the best test for that? 
I've read that the Kolmogorov-Smirnov and Anderson-Darling tests are restricted 
to continuous data 
(http://cran.r-project.org/doc/contrib/Ricci-distributions-en.pdf), while the 
chi-square test can handle discrete data, but how do I test (in R) equivalence 
of distribution in two samples using it? Are there better tests than those I 
mentioned?

Thanks and regards,
jlrp

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Convert string to time

2009-04-12 Thread Dirk Eddelbuettel

On 12 April 2009 at 21:00, Peter Kraglund Jacobsen wrote:
| One variable contains values such as 1.30 (one hour and thirty minutes)
| and 1.2 (which is supposed to be 1.20, i.e. one hour and twenty
| minutes). I would like to convert these to a minutes variable, so that
| 1.2 becomes 80 minutes. How?

The littler sources have a script pace.r (see below) that does some simple
transformations so that it can come up with min/mile and min/km expression
given a distance (in miles) and a time (in fractional minutes) as 'read' --
eg in the example below 76.02 stands for 1 hour 16 minutes and 2 seconds.

In a nutshell, you divide and keep score of remainders.

There may be more compact ways to do this---pace.r is a pretty linear
translation of an earlier Octave script pace.m.

Hth, Dirk

e...@ron:~ pace.r 10.20 76.02
Miles: 10.2
Time : 76.02
Pace/m   :  7 min 27.25 sec
Pace/km  :  4 min 37.91 sec
e...@ron:~
e...@ron:~ cat bin/pace.r
#!/usr/bin/env r
#
# a simple example to convert miles and times into a pace
# where the convention is that we write e.g. 37 min 15 secs
# as 37.15 -- so a call 'pace.r 4.5 37.15' yields a pace of
# 8.1667, ie 8 mins 16.67 secs per mile

if (is.null(argv) | length(argv)!=2) {

  cat("Usage: pace.r miles time\n")
  q()

}

dig <- 6

rundist <- as.numeric(argv[1])
runtime <- as.numeric(argv[2])

cat("Miles\t :", format(rundist, digits=dig), "\n")
cat("Time\t :", format(runtime, digits=dig), "\n")

totalseconds <- floor(runtime)*60 + (runtime-floor(runtime))*100
totalsecondspermile <- totalseconds / rundist
minutespermile <- floor(totalsecondspermile/60)
secondspermile <- totalsecondspermile - minutespermile*60

totalsecondsperkm <- totalseconds / (rundist * 1.609344)
minutesperkm <- floor(totalsecondsperkm/60)
secondsperkm <- totalsecondsperkm - minutesperkm*60

pace   <- minutespermile + secondspermile/100
pacekm <- minutesperkm + secondsperkm/100

cat(sprintf("Pace/m\t : % 2.0f min %05.2f sec\n",
minutespermile, secondspermile))
cat(sprintf("Pace/km\t : % 2.0f min %05.2f sec\n",
minutesperkm, secondsperkm))
e...@ron:~
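
For the specific conversion asked about (an hours.minutes value such as
1.30 or 1.2 into total minutes), a minimal sketch could be (to_minutes
is a made-up helper name):

to_minutes <- function(x) {
  hours   <- floor(x)
  minutes <- round((x - hours) * 100)   # the fractional part encodes minutes, so 1.2 means 20
  hours * 60 + minutes
}
to_minutes(c(1.30, 1.2))   # 90 80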



| __
| R-help@r-project.org mailing list
| https://stat.ethz.ch/mailman/listinfo/r-help
| PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
| and provide commented, minimal, self-contained, reproducible code.

-- 
Three out of two people have difficulties with fractions.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] p-values from bootstrap - what am I not understanding?

2009-04-12 Thread Johan Jackson
Dear stats experts:
Me and my little brain must be missing something regarding bootstrapping. I
understand how to get a 95%CI and how to hypothesis test using bootstrapping
(e.g., reject or not the null). However, I'd also like to get a p-value from
it, and to me this seems simple, but it seems no-one does what I would like
to do to get a p-value, which suggests I'm not understanding something.
Rather, it seems that when people want a p-value using resampling methods,
they immediately jump to permutation testing (e.g., destroying dependencies
so as to create a null distribution). SO - here's my thought on getting a
p-value by bootstrapping. Could someone tell me what is wrong with my
approach? Thanks:

STEPS TO GETTING P-VALUES FROM BOOTSTRAPPING - PROBABLY WRONG:

1) sample B times with replacement, figure out theta* (your statistic of
interest). B is large (> 1000)

2) get the distribution of theta*

3) the mean of theta* is generally near your observed theta. In the same way
that we use non-centrality parameters in other situations, move the
distribution of theta* such that the distribution is centered around the
value corresponding to your null hypothesis (e.g., make the distribution
have a mean theta = 0)

4) Two methods for finding 2-tailed p-values (assuming here that your
observed theta is above the null value):
Method 1: find the percent of recentered theta*'s that are above your
observed theta. p-value = 2 * this percent
Method 2: find the percent of recentered theta*'s that are above the
absolute value of your observed value. This is your p-value.

So this seems simple. But I can't find people discussing this. So I'm
thinking I'm wrong. Could someone explain where I've gone wrong?
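
For concreteness, a minimal sketch of the recipe described above, applied
to a sample mean with null value 0 (purely illustrative):

set.seed(1)
x <- rnorm(30, mean = 0.4)                                      # toy data
theta.obs  <- mean(x)
theta.star <- replicate(5000, mean(sample(x, replace = TRUE)))  # steps 1-2
theta.null <- theta.star - mean(theta.star)                     # step 3: recentre at the null
p1 <- 2 * mean(theta.null >= theta.obs)                         # step 4, method 1
p2 <- mean(abs(theta.null) >= abs(theta.obs))                   # step 4, method 2
c(p1, p2)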


J Jackson

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Running random forest using different training and testing schemes

2009-04-12 Thread Max Kuhn
There is also the train function in the caret package. The  
trainControl function can be used to try different resampling schemes.  
There is also a package vignette with details.
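
A minimal sketch of how that might be set up (mydata and result are
placeholders; see the vignette for the details):

library(caret)
ctrl <- trainControl(method = "cv", number = 10)   # 10-fold cross-validation
fit  <- train(result ~ ., data = mydata, method = "rf", trControl = ctrl)
fit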


Max


On Apr 12, 2009, at 12:26 PM, Chrysanthi A. chrys...@gmail.com  
wrote:



Hi,

I would like to run random Forest classification algorithm and check  
the

accuracy of the prediction according to different training and testing
schemes. For example, extracting 70% of the samples for training and  
the

rest for testing, or using 10-fold cross validation scheme.
How can I do that? Is there a function?

Thanks a lot,

Chrysanthi.

   [[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Problem with Loop and overwritten results

2009-04-12 Thread unbekannt


 Dear all,

 I am a newbie to R and practising at the moment.

 Here is my problem:

 I have a programme with 2 loops involved.
 The inner loop gets me matrices as output and saves all the values for me.

 Once I wrote a 2nd loop around the inner loop in order to
 repeat it a couple of times, the results were overwritten and
 I found no way to actually collect the output results in a vector.

 I can receive single results only, like this

 [1] number
 [1] number2


but I want it rather like this

[1] number number2 number3 ...

Any advice?

Thanks
unbekannter weise

 
-- 
View this message in context: 
http://www.nabble.com/Problem-with-Loop-and-overwritten-results-tp23013391p23013391.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] First Derivative of Data Matrix

2009-04-12 Thread thaumaturgy

I am really new to R and ran across a need to take a data matrix and
calculate an approximation of the first derivative of the data.  I am more
than happy to do an Excel kind of calculation (deltaY/deltaX) for each
pair of rows down the matrix, but I don't know how to get R to do that kind
of calculation.  I'd like to store it as a 3rd column in the matrix as well.

My data looks like this:
 acflong
1  1.000
2  0.9875858
3  0.9871751
4  0.9867585
5  0.9863358
6  0.9859070
7  0.9854721
8  0.9850316
9  0.9817161
10 0.9812650

and I'd like to generate a table like this:

 acflong  dacflong/dx
1  1.000
2  0.9875858-0.01241  #delta(acflong)/delta(index)
3  0.9871751-0.00041
4  0.9867585-0.00042
5  0.9863358-0.00042
6  0.9859070-0.00043
7  0.9854721-0.00043
8  0.9850316-0.00044
9  0.9817161-0.00033
10 0.9812650   -0.00045

Is there a way to do this in R and how do I eliminate the first line of the
data?

Thanks,
-Chris
-- 
View this message in context: 
http://www.nabble.com/First-Derivative-of-Data-Matrix-tp23012026p23012026.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Generalised Rejection Sampling

2009-04-12 Thread Mary Winter

Hi,

 

I am trying to figure out the observed acceptance rate and M, using generalised 
rejection sampling to generate a sample from the posterior distribution for p.

 

I have been told my code doesn't work because I need to take the log of the 
expression for M, evaluate it and then exponentiate the result. This is 
because R is unable to calculate such high powers (exponents as large as 545.501).

 

As you can see in my code I have tried taking the log of M and then the 
exponential of the result, but I clearly must be doing something wrong. 

I keep getting the error message:

 

Error in if (U <= ratio/exp(M)) { : missing value where TRUE/FALSE needed

 

Any ideas how I go about correctly taking the log and then the exponential?

 

rvonmises.norm <- function(n,alpha,beta) {
out <- rep(0,n)
counter <- 0
total.sim <- 0
p <- alpha/(alpha+beta)
M <- log((((alpha-1)^(alpha-1))*((beta-1)^(beta-1)))/((beta+alpha-2)^(alpha+beta-2)))
while(counter < n) {
total.sim <- total.sim+1
proposal <- runif(1)
if(proposal >= 0 && proposal <= 1) {
U <- runif(1)
ratio <- (p^(alpha-1))*((1-p)^(beta-1))
if(U <= ratio/exp(M)) {
counter <- counter+1
out[counter] <- proposal
}
}
}
obs.acc.rate <- n/total.sim
return(out,obs.acc.rate,M)
}

set.seed(220)
temp - rvonmises.norm(1,439.544,545.501)
print(temp$obs.acc.rate)

 

Louisa



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] taking the log then later exponentiate the result query

2009-04-12 Thread Mary Winter


 Hi,
 
I am trying to figure out the observed acceptance rate and M, using generalised 
rejection sampling to generate a sample from the posterior distribution for p.
 
I have been told my code doesn't work because I need to take the log of the 
expression for M, evaluate it and then exponentiate the result. This is 
because R is unable to calculate such high powers (exponents as large as 545.501).
 
As you can see in my code I have tried taking the log of M and then the 
exponential of the result, but I clearly must be doing something wrong. 
I keep getting the error message:
 
Error in if (U <= ratio/exp(M)) { : missing value where TRUE/FALSE needed
 
Any ideas how I go about correctly taking the log and then the exponential?
 
rvonmises.norm <- function(n,alpha,beta) {
out <- rep(0,n)
counter <- 0
total.sim <- 0
p <- alpha/(alpha+beta)
M <- log((((alpha-1)^(alpha-1))*((beta-1)^(beta-1)))/((beta+alpha-2)^(alpha+beta-2)))
while(counter < n) {
total.sim <- total.sim+1
proposal <- runif(1)
if(proposal >= 0 && proposal <= 1) {
U <- runif(1)
ratio <- (p^(alpha-1))*((1-p)^(beta-1))
if(U <= ratio/exp(M)) {
counter <- counter+1
out[counter] <- proposal
}
}
}
obs.acc.rate <- n/total.sim
return(out,obs.acc.rate,M)
}
set.seed(220)
temp <- rvonmises.norm(1,439.544,545.501)
print(temp$obs.acc.rate)
set.seed(220)
temp - rvonmises.norm(1,439.544,545.501)
print(temp$obs.acc.rate)
 
Louisa





[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Problem with Loop and overwritten results

2009-04-12 Thread David Winsemius

Dear unbekannt;

The construction that would append a number to a numeric vector would  
be:

vec <- c(vec, number)

You can create an empty vector with vec <- c() or vec <- NULL
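
A minimal sketch of how this looks inside two nested loops (the
computation is a stand-in for the real one):

results <- c()                  # start with an empty vector
for (i in 1:3) {                # outer loop
  for (j in 1:4) {              # inner loop producing one number per pass
    number <- i * j             # stand-in for the real computation
    results <- c(results, number)
  }
}
results                         # all twelve values, not just the last one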

--
David Winsemius

On Apr 12, 2009, at 2:10 PM, unbekannt wrote:




Dear all,

I am a newbie to R and practising at the moment.

Here is my problem:

I have a programme with 2 loops involved.
The inner loop get me matrices as output and safes all values for me.

Now once I wrote a 2nd loop around the other loop in order to
repeat the inner loop a couple of times, the results are overwritten  
and

i found no way how to actually put the output results in a vector.

I can receive single results only, like this

[1] number
[1] number2


but it want it rather like this

[1] number number2 number3 ...

any advice?

Thanks
unbekannter weise


--
View this message in context: 
http://www.nabble.com/Problem-with-Loop-and-overwritten-results-tp23013391p23013391.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] taking the log then later exponentiate the result query

2009-04-12 Thread Daniel Nordlund
 -Original Message-
 From: r-help-boun...@r-project.org 
 [mailto:r-help-boun...@r-project.org] On Behalf Of Mary Winter
 Sent: Sunday, April 12, 2009 1:39 PM
 To: r-help@r-project.org
 Subject: [R] taking the log then later exponentiate the result query
 
 
 
  Hi,
  
 I am trying to figure out the observed acceptance rate and M, 
 using generalised rejection sampling to generate a sample 
 from the posterior distribution for p.
  
 I have been told my code doesn't work because I need to  
 take the log of the expression for M, evaluate it and then 
 exponentiate the result. This is because R is unable to 
 calculate high powers such as 545.501.
  
 As you can see in my code I have tried to taking the log of M 
 and then the exponential of the result, but I clearly must be 
 doing something wrong. 
 I keep getting the error message:
  
 Error in if (U <= ratio/exp(M)) { : missing value where 
 TRUE/FALSE needed
  
 Any ideas how I go about correctly taking the log and then 
 the exponential?
  
 rvonmises.norm <- function(n,alpha,beta) {
 out <- rep(0,n)
 counter <- 0
 total.sim <- 0
 p <- alpha/(alpha+beta)
 M <- log((((alpha-1)^(alpha-1))*((beta-1)^(beta-1)))/((beta+alpha-2)^(alpha+beta-2)))
 while(counter < n) {
 total.sim <- total.sim+1
 proposal <- runif(1)
 if(proposal >= 0 && proposal <= 1) {
 U <- runif(1)
 ratio <- (p^(alpha-1))*((1-p)^(beta-1))
 if(U <= ratio/exp(M)) {
 counter <- counter+1
 out[counter] <- proposal
 }
 }
 }
 obs.acc.rate <- n/total.sim
 return(out,obs.acc.rate,M)
 }
 set.seed(220)
 temp - rvonmises.norm(1,439.544,545.501)
 print(temp$obs.acc.rate)
  
 Louisa
 
 

I think when someone told you to take the log of the calculation, they
meant for you to simplify the logarithmic calculation algebraically so that
you are not exponentiating large numbers.  Try changing your calculation of
M to (I think this is right)

M <- (alpha-1)*log(alpha-1) + (beta-1)*log(beta-1) -
(alpha+beta-2)*log(alpha+beta-2)
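
An illustrative check, with the alpha and beta from the original post,
that the direct ratio of powers overflows to NaN while the log-scale
version stays finite:

alpha <- 439.544; beta <- 545.501
(((alpha-1)^(alpha-1))*((beta-1)^(beta-1)))/((beta+alpha-2)^(alpha+beta-2))  # Inf/Inf = NaN
M <- (alpha-1)*log(alpha-1) + (beta-1)*log(beta-1) - (alpha+beta-2)*log(alpha+beta-2)
M                                                                            # finite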

Hope this is helpful,

Dan

Daniel Nordlund
Bothell, WA USA

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] p-values from bootstrap - what am I not understanding?

2009-04-12 Thread Peter Dalgaard

Johan Jackson wrote:

Dear stats experts:
Me and my little brain must be missing something regarding bootstrapping. I
understand how to get a 95%CI and how to hypothesis test using bootstrapping
(e.g., reject or not the null). However, I'd also like to get a p-value from
it, and to me this seems simple, but it seems no-one does what I would like
to do to get a p-value, which suggests I'm not understanding something.
Rather, it seems that when people want a p-value using resampling methods,
they immediately jump to permutation testing (e.g., destroying dependencies
so as to create a null distribution). SO - here's my thought on getting a
p-value by bootstrapping. Could someone tell me what is wrong with my
approach? Thanks:

STEPS TO GETTING P-VALUES FROM BOOTSTRAPPING - PROBABLY WRONG:

1) sample B times with replacement, figure out theta* (your statistic of
interest). B is large (> 1000)

2) get the distribution of theta*

3) the mean of theta* is generally near your observed theta. In the same way
that we use non-centrality parameters in other situations, move the
distribution of theta* such that the distribution is centered around the
value corresponding to your null hypothesis (e.g., make the distribution
have a mean theta = 0)

4) Two methods for finding 2-tailed p-values (assuming here that your
observed theta is above the null value):
Method 1: find the percent of recentered theta*'s that are above your
observed theta. p-value = 2 * this percent
Method 2: find the percent of recentered theta*'s that are above the
absolute value of your observed value. This is your p-value.

So this seems simple. But I can't find people discussing this. So I'm
thinking I'm wrong. Could someone explain where I've gone wrong?



There's nothing particularly wrong about this line of reasoning, or at 
least not (much) worse than the calculation of CI. After all, one 
definition of a CI at level 1-alpha is that it contains values of theta0 
for which the hypothesis theta=theta0 is accepted at level alpha. (Not 
the only possible definition, though.)


The crucial bit in both cases is the assumption of approximate 
translation invariance, which holds asymptotically, but maybe not well 
enough in small samples.


There are some braintwisters connected with the bootstrap; e.g., if the 
bootstrap distribution is skewed to the right, should the CI be skewed 
to the right or to the left? The answer is that it cannot be decided 
based on the distribution of theta* alone since that depends only on the 
true theta, and we need to know what the distribution would have been 
had a different theta been the true one.


The point is that these things get tricky, so most people head for the 
safe haven of permutation testing, where it is rather more easy to feel 
that you know what you are doing.


For a rather different approach, you might want to look into the theory 
of empirical likelihood (book by Art Owen, or just Google it).


--
   O__   Peter Dalgaard Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics PO Box 2099, 1014 Cph. K
 (*) \(*) -- University of Copenhagen   Denmark  Ph:  (+45) 35327918
~~ - (p.dalga...@biostat.ku.dk)  FAX: (+45) 35327907

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] First Derivative of Data Matrix

2009-04-12 Thread David Winsemius
delta(index) is identically 1, so taking first differences is all that  
is needed. If the dataframe's name is df then:

df$dacflong_dx <- c(NA, diff(df$acflong)) # the slash would not be a  
legal character in a variable name unless you jumped through some  
hoops that appear entirely without value

If you want to get rid of the first line of df then

df[-1, ]
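
A worked example with the first few values from the question:

df <- data.frame(acflong = c(1.0000000, 0.9875858, 0.9871751, 0.9867585))
df$dacflong_dx <- c(NA, diff(df$acflong))   # delta(acflong)/delta(index), step = 1
df
#     acflong dacflong_dx
# 1 1.0000000          NA
# 2 0.9875858  -0.0124142
# 3 0.9871751  -0.0004107
# 4 0.9867585  -0.0004166
df[-1, ]                                    # drop the first row if the NA is unwanted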

--
David Winsemius


On Apr 12, 2009, at 11:55 AM, thaumaturgy wrote:



I am really new to R and ran across a need to take a data matrix and
calculate an approximation of the first derivative of the data.  I  
am more
than happy to do an Excel kind of calculation (deltaY/deltaX) for  
each
pair of rows down the matrix, but I don't know how to get R to do  
that kind
of calculation.  I'd like to store it as a 3rd column in the matrix  
as well.


My data looks like this:
acflong
1  1.000
2  0.9875858
3  0.9871751
4  0.9867585
5  0.9863358
6  0.9859070
7  0.9854721
8  0.9850316
9  0.9817161
10 0.9812650

and I'd like to generate a table like this:

acflong  dacflong/dx
1  1.000
2  0.9875858-0.01241  #delta(acflong)/delta(index)
3  0.9871751-0.00041
4  0.9867585-0.00042
5  0.9863358-0.00042
6  0.9859070-0.00043
7  0.9854721-0.00043
8  0.9850316-0.00044
9  0.9817161-0.00033
10 0.9812650   -0.00045

Is there a way to do this in R and how do I eliminate the first line  
of the

data?

Thanks,
-Chris
--
View this message in context: 
http://www.nabble.com/First-Derivative-of-Data-Matrix-tp23012026p23012026.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] taking the log then later exponentiate the result query

2009-04-12 Thread Mike Lawrence
Your problem is that with the alpha and beta you've specified

(((alpha-1)^(alpha-1))*((beta-1)^(beta-1)))/((beta+alpha-2)^(alpha+beta-2))

is

Inf/Inf

which is NaN.



On Sun, Apr 12, 2009 at 5:39 PM, Mary Winter statsstud...@hotmail.com wrote:


  Hi,

 I am trying to figure out the observed acceptance rate and M, using 
 generalised rejection sampling to generate a sample from the posterior 
 distribution for p.

 I have been told my code doesn't work because I need to  take the log of the 
 expression for M, evaluate it and then exponentiate the result. This is 
 because R is unable to calculate high powers such as 545.501.

 As you can see in my code I have tried to taking the log of M and then the 
 exponential of the result, but I clearly must be doing something wrong.
 I keep getting the error message:

 Error in if (U <= ratio/exp(M)) { : missing value where TRUE/FALSE needed

 Any ideas how I go about correctly taking the log and then the exponential?

 rvonmises.norm <- function(n,alpha,beta) {
 out <- rep(0,n)
 counter <- 0
 total.sim <- 0
 p <- alpha/(alpha+beta)
 M <- log((((alpha-1)^(alpha-1))*((beta-1)^(beta-1)))/((beta+alpha-2)^(alpha+beta-2)))
 while(counter < n) {
 total.sim <- total.sim+1
 proposal <- runif(1)
 if(proposal >= 0 && proposal <= 1) {
 U <- runif(1)
 ratio <- (p^(alpha-1))*((1-p)^(beta-1))
 if(U <= ratio/exp(M)) {
 counter <- counter+1
 out[counter] <- proposal
 }
 }
 }
 obs.acc.rate <- n/total.sim
 return(out,obs.acc.rate,M)
 }
 set.seed(220)
 temp - rvonmises.norm(1,439.544,545.501)
 print(temp$obs.acc.rate)

 Louisa





        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University

Looking to arrange a meeting? Check my public calendar:
http://tinyurl.com/mikes-public-calendar

~ Certainty is folly... I think. ~

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] p-values from bootstrap - what am I not understanding?

2009-04-12 Thread Matthew Keller
Hi Johan,

Interesting question. I'm (trying) to write a lecture on this as we
speak. I'm no expert, but here are my two cents.

I think that your method works fine WHEN the sampling distribution
doesn't change its variance or shape depending on where it's centered.
Of course, for normally, t-, or chi-square distributed statistics,
this is the case, which is why it's fine to do this using traditional
statistical methods. However, there are situations where this might
not be the case (e.g., there may be a mean-variance relationship), and
since we would like a general method of getting valid p-values that
doesn't depend on strong assumptions, this probably isn't the way to
go. Permutation would seem to work better because you are simulating
the null process. However, figuring out how to permute the data in a
way that creates the null you want while retaining all the
dependencies, missingness patterns, etc in your data can be
difficult/impossible.

Hope that helps...

Matt


On Sun, Apr 12, 2009 at 4:38 PM, Peter Dalgaard
p.dalga...@biostat.ku.dk wrote:
 Johan Jackson wrote:

 Dear stats experts:
 Me and my little brain must be missing something regarding bootstrapping.
 I
 understand how to get a 95%CI and how to hypothesis test using
 bootstrapping
 (e.g., reject or not the null). However, I'd also like to get a p-value
 from
 it, and to me this seems simple, but it seems no-one does what I would
 like
 to do to get a p-value, which suggests I'm not understanding something.
 Rather, it seems that when people want a p-value using resampling methods,
 they immediately jump to permutation testing (e.g., destroying
 dependencies
 so as to create a null distribution). SO - here's my thought on getting a
 p-value by bootstrapping. Could someone tell me what is wrong with my
 approach? Thanks:

 STEPS TO GETTING P-VALUES FROM BOOTSTRAPPING - PROBABLY WRONG:

 1) sample B times with replacement, figure out theta* (your statistic of
  interest). B is large (> 1000)

 2) get the distribution of theta*

 3) the mean of theta* is generally near your observed theta. In the same
 way
 that we use non-centrality parameters in other situations, move the
 distribution of theta* such that the distribution is centered around the
 value corresponding to your null hypothesis (e.g., make the distribution
 have a mean theta = 0)

 4) Two methods for finding 2-tailed p-values (assuming here that your
 observed theta is above the null value):
 Method 1: find the percent of recentered theta*'s that are above your
 observed theta. p-value = 2 * this percent
 Method 2: find the percent of recentered theta*'s that are above the
 absolute value of your observed value. This is your p-value.

 So this seems simple. But I can't find people discussing this. So I'm
 thinking I'm wrong. Could someone explain where I've gone wrong?


 There's nothing particularly wrong about this line of reasoning, or at least
 not (much) worse than the calculation of CI. After all, one definition of a
 CI at level 1-alpha is that it contains values of theta0 for which the
 hypothesis theta=theta0 is accepted at level alpha. (Not the only possible
 definition, though.)

 The crucial bit in both cases is the assumption of approximate translation
 invariance, which holds asymptotically, but maybe not well enough in small
 samples.

 There are some braintwisters connected with the bootstrap; e.g., if the
 bootstrap distribution is skewed to the right, should the CI be skewed to
 the right or to the left? The answer is that it cannot be decided based on
 the distribution of theta* alone since that depends only on the true theta,
 and we need to know what the distribution would have been had a different
 theta been the true one.

 The point is that these things get tricky, so most people head for the safe
 haven of permutation testing, where it is rather more easy to feel that you
 know what you are doing.

 For a rather different approach, you might want to look into the theory of
 empirical likelihood (book by Art Owen, or just Google it).

 --
   O__   Peter Dalgaard             Øster Farimagsgade 5, Entr.B
  c/ /'_ --- Dept. of Biostatistics     PO Box 2099, 1014 Cph. K
  (*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
 ~~ - (p.dalga...@biostat.ku.dk)              FAX: (+45) 35327907

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Matthew C Keller
Asst. Professor of Psychology
University of Colorado at Boulder
www.matthewckeller.com

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible 

Re: [R] First Derivative of Data Matrix

2009-04-12 Thread spencerg
 However, estimating derivatives from differencing data amplifies 
minor errors.  Less noisy estimates can be obtained by first smoothing 
and then differentiating the smooth.  The fda package provides 
substantial facilities for this. 
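
As a simple base-R illustration of the smooth-then-differentiate idea
(using smooth.spline here rather than the fda machinery, just to show
the principle):

x <- 1:100
y <- exp(-x/30) + rnorm(100, sd = 0.01)       # noisy decaying signal
raw.deriv    <- c(NA, diff(y))                # plain differencing amplifies the noise
fit          <- smooth.spline(x, y)
smooth.deriv <- predict(fit, x, deriv = 1)$y  # derivative of the smoothed curve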

 Hope this helps. 
 Spencer Graves


David Winsemius wrote:
delta(index) is identically 1, so taking first differences is all that 
is needed. If the dtatframe's name is df then:


df$dacflong_dx - c(NA, diff(acflong)) # the slash would not be a 
legal character in a variable name unless you jumped through some 
hoops that appear entirely without value


If you want to get rid of the first line of df then

df[-1]



__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] goodness of fit between two samples of size N (discrete variable)

2009-04-12 Thread David Winsemius


On Apr 12, 2009, at 3:09 PM, jose romero wrote:



Hello list:

I generate by simulation (using different procedures) two sample  
vectors of size N, each corresponding to a discrete variable and I  
want to text if these samples can be considered as having the same  
probability distribution (which is unknown).  What is the best test  
for that?
I've read that Kolmogorov-Smirnov and Anderson-Darling tests are  
restricted to continuous data (http://cran.r-project.org/doc/contrib/Ricci-distributions-en.pdf 
), while chi-square can handle discrete data, but how do i test (in  
R) equivalence of ditribution in 2 samples using it? Are there  
better tests than those i mentioned?


The question of whether two discrete samples are independent,  
conditional on their joint marginals is generally handled with a chi- 
square test. The theoretical distribution is only approximately chi- 
square, but it seems close enough that most people will accept it.  
This is not a test of equivalence. Ricci deals with the cases where  
one sample is fitted to a theoretical distribution. You do not seem to  
have that situation.


?chisq.test
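
A minimal sketch of that comparison for two simulated discrete samples
(purely illustrative data):

set.seed(1)
N  <- 200
s1 <- sample(1:5, N, replace = TRUE, prob = c(.1, .2, .4, .2, .1))
s2 <- sample(1:5, N, replace = TRUE, prob = c(.1, .2, .4, .2, .1))
tab <- table(sample = rep(c("s1", "s2"), each = N), value = c(s1, s2))
chisq.test(tab)   # tests independence of sample and value, i.e. same distribution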

I find myself wondering to what purpose you are seeking these answers.

David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Quantative procedure assessing if data is normal

2009-04-12 Thread Henry Cooper

Hi,

 

As part of an R code assignment I have been asked to find a quantitative 
procedure for assessing whether or not the data are normal.

 

I have previously used the graphical procedure using the qqnorm command.

 

Any help/tips would be greatly appreciated as to how I should start going about 
this!

 

Henry

 

 



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Physical Units in Calculations

2009-04-12 Thread Tom La Bone

Back in 2005 I had been doing most of my work in Mathcad for over 10 years.
For a number of reasons I decided to switch over to R. After much effort and
help from you folks I am finally thinking in R rather than thinking in
Mathcad and trying to translating to R. Anyway, the only task I still use
Mathcad for is calculations that involve physical quantities and units. For
example, in Mathcad I can add 1 kilometer to 1 mile and get the right answer
in the units of length I choose. Likewise, if I try to add 1 kilometer to 1
kilogram I get properly chastised. Is there a way in R to assign quantities
and units to numbers and have R keep track of them like Mathcad does? 

Tom
-- 
View this message in context: 
http://www.nabble.com/Physical-Units-in-Calculations-tp23016092p23016092.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] First Derivative of Data Matrix

2009-04-12 Thread thaumaturgy

David,
Thank you!  

-Chris
-- 
View this message in context: 
http://www.nabble.com/First-Derivative-of-Data-Matrix-tp23012026p23015941.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] p-values from bootstrap - what am I not understanding?

2009-04-12 Thread Robert A LaBudde
There is really nothing wrong with this approach, which differs 
primarily from the permutation test in that sampling is with 
replacement instead of without replacement (multinomial vs. multiple 
hypergeometric).


One of the issues that permutation tests don't have is bias in the statistic.

In order for bootstrap p-values to be reasonably accurate, you need a 
reasonable dataset size, so that sampling with replacement isn't a 
big effect, and so that enough patterns arise in resampling. It also 
helps if the data is continuous instead of categorical or binary.


The same issues affect permutation tests, but untroubled by bias.

The usual methods for p-values (e.g., see Fisher's test in Agresti's 
Categorical Analysis) work here. Typically there is some ambiguity on 
how to treat the values equal to the observed statistic. If you 
include it, the p-value is conservative for rejection. If you don't, 
it's liberal for rejection. If you include 1/2 weight, it averages 
correctly in the long run.


Ditto for 2-tailed p-values vs. single tails. Several different 
methods (some of which you listed) are used.


As a general rule, if you have data from which you wish a p-value, a 
permutation (i.e., without replacement) test is used, but for 
confidence intervals, bootstrapping (i.e., with replacement) is used.


For reasonably large datasets, both methods will agree closely. But 
permutation tests are typically used for smaller size datasets. 
(Think binomial vs. hypergeometric distributions for p-values, and when 
they agree.)
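
To make the distinction concrete, here is a minimal sketch (with purely illustrative data) of a two-sample permutation p-value alongside a bootstrap percentile interval for a difference in means:

set.seed(1)
x <- rnorm(20, mean = 1)          # hypothetical samples
y <- rnorm(20, mean = 0)
obs <- mean(x) - mean(y)
pooled <- c(x, y)

## permutation test (sampling without replacement): p-value
perm <- replicate(9999, {
  idx <- sample(length(pooled), length(x))
  mean(pooled[idx]) - mean(pooled[-idx])
})
(p.value <- (sum(abs(perm) >= abs(obs)) + 1) / (length(perm) + 1))

## bootstrap (sampling with replacement): percentile confidence interval
boot <- replicate(9999, mean(sample(x, replace = TRUE)) -
                        mean(sample(y, replace = TRUE)))
quantile(boot, c(0.025, 0.975))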


At 05:47 PM 4/12/2009, Johan Jackson wrote:

Dear stats experts:
Me and my little brain must be missing something regarding bootstrapping. I
understand how to get a 95%CI and how to hypothesis test using bootstrapping
(e.g., reject or not the null). However, I'd also like to get a p-value from
it, and to me this seems simple, but it seems no-one does what I would like
to do to get a p-value, which suggests I'm not understanding something.
Rather, it seems that when people want a p-value using resampling methods,
they immediately jump to permutation testing (e.g., destroying dependencies
so as to create a null distribution). SO - here's my thought on getting a
p-value by bootstrapping. Could someone tell me what is wrong with my
approach? Thanks:

STEPS TO GETTING P-VALUES FROM BOOTSTRAPPING - PROBABLY WRONG:

1) sample B times with replacement, figure out theta* (your statistic of
interest). B is large (> 1000)

2) get the distribution of theta*

3) the mean of theta* is generally near your observed theta. In the same way
that we use non-centrality parameters in other situations, move the
distribution of theta* such that the distribution is centered around the
value corresponding to your null hypothesis (e.g., make the distribution
have a mean theta = 0)

4) Two methods for finding 2-tailed p-values (assuming here that your
observed theta is above the null value):
Method 1: find the percent of recentered theta*'s that are above your
observed theta. p-value = 2 * this percent
Method 2: find the percent of recentered theta*'s that are above the
absolute value of your observed value. This is your p-value.

So this seems simple. But I can't find people discussing this. So I'm
thinking I'm wrong. Could someone explain where I've gone wrong?


J Jackson

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.



Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: r...@lcfltd.com
Least Cost Formulations, Ltd.URL: http://lcfltd.com/
824 Timberlake Drive Tel: 757-467-0954
Virginia Beach, VA 23464-3239Fax: 757-467-2947

Vere scire est per causas scire

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Clustered data with Design package--bootcov() vs. robcov()

2009-04-12 Thread jjh21

Hi,

I am trying to figure out exactly what the bootcov() function in the Design
package is doing within the context of clustered data. From reading the
documentation/source code it appears that using bootcov() with the cluster
argument constructs standard errors by resampling whole clusters of
observations with replacement rather than resampling individual
observations. Is that right, and is there any more detailed documentation on
the math behind this? Also, what is the difference between these two
functions:

bootcov(my.model, cluster.id)
robcov(my.model, cluster.id)
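
Conceptually, resampling whole clusters (rather than individual rows) can be sketched in base R like this; this is only an illustration of the idea, not the Design package's internal code, and dat/cluster.id are hypothetical names:

ids <- unique(dat$cluster.id)
boot.ids <- sample(ids, length(ids), replace = TRUE)   # draw whole clusters with replacement
boot.dat <- do.call(rbind,
                    lapply(boot.ids, function(i) dat[dat$cluster.id == i, ]))
# fit the model to boot.dat, repeat B times, and take the empirical
# covariance of the coefficient estimates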

Thank you.
-- 
View this message in context: 
http://www.nabble.com/Clustered-data-with-Design-package--bootcov%28%29-vs.-robcov%28%29-tp23016400p23016400.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Quantitative procedure assessing if data is normal

2009-04-12 Thread Mike Lawrence
Try searching www.rseek.org for "normality test".
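
For example, the Shapiro-Wilk test is one common quantitative check (x below is just a hypothetical sample):

x <- rnorm(100)        # hypothetical data
shapiro.test(x)        # small p-value suggests departure from normality
qqnorm(x); qqline(x)   # graphical check, for comparison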

On Sun, Apr 12, 2009 at 8:45 PM, Henry Cooper henry.1...@hotmail.co.uk wrote:

 Hi,



 As part of an R code assignment I have been asked to find a quantitative 
  procedure for assessing whether or not the data are normal.



 I have previously used the graphical procedure using the qqnorm command.



 Any help/tips would be greatly appreciated as to how I should start going 
 about this!



 Henry





 _


        [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.




-- 
Mike Lawrence
Graduate Student
Department of Psychology
Dalhousie University

Looking to arrange a meeting? Check my public calendar:
http://tinyurl.com/mikes-public-calendar

~ Certainty is folly... I think. ~

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Cross-platforms solution to export R graphs

2009-04-12 Thread cls59


Philippe Grosjean wrote:
 
 
 ..I would be happy to receive your comments and suggestions to improve 
 this document.
 All the best,
 
 PhG
 
 

LaTeX is my personal tool of choice and the vector format I use most often
is PGF (Portable Graphics Format, http://sourceforge.net/projects/pgf/),
implemented via a LaTeX package written by Till Tantau. There is a very
nice converter called eps2pgf (http://sourceforge.net/projects/eps2pgf/),
written in Java, which does an excellent job of translating R's EPS
output. The primary advantage of PGF is that figure text gets typeset by the
LaTeX engine instead of by R, which unifies font choices and gives the final
document a very consistent, professional look. LaTeX commands, such as
mathematical typesetting, can also be embedded in the figure.
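
For instance, producing an EPS from R for eps2pgf to convert might look like this (the file names and the exact eps2pgf invocation are illustrative assumptions):

postscript("figure.eps", width = 5, height = 4,
           horizontal = FALSE, onefile = FALSE, paper = "special")
plot(rnorm(100), main = "Example figure")
dev.off()
# then, outside R (hypothetical path to the jar):
#   java -jar eps2pgf.jar figure.eps -o figure.pgf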

Along with a friend of mine, I have been working on an R package that extends
Sweave to include PGF graphics output. Currently pgfSweave
(http://www.rforge.net/pgfSweave) uses eps2pgf to perform the
conversions, but a native R graphics device is planned to help speed up the
process. The package is currently very much a beta; it has been developed
and tested on Mac OS X, where it runs quite well. Limited testing has been
conducted on Linux and Windows and we have produced documents on those
systems. Heavy development is expected to take place this summer.

PGF is a human-readable format and can be easily annotated by adding
additional commands to the resulting file. However, editing the original
content is possible but difficult due to the lack of structure in the
eps2pgf output. The LaTeX environment can even be switched from pgfpicture
to tikzpicture, which allows the use of TikZ, a high-level graphics language
built on top of PGF. TikZ/PGF is easy to learn and the manual is one of the
best pieces of software documentation I have seen.

Since I came across PGF a couple of years ago, Adobe Illustrator has
languished unused on my hard drive except for the occasional application of
Live Trace. An excellent showcase of PGF/TikZ examples, along with additional
tools, is hosted at Texample (http://www.texample.net).

The end result of the PGF/TikZ build process is a PDF, which makes it very
portable.

All the best!

-Charlie

-
Charlie Sharpsteen
Undergraduate
Environmental Resources Engineering
Humboldt State University
-- 
View this message in context: 
http://www.nabble.com/Cross-platforms-solution-to-export-R-graphs-tp22970668p23016682.html
Sent from the R help mailing list archive at Nabble.com.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Physical Units in Calculations

2009-04-12 Thread Robert A LaBudde

At 08:00 PM 4/12/2009, Tom La Bone wrote:


Back in 2005 I had been doing most of my work in Mathcad for over 10 years.
For a number of reasons I decided to switch over to R. After much effort and
help from you folks I am finally thinking in R rather than thinking in
Mathcad and trying to translate to R. Anyway, the only task I still use
Mathcad for is calculations that involve physical quantities and units. For
example, in Mathcad I can add 1 kilometer to 1 mile and get the right answer
in the units of length I choose. Likewise, if I try to add 1 kilometer to 1
kilogram I get properly chastised. Is there a way in R to assign quantities
and units to numbers and have R keep track of them like Mathcad does?


Yes, but it's a lot of work: Create objects with units as an attribute.
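
A minimal sketch of the attribute idea, using an S3 class (the class name, unit labels and behaviour are made up for illustration; no conversion between units is attempted):

quantity <- function(value, unit)
  structure(value, unit = unit, class = "quantity")

"+.quantity" <- function(e1, e2) {
  if (!identical(attr(e1, "unit"), attr(e2, "unit")))
    stop("cannot add quantities with different units")
  quantity(unclass(e1) + unclass(e2), attr(e1, "unit"))
}

print.quantity <- function(x, ...) cat(unclass(x), attr(x, "unit"), "\n")

quantity(1, "km") + quantity(2, "km")     # 3 km
## quantity(1, "km") + quantity(1, "kg")  # stops with an error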

Perhaps someone else can tell you if such a set of definitions already exists.


Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: r...@lcfltd.com
Least Cost Formulations, Ltd.URL: http://lcfltd.com/
824 Timberlake Drive Tel: 757-467-0954
Virginia Beach, VA 23464-3239Fax: 757-467-2947

Vere scire est per causas scire

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] value of strptime in R 1.8.0

2009-04-12 Thread Mitra Jazayeri
Dear R friends,
I have a data frame, I need to get a time interval between the two columns.
The times are recorded in 24 hour clock. My data frame is called
version.one.
my commands are:
t.s.one <- paste(version.one[,9])
t.s.two <- paste(version.one[,61])
x <- strptime(t.s.one, format = "%H:%M")
x
y <- strptime(t.s.two, format = "%H:%M")
y
z <- difftime(y, x, units = "mins")
z

But now in my z object i have negative numbers. Would you please let me know
why? And how can I get rid of this problem?
Thanks
Mitra

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Fwd: modelling a nested student-school-district model in lmer

2009-04-12 Thread lei chen
Dear all,
 I have a dataset with students nested within schools, and schools nested
 within districts; the data are explicitly nested as in previous examples.
 In my case I am not interested in the variance between schools or districts;
 I just want to assess the effect of different teaching methods.
 Traditionally, the model can be specified in lmer like this:
 lmer(score~method+(1|district/school),data)
 note: method is a factor (m1...m10)
 In my study I want to know the variance between methods, and I also use
 some covariates at the method level to explain the variance between methods.
 I constructed the unconditional and conditional models like these:
 unconditional model: lmer(score~1+(1|method)+(1/district/school),data)
 conditional model:   lmer(score~1+CK+(1|method)+(1/district/school),data)
 where CK denotes the evaluation score for each method.

What I want to confirm is whether the specification of all these models is
 reasonable and correct in lmer.
Any help will be appreciated.
 yours,
 Lei Chen

[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Physical Units in Calculations

2009-04-12 Thread Bill.Venables
Is there anything available off the shelf in R for this?  I don't think so.

It is, however, an interesting problem and there are the tools there to handle 
it.  Basically you need to create a class for each kind of measure you want to 
handle (length, area, volume, weight, and so on) and then overload the 
arithmetic operators so that they can handle arguments of the appropriate 
class.  This may be a case where S4 classes do have a distinct advantage, as 
they can more easily dispatch methods on combinations of classes rather than 
the class of a single argument, as in the case of S3.
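
A small illustration of that S4 idea (the classes and methods below are invented purely for illustration):

setClass("Length", representation(metres = "numeric"))
setClass("Mass",   representation(kilograms = "numeric"))

setMethod("+", signature("Length", "Length"), function(e1, e2)
  new("Length", metres = e1@metres + e2@metres))
setMethod("+", signature("Length", "Mass"), function(e1, e2)
  stop("cannot add a length to a mass"))

new("Length", metres = 1000) + new("Length", metres = 1609)   # 1 km + 1 mile
## new("Length", metres = 1000) + new("Mass", kilograms = 1)  # error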

It looks like an interesting problem, in fact, but with the potential to get 
well out of hand if you set your sights too general.  Limited versions would be 
simple enough, though.  Something like it exists for times in the POSIXt and 
Date classes, of course, but with many limitations.  For example you cannot 
divide one time difference by another to get a pure number, but you can divide 
one by a pure number to get another time difference.  This could be remedied, 
of course.

I was about to say that this is another case where the USA is mainly to blame, 
*again*, because of its dogged clinging to an outdated system of weights and 
measures (not to mention the perverse practice of putting the month *first* in 
their date format), but it's not entirely true.  The UK uses metres for most 
lengths but miles for road distances - the worst of all worlds.  They even 
measure fuel performance in litres per 100 *miles*, if you can believe it.


Bill Venables
http://www.cmis.csiro.au/bill.venables/ 


-Original Message-
From: r-help-boun...@r-project.org [mailto:r-help-boun...@r-project.org] On 
Behalf Of Robert A LaBudde
Sent: Monday, 13 April 2009 12:20 PM
To: Tom La Bone
Cc: r-help@r-project.org
Subject: Re: [R] Physical Units in Calculations

At 08:00 PM 4/12/2009, Tom La Bone wrote:

Back in 2005 I had been doing most of my work in Mathcad for over 10 years.
For a number of reasons I decided to switch over to R. After much effort and
help from you folks I am finally thinking in R rather than thinking in
Mathcad and trying to translate to R. Anyway, the only task I still use
Mathcad for is calculations that involve physical quantities and units. For
example, in Mathcad I can add 1 kilometer to 1 mile and get the right answer
in the units of length I choose. Likewise, if I try to add 1 kilometer to 1
kilogram I get properly chastised. Is there a way in R to assign quantities
and units to numbers and have R keep track of them like Mathcad does?

Yes, but it's a lot of work: Create objects with units as an attribute.

Perhaps someone else can tell you if such a set of definitions already exists.


Robert A. LaBudde, PhD, PAS, Dpl. ACAFS  e-mail: r...@lcfltd.com
Least Cost Formulations, Ltd.URL: http://lcfltd.com/
824 Timberlake Drive Tel: 757-467-0954
Virginia Beach, VA 23464-3239Fax: 757-467-2947

Vere scire est per causas scire

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Running random forest using different training and testing schemes

2009-04-12 Thread Chrysanthi A.
Thanks a lot for your help.
But, using this function, how can I specify the size of the training set? And
how do I identify my data? There is not any example and I am a bit
confused.

Many thanks,

Chrysanthi


2009/4/12 Max Kuhn mxk...@gmail.com

 There is also the train function in the caret package. The trainControl
 function can be used to try different resampling schemes. There is also a
 package vignette with details.

 Max
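
As a rough sketch of how those pieces might fit together (the functions are from the caret and randomForest packages; the data frame mydata and factor outcome y are hypothetical):

library(caret)
## 70/30 split
in.train <- createDataPartition(mydata$y, p = 0.7, list = FALSE)
training <- mydata[in.train, ]
testing  <- mydata[-in.train, ]

## random forest tuned with 10-fold cross-validation on the training set
ctrl <- trainControl(method = "cv", number = 10)
fit  <- train(y ~ ., data = training, method = "rf", trControl = ctrl)

confusionMatrix(predict(fit, testing), testing$y)   # accuracy on the held-out 30%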



 On Apr 12, 2009, at 12:26 PM, Chrysanthi A. chrys...@gmail.com wrote:

  Hi,

 I would like to run random Forest classification algorithm and check the
 accuracy of the prediction according to different training and testing
 schemes. For example, extracting 70% of the samples for training and the
 rest for testing, or using 10-fold cross validation scheme.
 How can I do that? Is there a function?

 Thanks a lot,

 Chrysanthi.

   [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] Re : Running random forest using different training and testing schemes

2009-04-12 Thread Chrysanthi A.
Hi Pierre,

Thanks a lot for your help.
So, using that script, I just separate my data into two parts, right? To
use 70% of the data as the training set and the rest as the test set, should I
multiply n by 0.70 (in this case)?

Many thanks,

Chrysanthi
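
For what it is worth, a 70/30 split can also be drawn directly with sample(); a minimal sketch, assuming a data frame mydata whose outcome column y is a factor:

n         <- nrow(mydata)
train.idx <- sample(seq_len(n), size = round(0.7 * n))   # 70% of rows for training
train.set <- mydata[train.idx, ]
test.set  <- mydata[-train.idx, ]

library(randomForest)
fit  <- randomForest(y ~ ., data = train.set)
pred <- predict(fit, newdata = test.set)
mean(pred == test.set$y)   # proportion correctly classified on the 30% test set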



2009/4/12 Pierre Moffard pier.m...@yahoo.fr

 Hi Chysanthi,

 check out the randomForest package, with the function randomForest. It has
 a CV option. Sorry for not providing you with a lengthier response at the
 moment but I'm rather busy on a project. Let me know if you need more help.

 Also, to split your data into two parts- the training and the test set you
 can do (n the number of data points):
 n <- nrow(data)
 indices <- sample(c(TRUE, FALSE), n, replace = TRUE)   # random ~50/50 split
 training_indices <- (1:n)[indices]
 test_indices <- (1:n)[!indices]
 Then, data[training_indices,] is the training set and data[test_indices,] is the test set.

 Best,
 Pierre
 --
 *De :* Chrysanthi A. chrys...@gmail.com
 *À :* r-h...@r-project..org
 *Envoyé le :* Dimanche, 12 Avril 2009, 17h26mn 59s
 *Objet :* [R] Running random forest using different training and testing
 schemes

 Hi,

 I would like to run random Forest classification algorithm and check the
 accuracy of the prediction according to different training and testing
 schemes. For example, extracting 70% of the samples for training and the
 rest for testing, or using 10-fold cross validation scheme.
 How can I do that? Is there a function?

 Thanks a lot,

 Chrysanthi.

 [[alternative HTML version deleted]]

 __
 R-help@r-project.org mailing list
 https://stat.ethz.ch/mailman/listinfo/r-help
 PLEASE do read the posting guide
 http://www.R-project.org/posting-guide.html
 and provide commented, minimal, self-contained, reproducible code.



[[alternative HTML version deleted]]

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


Re: [R] value of strptime in R 1.8.0

2009-04-12 Thread David Winsemius

Mitra;

n Apr 12, 2009, at 8:09 PM, Mitra Jazayeri wrote:


Dear R friends,
I have a data frame, I need to get a time interval between the two  
columns.

The times are recorded in 24 hour clock. My data frame is called
version.one.
my commands are:
t.s.one <- paste(version.one[,9])
t.s.two <- paste(version.one[,61])
x <- strptime(t.s.one, format = "%H:%M")
x
y <- strptime(t.s.two, format = "%H:%M")
y
z <- difftime(y, x, units = "mins")
z

But now in my z object i have negative numbers. Would you please let  
me know

why?


Given that neither version.one[,9] nor version.one[,61] is available to
us, how can readers of this be expected to answer other than with the
trivial hypothesis that t.s.one is after t.s.two?



And how can I get rid of this problem?


Change the order of the calculation?
(You cannot use abs, since that function is not defined for difftime  
objects.)
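
If the negative differences arise because the end time falls after midnight (both columns are parsed with only %H:%M, so they get the same date), one possible fix is to wrap such intervals around 24 hours; a sketch, assuming that is indeed the cause:

z <- as.numeric(difftime(y, x, units = "mins"))
z[z < 0] <- z[z < 0] + 24 * 60   # assumes negative values mean the interval crossed midnight
z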


--
David Winsemius, MD
Heritage Laboratories
West Hartford, CT

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.


[R] Using trace

2009-04-12 Thread Stavros Macrakis
I would like to trace functions, displaying their arguments and return
value, but I haven't been able to figure out how to do this with the
'trace' function.

After some thrashing, I got as far as this:

fact <- function(x) if (x < 1) 1 else x*fact(x-1)
tracefnc <- function() dput(as.list(parent.frame()),   # parent.frame() holds arg list
                            control = NULL)
trace(fact, tracer = tracefnc, print = FALSE)

but I couldn't figure out how to access the return value of the
function in the 'exit' parameter.  The above also doesn't work for
... arguments.  (More subtly, it forces the evaluation of promises
even if they are otherwise unused -- but that is, I suppose, a weird
and obscure case.)

Surely someone has solved this already?

What I'm looking for is something very simple, along the lines of
old-fashioned Lisp trace:

> (defun fact (i) (if (< i 1) 1 (* i (fact (+ i -1)))))
FACT
> (trace fact)
(FACT)
> (fact 3)
  1> (FACT 3)
    2> (FACT 2)
      3> (FACT 1)
        4> (FACT 0)
        <4 (FACT 1)
      <3 (FACT 1)
    <2 (FACT 2)
  <1 (FACT 6)
6

Can someone help? Thanks,

 -s

__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.